Save Returns New Object - version 2.1 change?

Old forum URL: forums.lhotka.net/forums/t/535.aspx


msk posted on Tuesday, July 04, 2006

In the current version of the framework, BusinessBase.Save returns a new object.  All controls must rebind because they are bound to the old object (prior to the save - and changes may have occurred during the save).  Recently I was looking at the code behind the N-level undo functionality (state stacking) - it basically sucks the data out of an object and puts it back in later.  I can't see why this wouldn't work within BusinessBase.Save.  The Save method could use the 'undo code' to overwrite the object's state with the state of the new object returned from DataPortal.Update.

Currently UI code works like this:

'Unbind...

obj = obj.Save()

'Rebind...

The client code would be simplified to:

obj.Save()

(OK that only differs by return type - but you get the idea)
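
A rough sketch of how Save could do that in place (the helper below is hypothetical, not CSLA code): reflect over the fields of the object returned from DataPortal.Update and overwrite this instance's state, much as UndoableBase snapshots and restores state for n-level undo. A real version would need to walk child objects recursively; this handles only the flat case and assumes target and source are the same type.

using System;
using System.Reflection;

public static class StateCopier
{
    public static void CopyState(object target, object source)
    {
        // walk the inheritance chain, copying every declared field
        for (Type t = target.GetType(); t != null; t = t.BaseType)
        {
            FieldInfo[] fields = t.GetFields(
                BindingFlags.Instance | BindingFlags.Public |
                BindingFlags.NonPublic | BindingFlags.DeclaredOnly);
            foreach (FieldInfo field in fields)
                field.SetValue(target, field.GetValue(source));
        }
    }
}

// inside a hypothetical in-place BusinessBase.Save():
//   object result = DataPortal.Update(this);
//   StateCopier.CopyState(this, result);  // callers keep their reference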

There may be something I have missed, but I see no reason why this shouldn't be in the framework.  The only downside is performance, but just like n-level undo, the choice to use it could be given to the business or UI developer.

Does saving the Clone to effect an atomic Save operation fall into the same category as this?

Martin

karl replied on Wednesday, July 05, 2006

What???? I've never heard anything so ridiculous. I'm sure that if what you say was possible, Rocky would have implemented it in version 2.0 of the framework!

JonM replied on Wednesday, July 05, 2006

I'm not sure about CSLA 2.0, but it has always worked this way in the 1.x versions (at least in data portal mode).

msk replied on Thursday, July 06, 2006

Jon, Not sure I made myself clear.  I am saying that Save returning a new object is bad.  You wouldn't expect to save a Word document and then find out that the one you have doesn't work right and you have to open another, so why accept that with business objects?

Karl, You're an idiot but thanks for the reply, I thought no one was going to post here for a while.  I can only assume that everyone agrees and Rocky is busy adding the functionality to the framework.  Perhaps the reflection optimizations mentioned on another thread will lessen the performance overhead of such a change.

karl replied on Thursday, July 06, 2006

Martin, I don't think your criticism of Rocky is justified. I am sure that Rocky has thought long and hard about the design of the framework. If Rocky has decided that

 obj = obj.Save()

is the way that the framework should work, then I am with him. Are you suggesting that you are in some way superior to Rocky? I bet you are one of those people who spend all of their time criticising other people and not getting on with the real project work that desperately needs completing.

 

msk replied on Thursday, July 06, 2006

Karl, You have certainly livened up this thread with your intelligent comments, thank you.  It's OK though, I like Rocky too, you don't need to get upset.  And yes, I would much rather have a job 'criticising' a framework than doing real work :)  I was hoping to get some discussion of whether my thoughts on the change were a good idea.  Having to re-bind to the new object is something that has occasionally caught me out - on the rare occasion I do real work.  If my change is workable it would mean less code in the UI to go wrong.  I'm probably too lazy to do the work myself, plus somebody else would likely make a better job of it.  That's why I suggested it here.  If CslaContrib gets going I'll consider adding 'SuperiorBusinessBase' to the project ;)

 

Allann replied on Thursday, July 06, 2006

Hi all,

Save returning a new object is not the only reason for suspending binding.

Extract from Rocky's book: "if the data portal is configured to run locally in the client process. In that case, the object is not serialized to a server, but is rather updated in place on the client. It is possible that the business object could raise PropertyChanged or ListChanged events while it is being updated, causing the UI to refresh during the data update process. Not only does that incur performance costs, but sometimes code in the UI might respond to those events in ways that cause bugs."

I suggest purchasing his book (you can get a beta PDF version from Apress) and seeing all the reasons for the design.  I agree with the approach taken even though it may not be obvious initially.  Think about what happens to an object that is sent via MSMQ (which is possible); it could take days for a response to come back.

Regards

Allan

P.S.  Karl, questioning the way things are done is the best way to advance a design and further improve ANY application (see extreme programming articles).  If Rocky had given us the PERFECT solution he wouldn't be working on v2.1.

msk replied on Thursday, July 06, 2006

Hi Allan,

OK, I should have been more verbose in the example.  I said "rebind", not "stop binding".  I accept it is desirable to stop databinding events, but the key point I was trying to make is that CSLA is designed to allow (amongst other things) the production of object-oriented, location-transparent applications.  I believe the Save method is lacking in the object-oriented area.  Save returning a new object (in some circumstances) was introduced when CSLA moved to .NET.  Rocky describes the reason why on page 79:

Due to the way .NET passes objects by value, it may introduce a bit of a wrinkle into the overall process. When passing the object to be saved over to the server, .NET makes a copy of the object from the client onto the server, which is exactly what is desired. However, after the update is complete, the object must be returned to the client. When an object is returned from the server to the client, a new copy of the object is made on the client, which isn’t really the desired behavior.

The workaround is described a little after that:

The UI has a reference to the business object and calls its Save() method. This causes the business object to ask the data portal to save the object. The result is that a copy of the business object is made on the server, where it can save itself to the database. So far, this is pretty straightforward. However, once this part is done, the updated business object is returned to the client, and the UI must update its references to use the newly updated object instead, as shown in Figure 2-19. This is fine, too—but it’s important to keep in mind that you can’t continue to use the old business object; you must update all object references to use the newly updated object.

I have no doubt that Rocky has considered a fix similar to the one I suggest, but chosen not to include it in the framework.  Possible reasons are:

1) Performance overhead

2) Code complexity - books can only be so big

3) My suggestion is flawed

or 4) The UI code workaround is a reasonable compromise

The book suggests the following sequence of events when a UI saves a business object:

1. Turn off events from the BindingSource controls.

2. Clone the business object.

3. Save the clone of the business object.

4. Rebind the BindingSource controls to the new object returned from Save(), if necessary.

5. Turn on events from the BindingSource controls.
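
In code, those five steps look something like this (a minimal WinForms sketch; the control and field names customerBindingSource and _customer are hypothetical, not taken from the book):

private void SaveButton_Click(object sender, EventArgs e)
{
    // 1. turn off binding events while the object graph is swapped
    this.customerBindingSource.RaiseListChangedEvents = false;

    // 2. clone, so a failed save cannot leave the bound object
    //    in a half-updated state
    Customer temp = _customer.Clone();
    try
    {
        // 3. save the clone; Save() returns a NEW object
        _customer = temp.Save();
    }
    finally
    {
        // 4. rebind to the object returned from Save()
        this.customerBindingSource.DataSource = _customer;

        // 5. turn binding events back on and refresh
        this.customerBindingSource.RaiseListChangedEvents = true;
        this.customerBindingSource.ResetBindings(false);
    }
}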

I am suggesting that steps 2, 3 and 4 are workarounds for .NET serialization and the fact that the call to DataPortal.Update is not an atomic operation (it may fail part way through and leave the object in a different state to the one it was in before the call).  In my original post, I was trying to suggest that these UI workarounds could be avoided by modifying the BusinessBase.Save function.  Steps 2, 3 and 4 would then collapse into a single step: obj.Save().

Currently, if your UI code fails to implement the steps as described in the book, it may not work correctly if reconfigured to run data access remotely, and may not be able to retry after a failed call to the Save method.

Allan, I'm not sure your statement about MSMQ is relevant.  Agreed, you could implement an MSMQ transport for CSLA (and I am sure people have - the original CSLA book had an MSMQ implementation).  If you wanted to convey server changes to the business object back to the client in such an implementation, then the Save method, or rather the underlying call to DataPortal.Update, would have to block until an MSMQ message conveys the object in its new state back from the data portal host.  After all, CSLA is about mobile objects, not sending one object one way and getting a different one back.

DansDreams replied on Friday, July 07, 2006

msk, your thoughts are not ridiculous.  That kind of model has in fact been given serious consideration by veteran CSLA users and Rocky himself.

RockfordLhotka replied on Friday, July 07, 2006

Yes, absolutely! Returning a new object graph is not ideal.

However, the solution is non-trivial. You are right that n-level undo contains the core of what is needed, but in the general case it is entirely insufficient. Remember that CSLA .NET is designed to enable the concept of mobile objects - and a key part of that is allowing arbitrarily complex object graphs to move across the network.

The code in UndoableBase is limited in that it only deals with CSLA-derived objects - all other objects are blindly serialized/deserialized using the BinaryFormatter. Which means that a CancelEdit can, and does, result in new instances of any non-CSLA-derived objects in your object graph. Not that this happens all that often, but it is a very real effect.
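
To illustrate that copy effect, here is a minimal standalone sketch (not CSLA code): round-tripping anything through the BinaryFormatter always yields a new instance.

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
class Note
{
    public string Text;
}

static class Program
{
    static void Main()
    {
        Note original = new Note();
        original.Text = "hello";

        // serialize and deserialize: there is no in-place option,
        // so the result is always a brand new object
        BinaryFormatter formatter = new BinaryFormatter();
        using (MemoryStream buffer = new MemoryStream())
        {
            formatter.Serialize(buffer, original);
            buffer.Position = 0;
            Note copy = (Note)formatter.Deserialize(buffer);
            Console.WriteLine(ReferenceEquals(original, copy)); // False
        }
    }
}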

To make this really work, I'd need to write a complete replacement for the BinaryFormatter that did in-place deserialization. Which, btw, is not possible for arbitrary types, but would be possible for CSLA-derived types - much like n-level undo.

And I went down this road at one point - primarily in an effort to support the Compact Framework, since the CF doesn't have a BinaryFormatter equivalent. The result is a somewhat-working-but-mostly-useless XmlFormatter (http://www.lhotka.net/Articles.aspx?id=5446d9e2-4b08-4bd6-9c29-59b166912259).

It turns out that writing a serializer is pretty easy. Writing a deserializer is really, really, really, ..., really hard. This is because you get into some really nasty circular reference loops that must be resolved by scanning through objects and doing a "fix-up" process. I was not able to fully resolve some of those issues in my serializer, which is why it doesn't actually work. Very recently I was in Oslo at a conference and found the solution - which is that there's a method in the .NET framework that allows you to create instances of objects without running their constructors.

Using that technique, you can create object "instances" and then come back later and run a selected constructor - which is ultimately what the BinaryFormatter is doing to solve this issue.
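
The method being referred to is presumably FormatterServices.GetUninitializedObject (an assumption - it is not named in the post, but it is the mechanism the BinaryFormatter uses). A minimal sketch:

using System;
using System.Runtime.Serialization;

class Widget
{
    public int Size;
    public Widget() { Size = 42; }
}

static class Program
{
    static void Main()
    {
        // allocates the instance but runs NO constructor;
        // every field arrives zeroed
        Widget w = (Widget)FormatterServices.GetUninitializedObject(typeof(Widget));
        Console.WriteLine(w.Size); // prints 0, not 42
    }
}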

Of course now WCF is coming, along with its own new serializer - the NetDataContractSerializer - that works like the BinaryFormatter overall. Which means that WCF doesn't fix the problem, but does complicate things, because it has new rules and attributes to control serialization.

And that makes me very glad I didn't go down the road of writing (and thus maintaining/supporting) my own serializer. It will be hard enough to alter CSLA .NET to fully enable the use of NetDataContractSerializer...

So, to make a long story short, the idea of doing in-place Save() is great. Actually solving the problem, while supporting arbitrary object graphs, is incredibly difficult. The only realistic solution would be to only support CSLA-derived objects in the entire object graph, and that is a restriction I've been unwilling to impose.

SonOfPirate replied on Friday, July 07, 2006

I don't know that the solution is limited to serialization.  But, I also admit that our "fix" for this issue is just as undesirable.  I agree that it would be ideal if a "perfect" solution came along, but we are all bound by what is available and makes sense for us to implement.

Our work-around for what we termed the "object reset" problem (resetting all of the references after an insert or update) is far from perfect but is serving our purposes.  It's up to you whether it helps.

Our biggest concern with the issue stemmed from widely-used objects.  If we were updating an object that was referenced by only one other object - a parent, a binding control, etc. - that would not have been that big of a deal.  But when you start dealing with complex object models with many inter-related objects, keeping track of who has what reference to which object, so that we could re-reference them all to the new object, had us sitting back in our chairs with our hands in our hair (isn't that the job of the runtime?!).  Anyway, I will spare you all the torture of trying to provide an example, as I think it would only distract from the topic by being torn apart one way or another rather than focusing on the point of the posting.

The first step in working around this was to recognize WHY we needed the updated copy of the object anyway.  Why is the Insert or Update operation returning anything?  The nature of these operations is to persist changes made in our application to our data store; does it really require a two-way conversation?  In many cases it is because the object's data has been manipulated somehow during the operation and we need to "restore" our in-memory copy so that it matches the data in our data store.  Weeellll, if you can eliminate this aspect, then you eliminate the need to return an object.  Using a GUID for your table's primary key, generated by your object when instantiated, means that you aren't relying on SQL's @@IDENTITY value, for instance.  And not using timestamp columns eliminates the need to retrieve db-created datetime values.
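
For example (a minimal sketch; the field name is hypothetical):

// key generated client-side at instantiation, so the INSERT never
// needs to send @@IDENTITY back from the server
private Guid _id = Guid.NewGuid();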

Again, these measures may or may not work for you and may not work in all cases.  But it is a start.

To address the fact that we can't guarantee there will never be some value in the returned data that needs to be "restored" in our object, we took a cue from how MS implemented some similar features in the web controls area.  Specifically, the MergeWith(...) concept (see the Style object for an example).  This method is expected to accept an object of the same type and merge its properties with the current object's.  Applying this concept to our data access code, we have implemented a simple, protected, virtual MergeWith method in our objects that accepts the object returned from the data portal and passes it through the inheritance chain (through overrides), allowing any properties that have possibly been affected by the Insert or Update to update themselves from the returned object.  The end result is that the original object is now current, reflects the data in the data store, and all references to it are still valid.  In addition, by using the property's accessors to reset the value, the same events are raised for changes as if the value was changed during other operations, allowing the bound controls to update as desired.  The down side is that we've added processing time "copying" property values from the returned object to the current object, especially if there are many to update.
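
A minimal sketch of that pattern (all names are hypothetical; the actual implementation described above is not shown here):

using System;

public class Customer
{
    private int _id;
    private DateTime _lastUpdated;

    public int Id
    {
        get { return _id; }
        set { _id = value; } // a real BO would raise PropertyChanged here
    }

    public DateTime LastUpdated
    {
        get { return _lastUpdated; }
        set { _lastUpdated = value; }
    }

    public void Save()
    {
        // the data portal returns a COPY; merge it back into this
        // instance so every existing reference stays valid
        Customer result = Update();
        MergeWith(result);
    }

    protected virtual void MergeWith(Customer result)
    {
        // copy back only the server-affected values, through the
        // accessors so change events fire for bound controls
        Id = result.Id;
        LastUpdated = result.LastUpdated;
    }

    private Customer Update()
    {
        // stand-in for the DataPortal.Update round trip
        Customer copy = new Customer();
        copy._id = _id == 0 ? 1 : _id;     // e.g. server-assigned key
        copy._lastUpdated = DateTime.Now;  // e.g. timestamp column
        return copy;
    }
}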

Again, just a glimpse at how we've addressed the same concerns, which I believe was the intent of the original post.  Not saying anyone else is right or wrong, just conveying how we handled this.  Hope it helps in some way.

 

btw - I did read the book.

RockfordLhotka replied on Friday, July 07, 2006

Nice info Pirate - thanks!
 
Rocky

ScottBessler replied on Tuesday, July 11, 2006

RockfordLhotka:

"there's a method in the .NET framework that allows you to create instances of objects without running their constructors."

Can you elaborate on this?  It sounds useful for some XML de/serialization stuff I was writing.

Thanks,
Scott

vargasbo replied on Tuesday, July 11, 2006

I was thinking about what everyone wrote, and it seems like the simple thing to do is to return the criteria you used to identify your object. In most cases, you have one of three scenarios:

1) you use a GUID
2) you use an identity ID
3) you use a composite key

1) When you save in the first case, all you really need to know is that the object saved, so there's no point getting data back from the database, since it’s just a reflection of what you already have.

2) If you’re using an identity ID, then just return the value…
Something like obj.ID = ExecuteNonQuery (save values)

3) Assuming you’re using a composite key, your parent object computes the next ID for this child object and sets the parent object ID:

 Obj.2ID = ExecuteNonQuery (save values);

Obj.PID = _ParentID

Both of which would be part of the criteria. I’m sure most senior engineers have thought of this, but I thought I would throw it out there.
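
A hedged sketch of case 2 (table and column names are hypothetical; note that ExecuteNonQuery returns a row count, so the identity value has to come back via ExecuteScalar or an output parameter):

using System;
using System.Data.SqlClient;

// runs inside the server-side DataPortal_Update; SCOPE_IDENTITY()
// returns the key generated by this INSERT
public static int InsertOrder(SqlConnection connection, Guid customerId)
{
    using (SqlCommand cm = new SqlCommand(
        "INSERT INTO Orders (CustomerId) VALUES (@customerId); " +
        "SELECT SCOPE_IDENTITY();", connection))
    {
        cm.Parameters.AddWithValue("@customerId", customerId);
        return Convert.ToInt32(cm.ExecuteScalar());
    }
}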

SonOfPirate replied on Tuesday, July 11, 2006

You are correct except that the statement:

obj.ID = ExecuteNonQuery(...)

is performed on the server.  So, the returned ID value is present in the COPY of your object on the server.  It is this COPY that is returned by the data portal in the ResultObject property of the DataPortalResult.  When the data portal method that initiated all of this returns, it brings this COPY along with it.  This discussion has to do with getting the information, such as the updated ID value you mentioned, from this copy into the original object on the client.

Obviously, in an example like this it is not too hard to simply copy the ID value from the returned object into the original.  But if the returned object had more properties that needed to be updated, this could get hairy.  We've implemented this type of fix, except that we delegate to a virtual MergeWith method to handle the copying.

 

vargasbo replied on Tuesday, July 11, 2006

OK, I'll agree that if you have to update tons of properties the copying would be painful, but if you're updating 25, I'm assuming you already have them in your object, since you just updated them to the database. All we have to do is update the timestamp to make sure concurrency is maintained.

Then again, I've only been doing this for 10 years and haven't seen every type of framework out there, just my thoughts :-)

RockfordLhotka replied on Tuesday, July 11, 2006

As I've said before - the issue isn't that hard if you restrict the object graphs to only contain CSLA-derived objects. But if you allow for arbitrary objects the issue becomes very complex, because you lose a great deal of control.

And yet you might want an arbitrary object. Perhaps you need to use something interesting like a hashtable, or queue or stack? Or you want to use someone else's serializable object, so you simply reference it from a BusinessBase-derived object?

The point is that it is pretty easy to envision object graphs that aren't pure CSLA - and then you've got serious complexity.

For pure CSLA object graphs you could envision a simplified serializer that runs on the client, creating a "datagram" to send to the server for persistence. It would only send the changed data. Of course at this point you've abandoned mobile objects! You couldn't rehydrate the object graph on the server, because you wouldn't have all the data.

Conversely, this approach would allow for a "resultgram" containing only the data changed on the server. The client-side persistence engine would parse that data and do appropriate updates to the client-side object graph. Again, there's no concept of mobile objects here!
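
Purely as an illustration, such a datagram might look something like this (entirely hypothetical; nothing like it exists in CSLA):

using System;
using System.Collections.Generic;

[Serializable]
public class DataGram
{
    // which business type the server-side data service should persist
    public string BusinessType;

    // only the changed values travel: field name -> new value
    public Dictionary<string, object> ChangedValues =
        new Dictionary<string, object>();
}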

I guess what I'm saying is this: the data portal is designed to enable one concept - mobile objects. It is a powerful and useful concept in many cases.

You could easily envision another type of data portal, designed to enable client/server persistence by talking to a server-hosted set of data services. That is also a powerful and useful concept in many cases. But it IS NOT what the data portal is designed to do.

As I said in an earlier post, making the mobile object concept work with in-place objects would require an in-place deserialization process. There isn't one in .NET, and there's not one coming in WCF (thus none for the foreseeable future).

However, CSLA could totally work with a replacement "data portal" concept that didn't enable mobile objects. You could write such a thing as I just described - with datagram/resultgram semantics, and which would communicate with a set of data services (not business objects!) on the server. If I were to do such a thing, it would still have a single entry point, and a channel adapter and a message router - in other words, it would look much like the existing data portal. But rather than moving objects back and forth, it would move these datagrams.

This isn't something I have a great deal of interest in doing, so I wouldn't expect to see it in a near-future CSLA. Never say never, but it just isn't nearly as interesting to me as going the other direction and creating a true "object portal" that enables mobile objects in a far more flexible manner than the data portal does today. (Not that I'm convinced most people need an object portal - just that it is what I find truly interesting and fun ;))

Igor replied on Sunday, July 09, 2006

Hi everyone,

 

I’ve got an idea of how to handle replacing references in business objects instead of clients. I would try to use the classic Proxy pattern: when a client requests a reference to a BO, it is actually given a reference to a proxy object (the client is not aware that it has got just a proxy, because the proxy has a proper business-meaningful name). The proxy class contains a private instance of another (real) class (whose definition is nested inside the proxy class). The outer (proxy) and inner (real) classes have the same interfaces, but different implementations: public members of the proxy just pass client calls through to the real object (which does the real work). In case of the Update call, the private instance of the real class gets replaced inside the proxy, but the clients continue referencing the same proxy object.
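
A minimal sketch of that pattern (class names are hypothetical, and as noted below it has not been tested against real CSLA objects):

public class CustomerProxy
{
    // the "real" class, nested inside the proxy as described above
    private class RealCustomer
    {
        public string Name;

        public RealCustomer Save()
        {
            // mimics the data portal returning a NEW copy
            RealCustomer copy = new RealCustomer();
            copy.Name = Name;
            return copy;
        }
    }

    private RealCustomer _real = new RealCustomer();

    public string Name
    {
        get { return _real.Name; }
        set { _real.Name = value; }
    }

    public void Save()
    {
        // swap the inner reference; the client's reference to the
        // proxy itself never changes
        _real = _real.Save();
    }
}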

 

The design works in simple (not comprehensive) tests. But I have not tested it with CSLA objects.

 

There is an obvious maintenance issue, as the proxy and real objects have to keep the same interface; personally, I am prepared to use some sort of code generation to mitigate it. Depending on other elements of your design, resetting references might be a major pain.

 

Any feedback will be appreciated.

 

Igor Leonov

 

msk replied on Tuesday, July 11, 2006

Thanks guys, it's interesting to see how other people have handled this issue.  I was still thinking about how to fix the symptom rather than tackling the root cause - serialization.  Rocky, writing an in-place deserializer is certainly the ideal solution.  I once reverse engineered the VB5/6 PropertyBag to create a PropertyBag decoder as a debug tool for old CSLA, so I can imagine how difficult that might be.  I still have to remind myself from time to time that the DataPortal works with any serializable object.  My initial and probably flawed suggestion was to put code in BusinessBase.Save.  I still think there is a possibility that this will work.  Admittedly it is not ideal, but I think it may be possible to write a reflection-based object graph synchronizer.  I may attempt it when I have some time free.

Copyright (c) Marimer LLC