Handling business object when insert/update fails

Old forum URL: forums.lhotka.net/forums/t/3252.aspx


skagen00 posted on Tuesday, July 24, 2007

Obviously a number of things can happen to cause an insert or update to fail, and when it does fail and we're using transactions, we know the DB will be rolled back.

But what about the business object?

Say one is assigning Ids, or doing some manipulation of the business object upon saving, such as clearing the IsDirty flags. If the save then fails, our business object is no longer in a state we'd want to keep using.

One option I thought about (and implemented for a little while) is to update a cloned version of the business object. If the save succeeds, the updated clone is what gets assigned back by the "object = object.Save()" call; if it fails, I can catch the exception and still have the original business object.
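A minimal sketch of that pattern, assuming a CSLA-style class (the Customer type and the ShowError handler are hypothetical, not from CSLA itself):

try
{
    // Save a clone rather than the original. If the data portal throws,
    // the assignment never runs, so 'customer' keeps its pre-save state.
    customer = customer.Clone().Save();
}
catch (Csla.DataPortalException ex)
{
    // The DB transaction has already rolled back; the half-updated clone
    // is simply discarded and 'customer' is still the original,
    // editable object.
    ShowError(ex);  // hypothetical error handler
}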

While that works, it requires a Clone() call, which isn't free by any means.

Has anyone thought about this situation, and how do you handle it? One option is to take on the additional encumbrance of a clone on each update/insert, and the other is to say "tough luck, your changes failed and, well, I can't give you back your business object".

If one assumes that failures of inserts and updates (concurrency and other types of failures) are pretty minimal (which I really feel they will be), then the latter isn't necessarily bad - one doesn't have to take on the extra overhead of the Clone() call.

Also, if the update/insert failed once, it's quite probable that the object is in a state (such as being an outdated instance) where the user couldn't do anything useful with the BO even if the pre-save copy were retained.

Thoughts and opinions? I lean towards not doing the Clone call but I was curious what the community did about this.

Thanks,

Chris


skagen00 replied on Tuesday, July 24, 2007

BTW, this of course applies when using the SimpleDataPortal but not in remoting situations, where the business object going through the save is already a different instance from the original, having been serialized/deserialized...


triplea replied on Tuesday, July 24, 2007

A while ago, in one of my first test implementations, I faced the same problems because I bypassed the Clone() method described here: http://www.lhotka.net/article.aspx?id=91e15def-fa1c-4236-86b5-b204bfc4a0aa
Indeed, the problems I faced were updated timestamps, FK ids, etc., and after lots of thinking and Googling I ditched that route in favour of the Clone() route described above. Just out of interest, is preserving resources so critical in your case that it makes you want to steer away from this approach? Also remember (and this is described in the article above) that cloning the object is not done just for persistence but also because of issues with databinding...

skagen00 replied on Tuesday, July 24, 2007

Hi triplea, thanks for your reply.

It's not entirely critical to preserve resources if we take the "discard" route - we're developing a Web application, and as such the updates to an object will be on a smaller scale. That is, it's less likely that a user will find themselves thinking, "Wow, I'm going to lose all my work? That sucks."

With the Web environment, the databinding is less of a concern to me but thank you for the article. 

I don't want to be totally blind to what would happen in a Windows UI however - where an object may be manipulated more thoroughly before being saved to the database and potentially there is more room for concurrency violations, etc.

I'm just trying to be cautious about casually doing a Clone() just because it handles rare edge cases more gracefully. If the failure rate of inserts/updates is low, and the likelihood of being able to recover from failures is also low (such as an obsolete BO due to concurrency), then the overhead of doing a Clone on every single update/insert is heavy!

Chris


triplea replied on Tuesday, July 24, 2007

Well unfortunately you will have to make a compromise (we all have to at some stage).
Indeed, if you believe that concurrency, data access, and other errors during insert/update/delete are not likely to happen and you wish to avoid cloning your objects, I guess you could always catch an exception on your parent object's Save() and, in the catch, reset values such as FKs, timestamps and anything else the data portal touched. You could go a step further and keep a record of dirty children prior to the save, marking them dirty again in the catch clause so that if the user hits save again only the correct objects get updated (see the sketch below). So the good thing is that you don't clone, and the bad is that you write additional code and tread away from a widely used pattern.
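A rough sketch of that no-clone route, with the caveat that CSLA keeps MarkDirty/MarkOld protected, so the ReMarkDirty and RestoreTimestamp helpers below are hypothetical and would have to be exposed by the business class itself (Order/OrderLine are also illustrative):

// Snapshot the state the data portal is about to change.
List<OrderLine> dirtyLines = new List<OrderLine>();
foreach (OrderLine line in order.Lines)
    if (line.IsDirty)
        dirtyLines.Add(line);
byte[] savedTimestamp = order.Timestamp;  // hypothetical concurrency token

try
{
    order = order.Save();
}
catch (Csla.DataPortalException)
{
    // With the local data portal the object was mutated in place,
    // so undo the side effects of the failed save:
    order.RestoreTimestamp(savedTimestamp);  // hypothetical helper
    foreach (OrderLine line in dirtyLines)
        line.ReMarkDirty();                  // hypothetical wrapper over MarkDirty()
    throw;
}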
If you decide to start cloning then you do get hit by the overhead but from a maintenance (and programming) perspective you are more in the clear.

Your choice :-)

ajj3085 replied on Tuesday, July 24, 2007

The clone method is what Rocky recommends. If the update fails, your original object is left unchanged by anything the DP methods may have done; you can then just throw out the clone. If the save succeeds, you simply update your application to use the new object.

skagen00 replied on Tuesday, July 24, 2007

Here's what I don't like, though:

I ran a test of 10, 100, and 1,000 clones of a certain business object and ended up with an average clone time of 100ms. That's not cheap, and at some point the teeter-totter tips in favor of a performance-minded approach.

Especially if one considers that in many circumstances (like a concurrency violation) the business object - if retained - is useless anyway (i.e., it was stale the first time and it'll be stale any subsequent time)!
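For reference, a timing loop like the one described above can be as simple as the following (the Customer type and its factory are hypothetical; CSLA's Clone() serializes and deserializes the whole object graph, which is where the cost comes from):

using System;
using System.Diagnostics;

int iterations = 1000;
Customer customer = Customer.GetCustomer(42);  // hypothetical factory method

Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    Customer copy = customer.Clone();  // full serialization round trip
}
sw.Stop();

Console.WriteLine("Average clone time: {0:F1} ms",
    sw.ElapsedMilliseconds / (double)iterations);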

skagen00 replied on Tuesday, July 24, 2007

I should add something...

I can probably drop the clone time to about 50ms, but the primary thing is this: with a Web app, processing occurs on the server rather than on an individual workstation (which can afford the expense!), so it's a cost, and the benefit had better be there. I just don't see it being there.

I can see it being a non-issue in WinForms, where the clone would occur on the client's machine. And for databinding reasons, as triplea mentioned, it is perhaps more useful in WinForms as well.


ajj3085 replied on Tuesday, July 24, 2007

That's true as well... but my response a few moments ago still holds true in a web scenario I would think (about users just having their changes tossed).

skagen00 replied on Tuesday, July 24, 2007

Users would be provided with a means to reload the business object, yes.

I would probably be entirely OK doing the clone in a WinForms app. Users won't notice 100ms. But with enough traffic on a Web site, it's not so much that one user has to deal with a 100ms wait, but rather that all users have to deal with a 100ms wait.

I think in terms of the Web, it *is* important to look ahead to keeping things optimized. CPU and memory both become "shared" resources, rather than user-specific as in a WinForms application. I'd argue one can approach this in a more lax fashion in WinForms than in WebForms.

Anyways, thank you for your input.

Chris


ajj3085 replied on Tuesday, July 24, 2007

Well, first off, I would say stick with Rocky's point on premature optimization: don't worry about performance unless it becomes an issue. Will your users notice an extra 100ms when the app is updating the database? Likely not.

If you have a concurrency failure, I doubt your users will like a message saying "sync failure" and then having their changes instantly lost. You could at least load up a fresh BO and copy their changes over (or offer to show the new/conflicting values).
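Something along these lines, say (the Project type, its factory, and the copied fields are all illustrative, not from CSLA itself):

try
{
    project = project.Clone().Save();
}
catch (Csla.DataPortalException)
{
    // Concurrency failure: fetch the current version and re-apply the
    // user's edits instead of discarding them outright.
    Project fresh = Project.GetProject(project.Id);  // hypothetical factory
    fresh.Name = project.Name;                       // copy the user's changes
    fresh.Notes = project.Notes;                     // (fields are illustrative)
    project = fresh;  // let the user review the merge and save again
}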

If concurrency does become a large problem in your application, it's probably better to implement a locking mechanism; at least then users won't waste time editing a record which is currently locked.
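A very rough sketch of one such mechanism - a check-out row claimed before editing begins (the RecordLocks table, its columns, and this helper are assumptions, not part of CSLA):

using System.Data.SqlClient;

// Returns true if the lock was acquired, false if someone else holds it.
// A unique index on RecordLocks.RecordId guards against the race between
// the NOT EXISTS check and the insert.
static bool TryAcquireLock(string connectionString, int recordId, string user)
{
    const string sql =
        @"INSERT INTO RecordLocks (RecordId, LockedBy, LockedAt)
          SELECT @id, @user, GETDATE()
          WHERE NOT EXISTS
              (SELECT 1 FROM RecordLocks WHERE RecordId = @id)";

    using (SqlConnection cn = new SqlConnection(connectionString))
    using (SqlCommand cm = new SqlCommand(sql, cn))
    {
        cm.Parameters.AddWithValue("@id", recordId);
        cm.Parameters.AddWithValue("@user", user);
        cn.Open();
        return cm.ExecuteNonQuery() == 1;  // zero rows inserted => already locked
    }
}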

Copyright (c) Marimer LLC