Poor performance in Save - cloneable code

Old forum URL: forums.lhotka.net/forums/t/8892.aspx

Nemisis posted on Thursday, May 06, 2010

Hi everyone, I have an object inherited from BusinessBase; within that object I have a BusinessListBase that contains 30 BusinessBase items. Within those BusinessBase items are some properties and another BusinessListBase with up to 30 BusinessBase items in it.

The performance when saving this item seems very slow, and when I looked at the code it appears to be the following code that is slowing the save procedure down.

DataPortal.cs - line 450, within the Update method, contains the following code:


          if (!proxy.IsServerRemote && ApplicationContext.AutoCloneOnUpdate)
          {
            // when using local data portal, automatically
            // clone original object before saving
            ICloneable cloneable = obj as ICloneable;
            if (cloneable != null)
              obj = cloneable.Clone();
          }

          result = proxy.Update(obj, dpContext);



The clone call seems to take an awfully long time. Is there any way to speed this up?

RockfordLhotka replied on Thursday, May 06, 2010

The clone process uses native .NET serialization (unless you are on Silverlight), and there's not a lot you can do to speed it up. The only real thing you can do is make sure you don't have any circular references within your object graph - and if you do then break the circle by using the NonSerialized attribute. That will have a small impact by making the serialization more efficient.
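Breaking a cycle with NonSerialized might look like the following sketch. The Invoice/LineItem classes are hypothetical (not from the poster's code); note that the formatter simply skips the marked field, so the back-reference has to be re-wired after deserialization, for example via IDeserializationCallback:

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
public class Invoice : IDeserializationCallback
{
    public LineItem Item;

    // re-wire the child's back-reference, because the
    // formatter skipped the NonSerialized field
    public void OnDeserialization(object sender)
    {
        if (Item != null)
            Item.Parent = this;
    }
}

[Serializable]
public class LineItem
{
    // break the Invoice <-> LineItem cycle so the formatter
    // does not have to walk and track it during serialization
    [NonSerialized]
    public Invoice Parent;
}
```
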

The other alternative is to turn off AutoCloneOnUpdate. In that case your UI will need to have extra code to detect that a failure occurred during the save process, and it will need to discard the business object, because it will be in an indeterminate (invalid) state in memory.
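The UI-side pattern that becomes necessary without auto-clone can be sketched as follows. EditableRoot here is a hypothetical stand-in, not a real CSLA class; a real editable root would inherit BusinessBase&lt;T&gt; and its Save would go through the data portal:

```csharp
using System;

// Hypothetical stand-in for a CSLA editable root.
public class EditableRoot
{
    public int Value;
    public bool FailOnSave;

    public EditableRoot Save()
    {
        if (FailOnSave)
            throw new InvalidOperationException("update failed part-way through");
        return new EditableRoot { Value = Value };
    }
}

public static class SaveHelper
{
    // With AutoCloneOnUpdate off there is no pristine clone to fall
    // back on, so on failure the caller must discard the in-memory
    // graph and reload a fresh copy rather than keep the old object.
    public static EditableRoot SaveOrReload(EditableRoot obj, Func<EditableRoot> reload)
    {
        try
        {
            return obj.Save();
        }
        catch (InvalidOperationException)
        {
            return reload();
        }
    }
}
```
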

RockfordLhotka replied on Monday, May 10, 2010

The serialization is pure .NET, so I don't know of anything you can do to optimize it.

There really is something different about your object graph from a typical object graph, because the performance problems you are seeing are not typical. Maybe your objects maintain a reference to a large image or massive strings or some other memory intensive data?

As I said earlier:

    The other alternative is to turn off AutoCloneOnUpdate. In that case your UI will need to have extra code to detect that a failure occurred during the save process, and it will need to discard the business object, because it will be in an indeterminate (invalid) state in memory.

Nemisis replied on Monday, May 10, 2010

Rocky, I don't really want to turn off AutoCloneOnUpdate, as that would make life difficult for us.

I have looked into this further and it appears to be the list property that takes so long to serialize. If I don't load the data into this property, serialization is fine.

With this in mind, I have created an example project (attached) for you to look at, using some basic code. I am using a basic parent object, a list property, and the list object itself.

TemplateItem - Parent
MeasureCommentAnswerList - List within Template 
MeasureCommentAnswerItem - Item within the List

I have then created 2 further variations of this, both keeping the same Parent object.

Version 2
TemplateWithListItem - Parent but with a List Of property instead of MeasureCommentAnswerList 
List (Of MeasureCommentAnswerItem) - Uses List Of instead of Csla BusinessListBase
MeasureCommentAnswerItem - Same as above

Version 3
TemplateWithListItemAndNoCslaChild - Parent but with a List Of property, and uses MeasureCommentAnswerNotCsla instead of MeasureCommentAnswerItem
List (Of MeasureCommentAnswerNotCsla) - Uses List Of instead of Csla BusinessListBase
MeasureCommentAnswerNotCsla - doesn't inherit from CSLA at all, just a basic coded class

The example is Windows Forms, and by pressing each button you can see how long each method takes to run.

I understand that CSLA may take slightly longer, but the results for each appear very different.

rsbaker0 replied on Monday, May 10, 2010

Wow. I've downloaded your sample and can confirm the performance anomaly. Even on my reasonably fast machine, it's taking 7 seconds to Clone the object. I don't see an obvious cause yet though.

rsbaker0 replied on Thursday, May 06, 2010

I'm curious, have you done any actual benchmarking of how long the clone operation is taking? (I guess I'm asking specifically how long the Clone is taking.)

I am asking only because a clone would (in theory) generally be slightly faster than sending the object over a remote data portal, since the remote case involves the same serialization steps plus the time to transmit the data over the network.

Since I can fetch thousands of objects per second over a remote portal, it seems odd for the Clone() operation to be taking an especially long time unless there is something unusual going on.
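Benchmarking the clone in isolation is straightforward with a Stopwatch; a minimal sketch (CloneTimer is a hypothetical helper, not part of CSLA):

```csharp
using System;
using System.Diagnostics;

public static class CloneTimer
{
    // Times any action; in the real app the action would be
    // () => ((ICloneable)businessObject).Clone()
    public static long TimeMs(Action action)
    {
        var sw = Stopwatch.StartNew();
        action();
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}
```

Comparing that number against a timed remote-portal fetch of the same graph would confirm whether the clone is genuinely the outlier.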

Nemisis replied on Friday, May 07, 2010

I have timed the clone operation and it takes 14 seconds to run! Surely this isn't right?

Here is a breakdown of the actual object in question and what it contains. It is a fairly large object, as you will see, but works well; retrieving and populating takes less than a second.

BusinessBase - Contains 8 int properties and a BusinessListBase called SectionList

SectionList - Contains approx 30 items called Section, which is a BusinessBase

Section - Contains 2 string properties and a BusinessListBase called MeasureList

MeasureList - Contains approx 40 items called Measure, which is a BusinessBase

Measure - Contains 30 properties and 3 BusinessListBase, called AnswerList, PromptList and CommentList

AnswerList - Contains 4 BusinessBase items with 5 properties

PromptList - Contains BusinessBase items, but this list is empty in this scenario

CommentList - Contains 3 BusinessBase items called Comment

Comment - Contains 3 properties and 2 BusinessListBase called Answers and Prompts

Answers - Contains BusinessBase items but is currently empty

Prompts - Contains BusinessBase items but is currently empty


RockfordLhotka replied on Friday, May 07, 2010

That doesn't seem right - something else is going on.

One thing to check: one way or another, you have some code that is running during serialization or deserialization, and that's probably what is causing the perf issue.

Nemisis replied on Saturday, May 08, 2010

I have not implemented ISerializable, nor have I overridden OnDeserialized.

Nemisis replied on Sunday, May 09, 2010

I have checked and I don't believe anything else is running. I am debugging via VS 2008 and nothing seems to be running. Do you have a good method for checking?

I have checked the stream length and it is 16,685,005 bytes. Should an object this big really cause such a slow serialize/deserialize?
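For reference, the stream-length measurement described here can be sketched as follows, using BinaryFormatter (what CSLA's clone uses on full .NET at the time of this thread; note that BinaryFormatter is disabled by default on modern .NET):

```csharp
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class GraphSize
{
    // Serializes a graph to a MemoryStream and returns its length
    // in bytes - the same figure quoted above.
    public static long MeasureBytes(object graph)
    {
        using (var ms = new MemoryStream())
        {
            new BinaryFormatter().Serialize(ms, graph);
            return ms.Length;
        }
    }
}
```
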

Nemisis replied on Monday, May 10, 2010

Rocky, I have downloaded the Red Gate profiler to see if anything else is going on. I have taken some screenshots and attached them here, and I do believe it is the serialize and deserialize methods.

The object is 16 MB in size, as I took the length from the MemoryStream once serialized.

Do you have any other suggestions? I wonder if it would be worth implementing the ISerializable interface?

RockfordLhotka replied on Monday, May 10, 2010

The sample is using an object graph with over 20,000 business objects, each of which has various support objects (like BrokenRules) attached to it. It doesn't surprise me that this takes 7 seconds to serialize/deserialize.

The initial post was talking about an object graph of more like 900 business objects. That shouldn't take nearly so long, but the initial post is talking about that graph taking many seconds (24?) to clone.

If your user really is interacting with massive lists with thousands of items you should look at the Csla.DiffGram sample, which shows how to save only those items that actually have changes (which is usually a tiny subset of the graph in such a scenario).
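The DiffGram idea boils down to "send only the changed items". A minimal sketch with a hypothetical item type (this is not the Csla.DiffGram sample itself; real CSLA children expose IsDirty via BusinessBase):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical item shape for illustration only.
public class ItemDto
{
    public int Id;
    public bool IsDirty;
}

public static class DiffGramSketch
{
    // Instead of cloning and shipping the whole 20k-object graph,
    // extract only the dirty items and send just that subset.
    public static List<ItemDto> ChangedOnly(IEnumerable<ItemDto> items)
    {
        return items.Where(i => i.IsDirty).ToList();
    }
}
```

The server-side code then only sees the changed subset, which is the trade-off Rocky mentions below.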

But if you are seeing 24 second Clone times with 900 objects in a graph then we're back to something else going on. If I change the sample to create a graph of 900 objects it takes 270 ms to do the clone, which is pretty realistic.

Nemisis replied on Tuesday, May 11, 2010

The object graph I sent you was using our actual live code, which is too big to post, so I made a simple example using what I think is the problem.

The simple example shows that the save/serialize methods take a very long time on a CSLA BusinessListBase or BusinessBase (not sure which one).

I have changed the example to load objects that are NOT DIRTY, and the save operation takes 121 milliseconds; the other two examples take 0-1 milliseconds.

I have included another button to make 5 items in the list dirty (for each object); now look at the save (more importantly the serialize) method - it takes far longer. Why?

If you load and make 5 items in the list dirty and 1 item in the parent dirty, the first example takes 5613 milliseconds to run. The other two take 4393 and 497.

I still think something isn't quite right. I think the save method is fine, but it is the cloning/serialization of the objects that is slow. This happens on line 450 in the DataPortal.cs file.

Nemisis replied on Wednesday, May 12, 2010

Can anyone confirm what I am seeing? Am I doing something wrong, or can someone explain this?

RockfordLhotka replied on Wednesday, May 12, 2010

Saving 20k objects will take some time. End of story. To avoid that, use the DiffGram concept from the Samples to only update the changed objects.

But your initial post wasn't saving 20k objects, it was saving around 900, and that was taking far longer than this sample takes to save 20k objects. Right?

Do you want me to help you identify why saving 20k objects is slow? You've already done that: it takes the BinaryFormatter a long time to serialize/deserialize a 20k object graph. If you want that to be faster you need to avoid cloning the object graph - I can't change the way .NET serializes things.

Turning off autoclone can get you there, but with undesirable side-effects.

Using the DiffGram concept can get you there even better, but it has some side-effects too since the server-side code doesn't have access to the full object graph.

But if you are really pulling back and then saving 20k objects, the DiffGram approach is the solution I'd recommend.

None of which means anything relative to the original question about 900 objects.

Copyright (c) Marimer LLC