I am struggling with loading a structure made of 6 layers of data.
The load pattern is easy enough to code, but the process always errors out with a "transaction aborted" message. We calculate that over 500K individual fetch requests are being issued, and the connection simply won't stay open long enough.
I've only faced two layers of data before, and ended up constructing a database query that generates two return sets. The first set is a list of parent records with an index; the second is a set of child records with their parent index numbers attached. This works great for parent & child (we've seen load times drop from 4 minutes to 2 seconds!!!).
Currently I have the query modified to return all 6 layers as 6 return sets. Each record in each set has an index number pointing to its parent in the layer above it, but I'm beginning to realize that index numbers don't work past parent/child. From the grandchild layer on down, the index numbers assigned by the query no longer map to the collection index numbers in CSLA.
Anybody have any insight/ideas/experience on how to load the other 4 layers? Like I said before, this is my first time dealing with anything beyond 2 layers. If possible, I would like to keep it as one query because of the speed.
First, what's the reason for loading all the data in a transaction? Going transaction-less is usually faster.
But anyway, that alone wouldn't speed it up much.
I have similar problem areas. One category is "static" (or almost static) data. For these, the entire list is fetched once and cached, and then all factory methods run against the cache using Linq-to-Objects.
Now, for some remaining cases I haven't implemented a solution yet. But my plan is as follows:
If I have to fetch all parents, I also fetch all children and then assign the children to their parents. I expect to do this via their foreign key relationship (as above, in memory with Linq-to-Objects).
Currently I have 3 or 4 levels. I can probably share my experiences with you this week.
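A minimal sketch of that in-memory grouping step, in Python rather than Linq-to-Objects (plain dicts stand in for DTOs/BOs, and the field names `id` and `parent_id` are made up for illustration):

```python
from collections import defaultdict

def attach_children(parents, children):
    """Group child rows by their foreign key, then attach each group
    to the matching parent entirely in memory - the same idea as a
    Linq-to-Objects group join against a cached list."""
    by_parent = defaultdict(list)
    for child in children:
        by_parent[child["parent_id"]].append(child)
    for parent in parents:
        # Parents with no children get an empty list.
        parent["children"] = by_parent.get(parent["id"], [])
    return parents
```

Both lists are walked exactly once, so the assignment is linear in the number of rows regardless of how many children each parent has.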
You can use one query (if you can construct the query to return all 6 result sets). You just need to build your DAL code to do multi-pass loading of your object graph.
Pass 1 - load the root
Pass 2 - load the children
Pass 3 - for each child, load its children
Pass 4 - for each child, for each grandchild, load its children
I've done this before, and would only encourage it if performance of multiple database queries is an issue. The code isn't complex, but it is tedious.
What you say is very intriguing and I would like to know more.
I currently have the query that returns all 6 sets. I was trying to build the structure one level at a time (i.e. load ALL the roots, then ALL the children of those roots, then ALL the grandchildren, etc.), but if I understand you correctly, you are saying I should load a "family branch" at a time? (Load root 1, children for root 1, grandchildren for child 1 of root 1, etc., then go back and do the same for root #2?)
So that would mean you are jumping around in the datareader object a lot, or using LINQ to filter the applicable records?
There is a sample called DeepData that demonstrates parent, child, and grandchild loading, which may assist you. It was there in the 3.8 version at least, and if I recall correctly there was also a dnrTV video where he discussed this.
DeepData is fairly old, but does cover this, yes.
I am not suggesting jumping around in the datareader - you can't do that. The datareader only supports linear reads.
I am suggesting that you write code that reads the datareader in a linear fashion, and jumps around in the object graph to load the correct objects.
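One way to picture that is the sketch below, not the actual CSLA/DAL code. It assumes the query's result sets arrive ordered root-first, and it keeps a per-level lookup table (index → object) so each row's parent can be found without ever re-reading earlier rows; the reader stays strictly linear, and only the object-graph navigation "jumps around":

```python
def load_graph(result_sets):
    """result_sets[0] holds the roots; result_sets[n] holds level n.
    Each row is read exactly once, in order (like DataReader.NextResult),
    and a dictionary from the previous level locates each row's parent."""
    roots = []
    prev_level = {}  # index -> node, for the level loaded just before
    for depth, rows in enumerate(result_sets):
        current = {}
        for row in rows:
            node = {"id": row["idx"], "children": []}
            if depth == 0:
                roots.append(node)
            else:
                # Jump around in the object graph, not in the reader.
                prev_level[row["parent_idx"]]["children"].append(node)
            current[row["idx"]] = node
        prev_level = current
    return roots
```

The lookup dictionaries are what make this work past the parent/child case: the query's index numbers never have to match CSLA collection positions, because each level is resolved against its own table.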
I just want to check, if I've understood the multi pass idea correctly.
Is it correct that I first query the database, and the result (in my case using DTOs) contains the DTO for the root plus a list of DTOs for every layer?
Then, within (for example) the root ObjectFactory, I make multiple passes to find the children for each parent, load each DTO's data into a child BO, and assign it to its parent BO?
One of our other developers suggested building the tree from the bottom up instead of top down.
Each record in each level has an index number pointing to the parent directly above it, so we can iterate through the level 6 children and add them to their level 5 parents, then iterate level 5 and connect them to level 4, then 4 to 3, etc. As we add children to parents, the structure under each child carries with it. After a child is added, its index number becomes useless, but that's OK since it's now in its proper parent container.
Because we're using multiple return sets and index numbers the whole process is VERY fast...
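A sketch of that bottom-up pass, under the same assumptions as before (dicts for records, and made-up field names `idx`/`parent_idx` for the query's index numbers):

```python
def build_bottom_up(levels):
    """levels[0] is the root set, levels[-1] the deepest level.
    Walk from the bottom: attach every record to its parent in the
    level above. A child that already carries a subtree from an
    earlier (deeper) pass brings that subtree along with it."""
    for depth in range(len(levels) - 1, 0, -1):
        parents = {rec["idx"]: rec for rec in levels[depth - 1]}
        for rec in levels[depth]:
            parents[rec["parent_idx"]].setdefault("children", []).append(rec)
    return levels[0]
```

Note that, as the poster below observes, the work done is essentially the same as in the top-down multi-pass: one lookup-table build and one attach loop per level, so the two directions mostly differ in which pass order feels more natural.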
I don't understand the advantage of the bottom up approach.
I understand the "index number"; that's usual where a foreign key relationship exists. But if you add the children from the level below (initially the bottom) to their parents, the parents already need to exist.
So do you mean you create (a list of) BOs for every return set and then add each child to its parent, and the CSLA parent/child mechanism stands in for the foreign key relation of the database? Then I just can't see any difference between adding top-down and bottom-up.
But I think if you can create the return sets in the right order (indices), that's probably a speed-up.
Is this a read-only structure?
Are the items all of the same type across the 6 levels, and does each item have some notion of a "Parent ID"?
If so, I'd personally recommend just loading them all into a single collection, and having non-serializable private fields for the parent item and the child items (a generic list). On deserialization/fetch, have a method that sweeps through and establishes the parent and child references. It makes things really simple, and the algorithm can be quite efficient (essentially linear in the number of items).
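That reference-fixup sweep might look something like this sketch (again plain dicts, with assumed field names `id`/`parent_id`, and a `None` parent marking a root):

```python
def link_flat(items):
    """All levels live in one flat list; each item carries a parent id.
    One linear sweep builds an id lookup table, a second sweep wires up
    the parent/children references that were not serialized."""
    by_id = {item["id"]: item for item in items}
    for item in items:
        item["children"] = []
    roots = []
    for item in items:
        pid = item["parent_id"]
        if pid is None:
            roots.append(item)
        else:
            parent = by_id[pid]
            item["parent"] = parent
            parent["children"].append(item)
    return roots
```

Because the lookup table makes every parent resolution O(1), the total cost is linear in the item count no matter how deep the tree is, which is the efficiency claim above.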
No, they are all editable: besides an editable root, all levels below are editable children. Each level's children are of their own BO type.
Because I'm fetching existing data, all foreign key values are present and correct. And CSLA sets the Parent property correctly when a child object/collection is added.
My idea is to have, locally in the Fetch method, one collection per level, and to add children to their parent's collection level by level.
The question (to Rocky) has been whether this idea is really the same as his multi-pass description above.
CslaGenFork is an open-source code generator. As of the latest public build, it can handle deep data scenarios. The DeepLoad sample tests a data structure 6 levels deep; it could have used a depth of just 4 levels, but the extra 2 levels were added in reaction to this thread.
Anyway the sample is available at http://cslagenfork.codeplex.com/releases/view/79758
Copyright (c) Marimer LLC