The concurrency document in your third link is correct. CSLA doesn't do anything around concurrency, specifically so you can choose a model that works well in your database environment. Data concurrency is a database issue, not an object issue.
I use timestamps as a general rule, because they are the most efficient and safe technique, especially in a distributed environment.
You really only have two options - use a timestamp, or maintain a copy of the original values for all your fields.
That second approach is powerful, and you can implement that by creating custom PropertyInfo<T> and FieldData<T> subclasses (in 3.6 or higher). But you must remember that you just doubled the size of your object in memory, and more importantly over the wire. So while it is a powerful technique, it is not cheap.
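To make the cost concrete, here is a minimal standalone sketch of the idea (not the actual CSLA FieldData<T> API; the class and member names are illustrative): a field container that holds both the current value and the value loaded from the database, so the DAL can use the original values for concurrency checking.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch only - not CSLA's real FieldData<T>. Each field
// carries its loaded value alongside its current value, which is what
// doubles the in-memory and over-the-wire size.
public class OriginalValueField<T>
{
    private T _value;

    // The value as it was loaded from the database.
    public T OriginalValue { get; private set; }

    public T Value
    {
        get => _value;
        set => _value = value;
    }

    // Called once when the object is fetched.
    public void Load(T value)
    {
        _value = value;
        OriginalValue = value;
    }

    // Dirty only if the current value differs from the loaded value.
    public bool IsDirty =>
        !EqualityComparer<T>.Default.Equals(_value, OriginalValue);
}

public static class Demo
{
    public static void Main()
    {
        var lastName = new OriginalValueField<string>();
        lastName.Load("Lhotka");
        lastName.Value = "Smith";
        Console.WriteLine(lastName.IsDirty);   // True
        lastName.Value = "Lhotka";
        Console.WriteLine(lastName.IsDirty);   // False - back to original
    }
}
```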
And that's why I prefer the timestamp model. With SQL Server, this adds just 8 bytes to each of your objects (that's the size of the SQL timestamp), and you just ferry it around in a private field so the value is accessible to your DAL when it does an update operation.
The Resource class in ProjectTracker uses the timestamp technique, and provides a good example for using it with SQL Server.
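The pattern boils down to an UPDATE whose WHERE clause includes the timestamp that was read at fetch time. A rough sketch of the SQL involved (the table and column names here are illustrative, not copied from ProjectTracker):

```sql
-- Succeeds only if the row still carries the timestamp we read earlier.
UPDATE Resources
   SET LastName = @lastName,
       FirstName = @firstName
 WHERE Id = @id
   AND LastChanged = @lastChanged;  -- the 8-byte timestamp read at fetch

-- In the DAL, check the affected row count after executing this:
-- zero rows updated means another user changed the row in the meantime,
-- so throw a concurrency exception instead of silently overwriting.
```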
RockfordLhotka:...
That second approach is powerful, and you can implement that by creating custom PropertyInfo<T> and FieldData<T> subclasses (in 3.6 or higher). But you must remember that you just doubled the size of your object in memory, and more importantly over the wire. So while it is a powerful technique, it is not cheap.
...
I use a modified approach in which I cache the original value of a property only when it is being changed (basically using a structure like your Undo state).
So, while it requires double the storage in the worst case (perhaps actually even slightly more), in practice it is typically much less.
The concurrency cache also comes in very handy for determining if a field is *really* dirty (e.g. if you change a value a second time and put it back to the original value, then it's not dirty anymore).
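A minimal sketch of that lazy-caching idea (the class and method names are illustrative, not from an actual implementation): the original value is captured only on the first change to a property, and the dirty check compares against it, so reverting a value clears the dirty state.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of a lazy concurrency cache: originals are stored
// only for properties that actually change, so the worst case doubles
// storage but the typical case adds very little.
public class ConcurrencyCache
{
    private readonly Dictionary<string, object> _current =
        new Dictionary<string, object>();
    private readonly Dictionary<string, object> _originals =
        new Dictionary<string, object>();

    // Called when the object is fetched from the database.
    public void Load(string property, object value) =>
        _current[property] = value;

    public void Set(string property, object value)
    {
        // Capture the loaded value only on the first change.
        if (!_originals.ContainsKey(property))
            _originals[property] = _current[property];
        _current[property] = value;
    }

    // "Really" dirty: setting a property back to its loaded value
    // makes it clean again.
    public bool IsDirty(string property) =>
        _originals.ContainsKey(property) &&
        !Equals(_current[property], _originals[property]);
}

public static class Demo
{
    public static void Main()
    {
        var cache = new ConcurrencyCache();
        cache.Load("Name", "Rocky");
        cache.Set("Name", "Rockford");
        Console.WriteLine(cache.IsDirty("Name"));  // True
        cache.Set("Name", "Rocky");
        Console.WriteLine(cache.IsDirty("Name"));  // False - reverted
    }
}
```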
You are checking the serialized object byte stream? Interesting idea.
Remember that the serialized object includes quite a number of things that are not directly under your control, including IsNew, IsDirty – and to your question – BrokenRules.
Some of the sub-objects (like the broken rules collection) are created on-demand to keep the size of the byte stream as small as possible. No sense creating an empty object just to serialize it over the wire. And I think what you are seeing is one of those on-demand objects being created.
Rocky
You could also calculate a checksum by using MD5CryptoServiceProvider().ComputeHash(bytes), where bytes is a byte array containing data based on the property values.
So calculate the checksum when fetching from the database; before you save to the database you'll have to fetch the row again and recalculate it.
If you think there might be a performance issue in computing a hash, don't worry.
Fetching before saving is extra work, but if you want concurrency checking then I think you'll need to do that.
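A self-contained sketch of that checksum idea (the helper class and delimiter choice are illustrative assumptions): build a canonical string from the property values, hash it, and compare the hash computed at fetch time with one computed just before the save.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Sketch of the checksum approach. MD5CryptoServiceProvider works too;
// MD5.Create() is the usual factory method.
public static class RowChecksum
{
    public static string Compute(params object[] propertyValues)
    {
        // Delimit the values so ("ab","c") and ("a","bc") hash differently.
        string canonical = string.Join("\u001F", propertyValues);
        byte[] bytes = Encoding.UTF8.GetBytes(canonical);
        using (var md5 = MD5.Create())
            return Convert.ToBase64String(md5.ComputeHash(bytes));
    }
}

public static class Demo
{
    public static void Main()
    {
        // Hash computed when the row was first fetched.
        string atFetch = RowChecksum.Compute("Rocky", "Lhotka", 42);

        // Hash computed from a fresh fetch just before saving.
        string beforeSave = RowChecksum.Compute("Rocky", "Lhotka", 43);

        // A mismatch means another user changed the row.
        Console.WriteLine(atFetch == beforeSave);  // False
    }
}
```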
rfcdejong:You could also calculate a checksum by using MD5CryptoServiceProvider().ComputeHash(bytes), where bytes is a byte array containing data based on the property values.
So calculate the checksum when fetching from the database; before you save to the database you'll have to fetch the row again and recalculate it.
If you think there might be a performance issue in computing a hash, don't worry.
Fetching before saving is extra work, but if you want concurrency checking then I think you'll need to do that.
Hello,
Wanted to update this post and let everyone know that I did figure this out and we have concurrency support in the CodeSmith CSLA templates for a few versions now.
Thanks
-Blake Niemyjski
Copyright (c) Marimer LLC