CSLA 3.6.X Concurrency Best Practices.

Old forum URL: forums.lhotka.net/forums/t/7015.aspx


bniemyjski posted on Thursday, May 28, 2009

Hello,

Rocky, I have been looking through your new book as well as the forums here and here, and I haven't come across anything up-to-date on dealing with concurrency. The only document I can find on this issue is located here. What do you recommend? Do you have any samples?

Thanks
-Blake Niemyjski

RockfordLhotka replied on Thursday, May 28, 2009

The concurrency document in your third link is correct. CSLA doesn't do anything around concurrency, specifically so you can choose a model that works well in your database environment. Data concurrency is a database issue, not an object issue.

I use timestamps as a general rule, because they are the most efficient and safe technique, especially in a distributed environment.

You really only have two options - use a timestamp, or maintain a copy of the original values for all your fields.

That second approach is powerful, and you can implement that by creating custom PropertyInfo<T> and FieldData<T> subclasses (in 3.6 or higher). But you must remember that you just doubled the size of your object in memory, and more importantly over the wire. So while it is a powerful technique, it is not cheap.
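
As a rough sketch of that extension point (assuming FieldData<T>.Value is virtual and PropertyInfo<T> exposes NewFieldData as an override point, as in 3.6 - the class names here are invented):

using System;
using Csla;
using Csla.Core;
using Csla.Core.FieldManager;

// Sketch only: a FieldData<T> subclass that remembers the value it was
// first loaded with, so the DAL can put original values in its WHERE clause.
[Serializable]
public class OriginalFieldData<T> : FieldData<T>
{
  private T _original;
  private bool _hasOriginal;

  public OriginalFieldData(string name) : base(name) { }

  public T OriginalValue
  {
    get { return _hasOriginal ? _original : Value; }
  }

  public override T Value
  {
    get { return base.Value; }
    set
    {
      if (!_hasOriginal)
      {
        _original = value; // capture the first (loaded) value
        _hasOriginal = true;
      }
      base.Value = value;
    }
  }
}

// Matching PropertyInfo<T> subclass that hands out the custom field data.
public class OriginalPropertyInfo<T> : PropertyInfo<T>
{
  public OriginalPropertyInfo(string name) : base(name) { }

  protected override IFieldData NewFieldData(string name)
  {
    return new OriginalFieldData<T>(name);
  }
}

Every registered property then carries its original value around with it, which is exactly where the doubled memory and wire cost comes from.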

And that's why I prefer the timestamp model. With SQL Server, this adds just 8 bytes to each of your objects (that's the size of the SQL timestamp), and you just ferry it around in a private field so the value is accessible to your DAL when it does an update operation.

The Resource class in ProjectTracker uses the timestamp technique, and provides a good example for using it with SQL Server.
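
A minimal sketch of the pattern (table, column, and member names are illustrative, not the actual ProjectTracker code):

using System;
using System.Data;
using System.Data.SqlClient;
using Csla;

[Serializable]
public class Resource : BusinessBase<Resource>
{
  private static PropertyInfo<int> IdProperty =
    RegisterProperty(new PropertyInfo<int>("Id"));
  public int Id
  {
    get { return GetProperty(IdProperty); }
  }

  private static PropertyInfo<string> LastNameProperty =
    RegisterProperty(new PropertyInfo<string>("LastName"));
  public string LastName
  {
    get { return GetProperty(LastNameProperty); }
    set { SetProperty(LastNameProperty, value); }
  }

  // The 8-byte SQL Server timestamp, ferried in a private field.
  private byte[] _timestamp = new byte[8];

  protected override void DataPortal_Update()
  {
    using (var cn = new SqlConnection("...connection string..."))
    {
      cn.Open();
      using (var cm = cn.CreateCommand())
      {
        // The row is only updated if it still carries our timestamp.
        cm.CommandText =
          "UPDATE Resources SET LastName = @lastName " +
          "WHERE Id = @id AND LastChanged = @lastChanged";
        cm.Parameters.AddWithValue("@lastName", ReadProperty(LastNameProperty));
        cm.Parameters.AddWithValue("@id", ReadProperty(IdProperty));
        cm.Parameters.AddWithValue("@lastChanged", _timestamp);
        if (cm.ExecuteNonQuery() == 0)
          throw new DBConcurrencyException(
            "Resource was changed by another user");
        // A real implementation would re-select LastChanged here so the
        // object can be saved again.
      }
    }
  }
}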

bniemyjski replied on Thursday, May 28, 2009

Hello,

Thanks for your suggestion. I wish you had covered this in your book. Even if it isn't a CSLA issue, your recommendations are always welcome :).

Thanks
-Blake Niemyjski

rsbaker0 replied on Thursday, May 28, 2009

RockfordLhotka:

...

That second approach is powerful, and you can implement that by creating custom PropertyInfo<T> and FieldData<T> subclasses (in 3.6 or higher). But you must remember that you just doubled the size of your object in memory, and more importantly over the wire. So while it is a powerful technique, it is not cheap.

...

I use a modified approach in which I cache the original value of a property only when it is being changed (basically using a structure like your Undo state).

So, while it requires double the storage in the worst case (perhaps actually even slightly more), in practice it is typically much less.

The concurrency cache also comes in very handy for determining if a field is *really* dirty (e.g. if you change a value a second time and put it back to the original value, then it's not dirty anymore).
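
A sketch of the idea (the helper names are made up):

using System;
using System.Collections.Generic;
using Csla;

[Serializable]
public abstract class ConcurrencyAwareBase<T> : BusinessBase<T>
  where T : ConcurrencyAwareBase<T>
{
  // Original values, captured only for properties that have been changed.
  private readonly Dictionary<string, object> _originalValues =
    new Dictionary<string, object>();

  // Call this from property setters instead of SetProperty directly.
  protected void SetPropertyWithOriginal<P>(PropertyInfo<P> propertyInfo, P newValue)
  {
    if (!_originalValues.ContainsKey(propertyInfo.Name))
      _originalValues[propertyInfo.Name] = ReadProperty(propertyInfo);
    SetProperty(propertyInfo, newValue);
  }

  // "Really dirty": the current value differs from the cached original.
  protected bool IsReallyDirty<P>(PropertyInfo<P> propertyInfo)
  {
    object original;
    if (!_originalValues.TryGetValue(propertyInfo.Name, out original))
      return false; // never changed
    return !Equals(original, ReadProperty(propertyInfo));
  }
}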

bniemyjski replied on Tuesday, June 09, 2009

Hello,

Do you have an example of this?

Thanks
-Blake Niemyjski

b30.868 replied on Tuesday, June 30, 2009

I actually implemented the CRC method by calling "LoadProperty<UInt32>(Crc32Property, Function.GetChecksum<BBClass>(this));" in the fetch method (as well as the insert and update).

Using a binary formatter, I serialize the class into a buffer and generate a checksum from the buffer.
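
For what it's worth, a sketch of what such a helper might look like (the original presumably uses a real CRC32, which the framework doesn't ship, so MD5 stands in here and is folded into a UInt32):

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Security.Cryptography;

public static class Function
{
  // Hypothetical stand-in for the GetChecksum<T> call above: serialize the
  // object graph into a buffer and hash the buffer.
  public static uint GetChecksum<T>(T obj)
  {
    using (var buffer = new MemoryStream())
    {
      new BinaryFormatter().Serialize(buffer, obj);
      buffer.Position = 0;
      using (var md5 = new MD5CryptoServiceProvider())
      {
        byte[] hash = md5.ComputeHash(buffer);
        return BitConverter.ToUInt32(hash, 0);
      }
    }
  }
}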

However, I've run into an interesting dilemma.

If I add a validation rule to the BB class, the buffer length is 199 bytes larger on the first call to Get(); on every other call the buffer length is 199 bytes less.

With no validation rules added, the buffer length is the same on every call to Get().

The initial difference in the buffer lengths causes only the first checksum to be different than all subsequent checksums for the same object.

BBClass bbClass = BBClass.Get(1); //Buffer length = 199+n
bbClass = BBClass.Get(1); // Buffer length = n
bbClass = BBClass.Get(1); // Buffer length = n
BBClass bbClass2 = BBClass.Get(1); //Buffer length = n

Does anyone have any insight into my dilemma?

Thanks.

-Alan Gamboa
CCG Systems, Inc.

note:

Regardless of the class size or the number of validation rules, the first buffer is always 199 bytes larger.

RockfordLhotka replied on Wednesday, July 01, 2009

You are checking the serialized object byte stream? Interesting idea.

Remember that the serialized object includes quite a number of things that are not directly under your control, including IsNew, IsDirty – and to your question – BrokenRules.

Some of the sub-objects (like the broken rules collection) are created on-demand to keep the size of the byte stream as small as possible. No sense creating an empty object just to serialize it over the wire. And I think what you are seeing is one of those on-demand objects being created.

Rocky

b30.868 replied on Wednesday, July 01, 2009

Thanks for your insight, that explanation makes sense.

-Alan

rfcdejong replied on Wednesday, July 01, 2009

You could also calculate a checksum by using MD5CryptoServiceProvider().ComputeHash(bytes), where bytes would be a byte array built from the property values.

So calculate the checksum when fetching from the database; before you save to the database you'll have to fetch the row again and recalculate it.

If you think computing a hash might be a performance issue, don't worry.
Fetching before saving is another matter, but if you want concurrency then I think you'll need to do that.
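
A sketch of the idea (the property list is hypothetical - include whatever values take part in the check):

using System.IO;
using System.Security.Cryptography;
using System.Text;

public static class RowHash
{
  // Build a byte array from the property values, then hash it.
  public static byte[] Compute(int id, string name, decimal price)
  {
    using (var buffer = new MemoryStream())
    using (var writer = new BinaryWriter(buffer, Encoding.UTF8))
    {
      writer.Write(id);
      writer.Write(name ?? string.Empty);
      writer.Write(price);
      writer.Flush();
      using (var md5 = new MD5CryptoServiceProvider())
        return md5.ComputeHash(buffer.ToArray());
    }
  }
}

Store the hash at fetch time; before the update, re-fetch the row, recompute, and compare the two arrays byte for byte.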

b30.868 replied on Wednesday, July 01, 2009

rfcdejong:

You could also calculate a checksum by using MD5CryptoServiceProvider().ComputeHash(bytes), where bytes would be a byte array built from the property values.

So calculate the checksum when fetching from the database; before you save to the database you'll have to fetch the row again and recalculate it.

If you think computing a hash might be a performance issue, don't worry.
Fetching before saving is another matter, but if you want concurrency then I think you'll need to do that.

Any suggestions on how to dynamically create the byte array of data?

Thanks.

-Alan

Wbmstrmjb replied on Wednesday, July 01, 2009

There is another option for concurrency, which we use, that is similar to the timestamp idea. We have a global sequence (Oracle) that is used to add a Revision number to the record. All records get Revision = 0 when created, and any future update uses the global sequence to get the next number in the sequence. Every Revision > 0 is thus unique, and is used to detect conflicts.

If Party A reads a record with Revision 0 and then Party B also reads that same record, both objects have 0 for their Revision. When Party A saves (updates), the Revision is set to the next number (say 4568). When Party B tries to save, his Revision of 0 doesn't match the current Revision of 4568, so an error is fired off from the DB to let the BO know.

Similar to timestamp, but just a number.
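
In DAL terms the check is just the WHERE clause. A sketch (generic ADO.NET with invented names; with Oracle the next revision would come from the sequence rather than a parameter):

using System.Data;

public static class RevisionDal
{
  // The row is updated only if its Revision still matches what we read.
  public static void Update(IDbConnection cn, int id, string name,
                            long currentRevision, long nextRevision)
  {
    using (var cm = cn.CreateCommand())
    {
      cm.CommandText =
        "UPDATE Records SET Name = @name, Revision = @nextRevision " +
        "WHERE Id = @id AND Revision = @currentRevision";
      AddParam(cm, "@name", name);
      AddParam(cm, "@nextRevision", nextRevision);
      AddParam(cm, "@id", id);
      AddParam(cm, "@currentRevision", currentRevision);
      if (cm.ExecuteNonQuery() == 0)
        throw new DBConcurrencyException("Record was updated by another user");
    }
  }

  private static void AddParam(IDbCommand cm, string name, object value)
  {
    var p = cm.CreateParameter();
    p.ParameterName = name;
    p.Value = value;
    cm.Parameters.Add(p);
  }
}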

b30.868 replied on Wednesday, July 01, 2009

I don't know why I asked that :)

Just use FieldDataManager.
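
That is, let the field manager enumerate the values instead of listing them by hand. A rough sketch inside the business class (this assumes FieldManager.GetRegisteredProperties() and a non-generic ReadProperty overload are available in your CSLA version):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using Csla.Core;

// Inside a BusinessBase<T> subclass:
protected byte[] GetFieldBytes()
{
  using (var buffer = new MemoryStream())
  {
    var formatter = new BinaryFormatter();
    foreach (IPropertyInfo prop in FieldManager.GetRegisteredProperties())
    {
      object value = ReadProperty(prop); // non-generic overload, assumed
      if (value != null)
        formatter.Serialize(buffer, value);
    }
    return buffer.ToArray();
  }
}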

Thanks for your input.

-Alan

bniemyjski replied on Wednesday, September 15, 2010

Hello,

Wanted to update this post and let everyone know that I did figure this out, and the CodeSmith CSLA templates have had concurrency support for a few versions now.

Thanks

-Blake Niemyjski

Copyright (c) Marimer LLC