Optimistic concurrency

Old forum URL: forums.lhotka.net/forums/t/194.aspx


Luke posted on Wednesday, May 24, 2006

I have a question about concurrency. Our current system uses optimistic concurrency (last writer wins, using field comparison), and I need to apply the same technique in CSLA. I know I could create a second field, mark it as NotUndoable, and use it to hold the original value, but I really don't want to have to maintain a second field for every property. What I would like to do instead (and this is where I am looking for thoughts from other users on how they have handled concurrency in CSLA) is to maintain a list of fields/values, similar to the list created in the CopyState method of the undo functionality, stored in a variable marked NotUndoable. When the data is saved, I would call a method that compares the current field values on the BO against the values stored in the original list created at the initial fetch, and returns a list of the fields whose values have changed. I could then use that list to construct my update statement, thus saving only the fields that changed.
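A minimal sketch of that idea (names are hypothetical, not CSLA's actual API): capture the field values into a dictionary at fetch time, then diff against the current values at save time to get the changed columns.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: snapshot original values at fetch time, then
// diff against current values at save time to find changed fields.
public class FieldSnapshot
{
    private Dictionary<string, object> _original = new Dictionary<string, object>();

    // Call once after DataPortal_Fetch populates the object.
    public void Capture(IDictionary<string, object> currentValues)
    {
        _original = new Dictionary<string, object>(currentValues);
    }

    // Call from DataPortal_Update; returns only the fields whose values
    // differ from the originals, for building a targeted UPDATE statement.
    public List<string> GetChangedFields(IDictionary<string, object> currentValues)
    {
        return currentValues
            .Where(kv => !_original.TryGetValue(kv.Key, out var orig) || !Equals(kv.Value, orig))
            .Select(kv => kv.Key)
            .ToList();
    }
}
```

In a real CSLA object the snapshot field would be marked NotUndoable so that n-level undo leaves it alone, just as described above.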

Thanks

xal replied on Wednesday, May 24, 2006

Luke,
Take a look at Active Objects. It has a DAL that handles concurrency for you and makes your DataPortal methods nearly nonexistent.
Note that while I use Active Objects a lot, I haven't used this particular feature; I just know it's there. Anyway, knowing Petar, I'm sure it works like a charm.
http://csla.kozul.info

Andrés

hurcane replied on Wednesday, May 24, 2006

We're implementing optimistic concurrency by storing the values from the database in a "smart data" object. The smart data objects are easy to use when loading data from the database and when sending updates back to the database. I just posted an overview of how we do it. You can find the posting on my blog.
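A rough sketch of what such a "smart data" field might look like (hypothetical, not the actual implementation from the blog post): each field carries both the value loaded from the database and the current value, so dirty checks and concurrency comparisons fall out naturally.

```csharp
// Hypothetical sketch of a "smart data" field: tracks the value loaded
// from the database alongside the current (possibly edited) value.
public class SmartField<T>
{
    public T OriginalValue { get; private set; }
    public T Value { get; set; }

    public SmartField(T databaseValue)
    {
        OriginalValue = databaseValue;
        Value = databaseValue;
    }

    // True when the user has changed the value since it was loaded.
    public bool IsDirty => !Equals(Value, OriginalValue);

    // After a successful save, the current value becomes the new original.
    public void MarkSaved() => OriginalValue = Value;
}
```

The update statement can then pass both `Value` and `OriginalValue` to the database, which is what makes the field-comparison style of optimistic concurrency possible without a separate shadow field on the business object.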

phowatt replied on Thursday, May 25, 2006

The approach I use puts the logic in the stored procedure. First of all, the variables I use for each of the column values are in a separate class, much like the ReadOnlyBase lists work. In my DataPortal_Fetch method, after populating these variables from the result set, I create a clone of this data class, which gives me two copies of the original data. In the DataPortal_Update method I create a list of parameters for each column, then add a second list from the cloned class, prefixing each of those column names with 'Orig'. In the stored procedure, the first thing I do is read the corresponding row from the table to get the current values. Then, for each column, I compare my value to the current value and to the original value. If no one has changed the value, I use my latest version for the column. If someone else has changed the value but I did not, I use the current value for the column. If I have changed the value and someone else has also changed it, I use my value, but then add an entry to a log table showing who changed it before me and their value for the column, followed by another entry with my user ID and the value I used. This log table can be reviewed to determine whether the data needs correcting.
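The per-column decision described above is effectively a three-way merge between the original value, the current database value, and the user's value. A sketch of that decision logic in C# (the real version lives in the stored procedure; names here are hypothetical):

```csharp
// Three-way merge for one column: decide which value wins and whether
// the overwrite should be recorded in the conflict log table.
public static class ColumnMerge
{
    public static (object Winner, bool LogConflict) Resolve(
        object original,   // value read at fetch time
        object current,    // value in the database row right now
        object mine)       // value this user is trying to save
    {
        bool iChanged = !Equals(mine, original);
        bool theyChanged = !Equals(current, original);

        if (!theyChanged) return (mine, false);    // nobody interfered: my value wins
        if (!iChanged) return (current, false);    // only they changed it: keep theirs
        return (mine, true);                       // both changed it: mine wins, log the conflict
    }
}
```

The same three comparisons can be written per column in T-SQL with CASE expressions against the `Orig`-prefixed parameters.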

DansDreams replied on Thursday, May 25, 2006

phowatt, do you wrap that whole thing in a transaction with an isolation level higher than the default? If not, aren't you creating a potential mess if someone were to change the database data between the time you read it and the time you update it?

If you didn't want to use a blocking transaction, you could use the rowversion technique within the sproc and check for that condition explicitly, working on the presumption that the chance of two writes happening that close together is fairly minuscule.
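The rowversion technique, roughly sketched (table and column names are hypothetical): include the rowversion read at fetch time in the UPDATE's WHERE clause, and treat zero affected rows as a concurrency conflict.

```csharp
using System.Data.SqlClient;

public static class RowVersionUpdate
{
    // Sketch only: the UPDATE succeeds only if nobody else has written
    // the row since we read it, because SQL Server bumps the rowversion
    // column on every write.
    public static bool TryUpdateName(SqlConnection cn, int id, string name, byte[] originalRowVersion)
    {
        using (var cm = cn.CreateCommand())
        {
            cm.CommandText =
                "UPDATE Project SET Name = @Name " +
                "WHERE Id = @Id AND RowVersion = @OriginalRowVersion";
            cm.Parameters.AddWithValue("@Name", name);
            cm.Parameters.AddWithValue("@Id", id);
            cm.Parameters.AddWithValue("@OriginalRowVersion", originalRowVersion);

            // 0 rows affected means another user updated the row first.
            return cm.ExecuteNonQuery() == 1;
        }
    }
}
```

Inside a sproc the same check can be done explicitly with `IF @@ROWCOUNT = 0 RAISERROR(...)` so the caller gets a concurrency exception rather than a silent no-op.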

phowatt replied on Thursday, May 25, 2006

Yes I do wrap this logic in a transaction.

esteban404 replied on Thursday, May 25, 2006

I've used an interface for locking/unlocking the record, marking it for exclusive editing by authorized users while allowing read-only access for others. I have the unlock routine in the object's Dispose method:

public void Dispose()
{
    if (this.IsLocked)
    {
        using (SqlConnection cn = new SqlConnection(DB("ENS_INFO")))
        {
            cn.Open();
            using (SqlCommand cm = cn.CreateCommand())
            {
                cm.CommandType = CommandType.StoredProcedure;
                cm.CommandText = "unlockRedZone";
                cm.Parameters.AddWithValue("@RedZoneID", _RedZoneID);
                cm.ExecuteNonQuery();
            }
        }
        WriteHistory(this, Original); // part of history-writing interface
    }
}

The only trouble I've seen matches the FEA I did on the design: the user's machine loses power or the network drops out, and then the record is never unlocked for others to use. If the user hasn't lost the object, it'll save OK, but will need to reacquire the lock. It's a kludge, but it's working.

_E

ajj3085 replied on Thursday, May 25, 2006

We had a locking mechanism like this once.

We thought about the issue of the user dropping off (which was more of a concern for us since it was a web app and we couldn't stop the user from just closing the browser).

We came up with the idea that locks time out after 20 minutes. If the user dropped off and then came back, the lock's timestamp was refreshed so they could finish. If they didn't come back, the first one to try for the lock after it expired would automatically be given it. The original user is SOL at that point.. but we couldn't let locks stop others from working.
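A sketch of that timeout scheme (in-memory model for illustration; the real implementation would keep the lock table in the database):

```csharp
using System;
using System.Collections.Generic;

// Sketch of a lock table with a 20-minute timeout: the owner may
// refresh the lock at any time, and anyone may take over an expired one.
public class RecordLockTable
{
    private readonly TimeSpan _timeout = TimeSpan.FromMinutes(20);
    private readonly Dictionary<int, (string Owner, DateTime TakenAt)> _locks
        = new Dictionary<int, (string, DateTime)>();

    public bool TryAcquire(int recordId, string user, DateTime now)
    {
        if (_locks.TryGetValue(recordId, out var l))
        {
            bool expired = now - l.TakenAt > _timeout;
            if (l.Owner == user || expired)
            {
                // Owner refreshes the timestamp; an expired lock goes
                // to the first user who asks for it.
                _locks[recordId] = (user, now);
                return true;
            }
            return false; // held by someone else and still fresh
        }
        _locks[recordId] = (user, now);
        return true;
    }

    public void Release(int recordId, string user)
    {
        if (_locks.TryGetValue(recordId, out var l) && l.Owner == user)
            _locks.Remove(recordId);
    }
}
```

The takeover rule is what keeps a crashed client from blocking everyone else, at the cost of the original user occasionally losing the lock mid-edit.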

Andy

esteban404 replied on Thursday, May 25, 2006

After replying, I went back to see what was going on on the SQL side. I don't remember changing it, but I see I've created a routine in the object that checks for the lock, which allows the save. Then the *sproc* checks whether the lock is expired but owned by the caller. If so, they're allowed to save. If not, the save is rejected. I just need to modify the object to raise the rejection exception and handle it politely.

I put in a big TODO for this. Good thing it's not released. <s>

_E

Copyright (c) Marimer LLC