Help! SQL Deadlock - what's the best approach? Help!

Old forum URL: forums.lhotka.net/forums/t/1339.aspx


GaltSalt posted on Wednesday, September 27, 2006

Hi!

In our system we tested 10 users hitting a function at exactly the same time. This function updates many tables, across many business objects, within a single transaction.

The SQL exception says to re-run the transaction.

So why not do exactly that - catch the deadlock exception and re-run the transaction?

But where?

Should the retry go in each business object's DataPortal_Update call, or in the CSLA library - in the shared Update function in DataPortal.vb in the CSLA.Server.DataPortal project?

Or did I miss something - should there be a retry parameter on the SqlDataAdapter?
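Something like this is what I have in mind, sketched at the stored procedure level - the procedure, table, and column names are all made up, and I haven't tried this yet:

    CREATE PROCEDURE dbo.UpdateOrderGraph
        @OrderGuid UNIQUEIDENTIFIER
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @attempt INT;
        SET @attempt = 1;

        WHILE @attempt <= 3
        BEGIN
            BEGIN TRY
                BEGIN TRANSACTION;

                -- all the object-graph updates/inserts would go here, e.g.:
                UPDATE dbo.OrderHeader
                SET ModifiedOn = GETDATE()
                WHERE OrderGuid = @OrderGuid;

                COMMIT TRANSACTION;
                RETURN;  -- success, stop retrying
            END TRY
            BEGIN CATCH
                IF XACT_STATE() <> 0
                    ROLLBACK TRANSACTION;

                IF ERROR_NUMBER() = 1205 AND @attempt < 3
                    SET @attempt = @attempt + 1;  -- deadlock victim: re-run, as the exception suggests
                ELSE
                BEGIN
                    DECLARE @msg NVARCHAR(2048);
                    SET @msg = ERROR_MESSAGE();
                    RAISERROR(@msg, 16, 1);  -- not a deadlock, or out of retries: give up
                    RETURN;
                END
            END CATCH
        END
    END

But that means duplicating the retry in every proc (or every DataPortal_Update), which is why I'm wondering where it really belongs.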

Any ideas?

Hasn't anyone come across deadlocks before? I see no discussion anywhere that actually deals with deadlocks. I see some people saying they're worried about them - but no real discussion of handling them.

Help!

ajj3085 replied on Wednesday, September 27, 2006

Yikes.  There are ways to handle this, but you'll have to search outside this group I think.

I'm surprised though; for the deadlock to occur, the function must lock a table by doing something with it, but the second call must be locking a second table before the first...
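i.e. the classic pattern, sketched with throwaway table names (assume two scratch tables dbo.T1 and dbo.T2, each with Id and Val columns, and two query windows):

    -- Session 1
    BEGIN TRANSACTION;
    UPDATE dbo.T1 SET Val = 1 WHERE Id = 1;  -- locks the row in T1
    UPDATE dbo.T2 SET Val = 1 WHERE Id = 1;  -- waits if session 2 got T2 first
    COMMIT TRANSACTION;

    -- Session 2 (opposite order)
    BEGIN TRANSACTION;
    UPDATE dbo.T2 SET Val = 2 WHERE Id = 1;  -- locks the row in T2
    UPDATE dbo.T1 SET Val = 2 WHERE Id = 1;  -- now each waits on the other: deadlock
    COMMIT TRANSACTION;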

GaltSalt replied on Wednesday, September 27, 2006

It's not that - it's all ten users trying to update table A at the same time. I've used row locks in the stored procedure (which made it much better - it used to be 6 out of 10 users hitting the deadlock). Now only one user gets chosen as the deadlock victim. The calls are all identical; there is no difference in the order of operations between users. But I will investigate any subquery updates, because there is a large tree of data being updated in one transaction. My quandary is: why am I getting a deadlock at all??? Each user has their own set of records and their own unique key (a GUID) to access their data in the updates - all with ROWLOCK hints...

arg!
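For reference, our update statements look roughly like this (table and column names changed):

    -- Each user only touches rows keyed by their own GUID,
    -- and every statement carries a ROWLOCK hint.
    DECLARE @UserGuid UNIQUEIDENTIFIER;
    SET @UserGuid = NEWID();

    UPDATE dbo.TableA WITH (ROWLOCK)
    SET LastUpdated = GETDATE()
    WHERE UserGuid = @UserGuid;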

figuerres replied on Wednesday, September 27, 2006

GaltSalt:

It's not that - it's all ten users trying to update table A at the same time. [...] My quandary is: why am I getting a deadlock at all??? Each user has their own set of records and their own unique key (a GUID) to access their data in the updates - all with ROWLOCK hints...

Update or insert?

How is the table indexed / keyed?

One possible thing that may help is this:

I have a system with a lot of users adding new rows that had locking problems.

We saw failed transactions, timeouts from web service calls, and huge amounts of RAM in use on the server.

I changed a few SQL Server settings, but what I think really did the most was to put a clustered index on the key tables that was not in "row insert order".

Each table in MS SQL can have one clustered index; by default it will often be the PK of the table.

If you locate other data in the table that you can use to build an index that "scatters" the data around, it will (if clustered) force the server to move the data blocks around to match the b-tree of the index.

In my case I used a user ID int plus several other items to move the data and index pages around.

This, plus a job that rebuilds the index each night, has made it so that around 50-60 users can all insert rows at one time and never lock each other out of the system. No more failed inserts.

This does grow the table and index a bit, and it does use more SQL RAM than a neat ID key would.
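For example - the table and column names here are just made up to show the shape:

    -- Cluster on something other than insert order, so concurrent inserts
    -- land on different pages instead of all fighting over the last page.
    -- (Assumes the PK was created nonclustered, or the old clustered index was dropped.)
    CREATE CLUSTERED INDEX IX_Orders_Scatter
        ON dbo.Orders (UserId, OrderDate);

    -- The nightly rebuild job step (SQL 2005 syntax):
    ALTER INDEX IX_Orders_Scatter ON dbo.Orders REBUILD;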

But it was amazing. The boss was asking if we needed to buy RAM, a new server, etc...

I had him give me a few hours to look at the problem and BAM! When he saw the server at the next peak load, his jaw dropped; he was stunned.

Before, the CPU was pegged at 99% and RAM was maxed out; after, the CPU was at 10% and RAM was under 1 GB (on a server with 3 GB of RAM and a dual-core HT CPU).

We are still running the same db / server and it's almost a year later. Works great.

I think I also set the SQL 2005 option READ_COMMITTED_SNAPSHOT on.

This reduces locks by keeping row versions (a version chain) in the SQL engine, so readers don't block on writers.
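Turning it on is one statement (database name made up; the database needs to be free of other active connections when you switch it):

    -- Readers see the last committed version of a row instead of blocking.
    ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;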

All this said, 10 users should not really lock up a transaction. You may first want to take a good, hard look at your layout and logic.

I would suspect that a good review may find a way to move that to more like 50-100 users before a lockup. We did not have our problem until we had at least 40-60 users online at one time, all processing orders; when we tested with 1-10 users the problem was not seen.

Use the SQL tools to get a total time for one transaction, and get some info on the estimated execution plans.

How long does it take to do the update or insert?
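For a quick look, something like this (the proc name is just a placeholder for whatever does your save):

    -- Per-statement elapsed/CPU time and logical reads for one call.
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    DECLARE @g UNIQUEIDENTIFIER;
    SET @g = NEWID();
    EXEC dbo.SaveOrderGraph @OrderGuid = @g;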

Do you submit them all in one SQL batch / stored proc, or one at a time with System.Transactions?

I'd put all the update/insert logic in one SQL proc with one transaction if at all possible.

Or gather up all the data and have a class that runs them all in one method inside one transaction block / Try / Using construct.
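A skeleton of that shape - one proc, one short transaction, parent then children, with all names illustrative:

    CREATE PROCEDURE dbo.SaveOrderGraph
        @OrderGuid UNIQUEIDENTIFIER
    AS
    BEGIN
        SET NOCOUNT ON;
        SET XACT_ABORT ON;  -- any runtime error rolls back the whole transaction

        BEGIN TRANSACTION;

        -- fixed write order: header first, then detail
        UPDATE dbo.OrderHeader
        SET ModifiedOn = GETDATE()
        WHERE OrderGuid = @OrderGuid;

        UPDATE dbo.OrderDetail
        SET ModifiedOn = GETDATE()
        WHERE OrderGuid = @OrderGuid;

        COMMIT TRANSACTION;
    END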

GaltSalt replied on Wednesday, September 27, 2006

Thanks so much - you've given me much to chew on...so it will take me a bit to respond.

This transaction is very large - I have to get an idea of how long it takes SQL to complete this task. That's just one of the steps I have to take to understand what is really wrong. The SQL Server (in both a clustered and a non-clustered SQL environment) is still sleeping through all of this at 5% utilization.

I believe this to be an insert issue - but even so, I have to break this down to see what's up. It could be a combination of inserts and updates: we have a very large object and its children being saved, and some components of the object may already exist when writing a new base object, as it's a 'mash-up', if you will, of many data components.

Lakusha replied on Wednesday, September 27, 2006

You can't have a deadlock on inserts alone (unless it's a resource deadlock, but that can't be your case).

Short tips:

1) Transactions must be SHORT, i.e. milliseconds.

2) Prepare your data, open the transaction, write, and commit. Don't start mixing reads & writes in the same transaction.

3) Always write in the same order (T1 -> T2, etc.; see the sketch after this list).

4) Make sure all your tables have a clustered index. Usually, a clustered PK is a good rule.

5) Make sure all your important queries are making use of your indexes. If you read inside a transaction and it causes a table scan, then of course you will deadlock.
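To illustrate point 3 with throwaway names (the same scratch dbo.T1 / dbo.T2 tables sketched earlier in the thread): if every transaction writes T1 before T2, no one can be holding T2 while waiting for T1.

    -- Every writer takes the tables in the same order: T1, then T2.
    DECLARE @Id INT;
    SET @Id = 1;

    BEGIN TRANSACTION;
    UPDATE dbo.T1 SET Val = 0 WHERE Id = @Id;  -- always first
    UPDATE dbo.T2 SET Val = 0 WHERE Id = @Id;  -- always second
    COMMIT TRANSACTION;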

hope it helps

L

Lakusha replied on Wednesday, September 27, 2006

PS: if you are running SQL 2005, just use Profiler. It will catch the deadlock chain and even display a nice graph of the two colliding statements (and exactly which object is at the center of it all: an index, etc.).
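And if you'd rather not keep a trace running, the deadlock details can also be written to the SQL Server error log with a trace flag:

    -- Server-wide: log each deadlock graph to the error log.
    -- (Trace flag 1222 is new in SQL 2005; SQL 2000 has the older 1204.)
    DBCC TRACEON (1222, -1);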

Copyright (c) Marimer LLC