Tags: sql, .net, entity-framework, concurrency, deadlock

Optimistic concurrency across the board or just the table which is causing deadlocks?


I am working with a legacy EF system which is giving me some deadlocks on updates. It uses pessimistic concurrency. I want to move away from that, so I have changed the table that was being updated to support optimistic concurrency. I am wondering whether I need to implement optimistic concurrency across all tables, or just the ones which I believe are involved in the deadlocks (the table being updated)? Any ideas are appreciated.
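
For reference, EF configures optimistic concurrency per entity rather than globally, so a single table can opt in. A minimal sketch, assuming EF6 code-first and a hypothetical Order entity (the [Timestamp] attribute maps to a SQL Server rowversion column that EF then checks on every UPDATE and DELETE):

    using System.ComponentModel.DataAnnotations;

    // Hypothetical entity; only this table carries a concurrency token.
    public class Order
    {
        public int Id { get; set; }
        public string Status { get; set; }

        // EF treats a [Timestamp] rowversion column as a concurrency token:
        // updates add "WHERE RowVersion = @originalValue" to the statement
        // and throw DbUpdateConcurrencyException when no row matches.
        [Timestamp]
        public byte[] RowVersion { get; set; }
    }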

Regards, Ruskin


Solution

  • You did not state which RDBMS you are using, nor what kind of data you are dealing with.

    Some RDBMS support different levels of locking, and you should really know what is happening behind the scenes. If you deal with sensitive data (like I do), data integrity must always come before performance or annoying behaviour like a deadlock. This depends on the data you are storing. Sometimes one can accept the risk of small integrity breaks...

    I would never change from pessimistic locking to optimistic locking just to avoid deadlocks...

    You could run into failures that never raise an exception. Everything seems to run great, but some time later you find that your data is broken...
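
    With EF specifically, a genuine conflict only surfaces if the entity has a concurrency token and the application handles the resulting exception. A minimal sketch of such handling, assuming EF6 and hypothetical MyDbContext / Orders names (reload-and-rethrow is just one possible policy):

        using System.Data.Entity.Infrastructure;
        using System.Linq;

        // Hypothetical update routine: surfaces a concurrent change
        // instead of silently overwriting it.
        public void UpdateOrderStatus(MyDbContext db, int orderId, string newStatus)
        {
            var order = db.Orders.Find(orderId);
            order.Status = newStatus;

            try
            {
                db.SaveChanges();
            }
            catch (DbUpdateConcurrencyException ex)
            {
                // Refresh the entry with current database values so the
                // caller sees the conflict rather than a lost update.
                ex.Entries.Single().Reload();
                throw; // or merge the values and retry, depending on the data
            }
        }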

    My advice: use some kind of profiler to understand what is really happening. Most deadlocks have their root cause in bad structures or access patterns. Most deadlocks are easy to avoid if you change the process, but this, especially in connection with legacy software, might be difficult.
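
    If changing the access order is not feasible, one common way to "change the process" is to keep transactions short and retry the deadlock victim. A minimal sketch, assuming SQL Server, where the victim's session receives a SqlException with error number 1205 (with EF the SqlException may arrive wrapped in a DbUpdateException, so inner exceptions may need inspecting):

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        public static class DeadlockRetry
        {
            // Re-runs a unit of work when SQL Server chooses this
            // session as the deadlock victim (error number 1205).
            public static void Execute(Action unitOfWork, int maxAttempts = 3)
            {
                for (var attempt = 1; ; attempt++)
                {
                    try
                    {
                        unitOfWork();
                        return;
                    }
                    catch (SqlException ex) when (ex.Number == 1205 && attempt < maxAttempts)
                    {
                        // Brief backoff before retrying the whole transaction.
                        Thread.Sleep(100 * attempt);
                    }
                }
            }
        }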

    It could be that your legacy software calls code inside your RDBMS (e.g. stored procedures). If so, you might be able to change something there without touching the calling software...

    Good luck!