I have an IIS module that services multiple requests at the same time. At the entry of each request I want to log the last time the service was accessed:
update log set last_access = getdate()
But with the level of concurrency I can sometimes have, I'm afraid I could easily get conflicts.
As my log doesn't need millisecond precision (it will be consulted to check whether the server has been accessed in the last X minutes, not milliseconds), if my logging runs into a conflict it can simply skip that entry and do nothing.
My initial thought was to wrap my logging in a try-catch block and just ignore conflicts:
begin try
    update log set last_access = getdate()
end try
begin catch
    -- We do nothing and ignore that another request has logged an access at the same time
end catch
But is this correct? In case of a conflict, will it automatically jump to the catch section, do nothing, and let my backend continue servicing the rest of the request normally?
I'm afraid it could instead wait for the lock to be released and only continue after a timeout. Is there a better way to update a value while telling SQL Server to simply skip the update if another session has already done it at the same time?
PS: this log update will run in its own transaction, while any SQL operations needed to service the request will run in separate transactions.
Thank you.
But with the level of concurrency I can sometimes have, I'm afraid I could easily get conflicts.
As my log doesn't need a precision of milliseconds
Given those two things together, I believe you're overthinking this and don't really need to worry about it.
It is indeed possible to get a concurrency or race condition situation such that a later access manages to push its update through before an earlier one. However, we're talking about at most a few milliseconds either way. Since that level of precision doesn't matter, you'll be okay.
The other concern is locking or blocking caused by multiple updates at the same time. But again, it looks like you're overthinking this. Relational databases are generally good at handling this situation internally. They'll lock appropriately to make sure both updates happen, atomically and performantly, with the later update winning to set the final value for a given row/column. This kind of thing is one of the big reasons relational databases were created in the first place.
It is possible to create deadlocks, where, e.g., transaction A is waiting on a lock held by transaction B, which is waiting on a lock held by transaction A, and they're stuck forever. Longer, more complicated chains/rings are possible as well. But that generally involves transactions that lock more than one thing, and in different orders. That isn't happening here.
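If you still want a belt-and-braces guard against ever waiting on a lock, one pattern is SET LOCK_TIMEOUT with a try-catch. This is a sketch, not something you strictly need here; it assumes the log table from the question, and relies on SQL Server raising error 1222 ("Lock request time out period exceeded") when the timeout expires:

SET LOCK_TIMEOUT 0;  -- fail immediately instead of waiting if a lock can't be acquired

BEGIN TRY
    UPDATE log SET last_access = getdate();
END TRY
BEGIN CATCH
    -- Swallow only the lock-timeout error; re-raise anything else
    IF ERROR_NUMBER() <> 1222
        THROW;
END CATCH

SET LOCK_TIMEOUT -1;  -- restore the default (wait indefinitely)

That way the logging statement either succeeds or gives up instantly, and your request never stalls on it.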
The one thing I might do is make sure you have a WHERE clause, even if there's only one row in the table.
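For example (the service_id column and @service_id parameter are placeholders for whatever actually identifies the row in your table):

UPDATE log
SET last_access = getdate()
WHERE service_id = @service_id;

Scoping the update to a single row keeps the lock footprint minimal and avoids surprises if the table ever grows beyond one row.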
Looking back at the comments posted since I started writing the original answer, I see multiple rows are in play after all. That does create a (tiny!) potential for deadlocks. There are four things you can do to help; pick any ONE of them and, applied consistently, it will guarantee you avoid deadlocks.