Our SQL Server 2014 database is set to READ_COMMITTED_SNAPSHOT.
We use MSMQ and distributed transactions (MassTransit 2.10).
In one part of our system we read a message from the queue, make database updates and then publish a new message to the queue (all under a single transaction).
We have found a situation where it seems that the updates are not yet committed when the next message is processed (it reads from the same table that the first part updates), even though I would expect that message to only be on the queue once the database has been updated. When we query the table later, the updated data is there as expected. This only happens rarely, and only under high load.
Simplified version of our code:
// code that processes message 1
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, new TransactionOptions() { Timeout = TimeSpan.FromMinutes(30), IsolationLevel = IsolationLevel.ReadCommitted }))
{
    MethodThatUpdatesTableX();
    MethodThatCreatesMessage2();
    scope.Complete();
}

// message picked up from MSMQ and then (this runs on a different thread):
// code that processes message 2
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, new TransactionOptions() { Timeout = TimeSpan.FromMinutes(30), IsolationLevel = IsolationLevel.ReadCommitted }))
{
    MethodThatReadsFromTableX(); // here it seems that the changes made in MethodThatUpdatesTableX() are sometimes (though rarely) not visible
    // other stuff
}
So here is my understanding:
- When the scope is disposed, the changes to table X are committed and the new message is published to the queue.
- When MethodThatReadsFromTableX() reads from table X, I would expect the changes to be there (the second session should not be created before the first one has completed, because otherwise it would not have been able to pick up the message from the queue).
Is my expectation correct? What could the issue be?
I'll give you a really, really short answer.
Use Serializable as your isolation level. It's the only way to guarantee that readers will be blocked by writers.
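In your simplified code that just means changing the isolation level on the scope. A sketch, reusing the method names from your question:

// message 1 handler: same scope as before, but Serializable instead of ReadCommitted
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew,
    new TransactionOptions()
    {
        Timeout = TimeSpan.FromMinutes(30),
        IsolationLevel = IsolationLevel.Serializable // readers of the updated rows are blocked until this commits
    }))
{
    MethodThatUpdatesTableX();
    MethodThatCreatesMessage2();
    scope.Complete();
}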
The rest...
SQL Server is very good at row-level locking, unless you are inserting sequentially into a clustered index. In that case, there are real performance benefits to just inserting and handling the duplicate key error -- roughly an order of magnitude.
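A rough sketch of that pattern (the table, columns, connectionString, and messageId here are made up for illustration; 2627 and 2601 are SQL Server's duplicate key error numbers; assumes System.Data.SqlClient):

// hypothetical table dbo.ProcessedMessages with a unique clustered key on MessageId
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var command = new SqlCommand(
        "INSERT INTO dbo.ProcessedMessages (MessageId, ProcessedAt) VALUES (@id, SYSUTCDATETIME())",
        connection))
    {
        command.Parameters.AddWithValue("@id", messageId);
        try
        {
            command.ExecuteNonQuery(); // just insert, no existence check first
        }
        catch (SqlException ex) when (ex.Number == 2627 || ex.Number == 2601)
        {
            // 2627 = unique constraint violation, 2601 = unique index violation:
            // the row is already there, so treat it as a duplicate and move on
        }
    }
}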
You also need to make sure that the readers that might be seeing stale data are also reading under Serializable -- otherwise you will still end up with cases where a published message is consumed before the database transaction has committed. I've seen this in high-volume production systems, and it does happen (in my case with RabbitMQ rather than MSMQ -- RabbitMQ dispatches much more quickly than MSMQ).
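For your message 2 handler that would look roughly like this (again just a sketch using the names from your question):

// message 2 handler: reading under Serializable blocks until the writer's transaction has committed
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew,
    new TransactionOptions()
    {
        Timeout = TimeSpan.FromMinutes(30),
        IsolationLevel = IsolationLevel.Serializable
    }))
{
    MethodThatReadsFromTableX(); // waits for (rather than missing) the changes from MethodThatUpdatesTableX()
    // other stuff
    scope.Complete();
}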