Tags: c#, .net, multithreading, deadlock, race-condition

How does a race condition occur?


I have a single item pool implementation:

public class OneItemPool<T>
{
    private T? _item;
    private object _lockForChangeValue = new();
    private object _lockForPostItem = new();
    private object _lockForGetItem = new();

    public void PostItem(T item)
    {
        lock (_lockForPostItem)
        {
            // waiting for _item to be taken
            while (WeHaveItem())
            {
            }
            
            lock (_lockForChangeValue)
            {
                _item = item;
            }
        }
    }

    public T GetItem()
    {
        lock (_lockForGetItem)
        {
            // waiting for _item to appear
            while (!WeHaveItem())
            {
            }
            
            lock (_lockForChangeValue)
            {
                var item = _item;
                _item = default(T);
                return item;
            }
        }
    }
    
    bool WeHaveItem()
    {
        return !object.ReferenceEquals(_item, default(T));
    }
}

As you can see, I don't have a lock inside bool WeHaveItem(); I know object.ReferenceEquals is not an atomic operation.

I know this can happen:

  • Thread A calls WeHaveItem() and sees that _item is null
  • Meanwhile, Thread B changes the value of _item
  • Thread A continues execution under the assumption that _item is null, while it's no longer true.

Suppose _item is not null and Thread A calls PostItem, then checks WeHaveItem and waits for it to become null,
while Thread B calls GetItem and sets _item to null.
Thread A may get the stale result _item != null,
but this shouldn't affect Thread A at all, because its next operation will be to recheck WeHaveItem.

I think there won't be any problems. But real tests show that this is not the case.

I set up my test like this:

var pool = new OneItemPool<string>();
while (true)
{
    var threads = new List<Thread>();
    for (int i = 0; i < 3; i++)
    {
        var k = i;
        var thr1 = new Thread(() =>
        {
            pool.PostItem(k.ToString());
            pool.GetItem();
        });
        threads.Add(thr1);
    }


    foreach (var thread in threads)
    {
        thread.Start();
    } 
    foreach (var thread in threads)
    {
        thread.Join();
    }
}

One PostItem for each GetItem, but the test ends up deadlocked inside PostItem while _item is not null.
This suggests that two GetItem calls completed for a single PostItem.

If I add a lock inside WeHaveItem, the deadlock goes away.

I've been thinking about this for two days and can't imagine how this is possible.


Solution

  • I can't see how you could get a deadlock in the code above, but it is certainly possible to get an infinite delay.

    The reason is that locks provide two things: Mutual exclusion and memory visibility. Mutual exclusion means that only one thread can lock each object at a time (e.g. _lockForGetItem). Memory visibility means that when thread A writes a value while holding the lock on an object, the change will be visible to thread B when thread B acquires the lock.

    The issue is that you have to lock the same object in both threads to get this synchronization guarantee. When thread A updates _item while holding _lockForPostItem and _lockForChangeValue, there is no guarantee that thread B will ever see the update, since WeHaveItem() doesn't lock either of these objects.

    If this is the issue you are seeing, it also explains why it helps to lock _lockForPostItem or _lockForChangeValue inside WeHaveItem().

    An alternative solution is to declare _item volatile, which ensures that writes to _item are immediately visible to reads from other threads. (Note that C# only permits volatile fields of reference type or certain primitive types, so this requires constraining T with where T : class.)
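    To make the first fix concrete, here is a sketch of the pool with the one change described above: WeHaveItem() acquires _lockForChangeValue, the same lock under which _item is written, so acquiring it guarantees the reading thread observes the latest write. The class name OneItemPoolFixed is made up for this sketch; everything else follows the question's code.

    ```csharp
    // Sketch: the question's pool, changed only so that WeHaveItem()
    // reads _item under the same lock that guards writes to it.
    public class OneItemPoolFixed<T>
    {
        private T? _item;
        private readonly object _lockForChangeValue = new();
        private readonly object _lockForPostItem = new();
        private readonly object _lockForGetItem = new();

        public void PostItem(T item)
        {
            lock (_lockForPostItem)
            {
                // Still a busy-wait, but each iteration now re-reads _item
                // under the lock, so the consumer's update becomes visible.
                while (WeHaveItem())
                {
                }

                lock (_lockForChangeValue)
                {
                    _item = item;
                }
            }
        }

        public T GetItem()
        {
            lock (_lockForGetItem)
            {
                while (!WeHaveItem())
                {
                }

                lock (_lockForChangeValue)
                {
                    var item = _item!;
                    _item = default;
                    return item;
                }
            }
        }

        private bool WeHaveItem()
        {
            // Acquiring the lock both re-reads _item and synchronizes
            // with the thread that last wrote it under this lock.
            lock (_lockForChangeValue)
            {
                return !ReferenceEquals(_item, default(T));
            }
        }
    }
    ```

    This keeps the question's ReferenceEquals check, which only behaves as a null check when T is a reference type; for value types it would always report "have item" because the boxed operands are never reference-equal.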
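    And a sketch of the volatile alternative, under the assumption (not stated in the question) that T can be constrained to a reference type. Every read of the volatile field sees the most recent write, so no lock is needed around the check itself; the class name OneItemVolatilePool is again made up.

    ```csharp
    // Sketch of the volatile alternative. The "where T : class" constraint
    // is required because C# permits volatile fields only of reference type
    // or certain primitive types.
    public class OneItemVolatilePool<T> where T : class
    {
        private volatile T? _item;
        private readonly object _lockForPostItem = new();
        private readonly object _lockForGetItem = new();

        public void PostItem(T item)
        {
            lock (_lockForPostItem)
            {
                // Each read of the volatile field observes the latest write,
                // so this loop exits once the consumer sets _item to null.
                while (_item is not null)
                {
                }
                _item = item;
            }
        }

        public T GetItem()
        {
            lock (_lockForGetItem)
            {
                while (_item is null)
                {
                }
                var item = _item!;
                _item = null;
                return item;
            }
        }
    }
    ```

    In production code one would avoid spinning entirely, for example with Monitor.Wait/Monitor.Pulse or a SemaphoreSlim, but that is beyond the point of this answer.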