
ConcurrentBag vs Custom Thread Safe List


I have a .NET 4.5 single-instance WCF service that maintains a collection of items in a list with simultaneous concurrent readers and writers, but far more readers than writers.

I am currently deciding whether to use the BCL ConcurrentBag&lt;T&gt; or my own custom generic ThreadSafeList class (which implements IList&lt;T&gt; and encapsulates a BCL ReaderWriterLockSlim, since that lock is better suited to multiple concurrent readers).
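For context, a minimal sketch of such a wrapper might look like the following. This is an illustrative assumption, not the questioner's actual class, and it implements only the members the test below exercises (Add, Count, and enumeration) rather than the full IList&lt;T&gt; surface:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Threading;

// Hypothetical sketch of a list guarded by ReaderWriterLockSlim:
// many readers may hold the read lock at once; writers are exclusive.
public class ThreadSafeList<T> : IEnumerable<T>
{
    private readonly List<T> _inner = new List<T>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public void Add(T item)
    {
        _lock.EnterWriteLock();
        try { _inner.Add(item); }
        finally { _lock.ExitWriteLock(); }
    }

    public int Count
    {
        get
        {
            _lock.EnterReadLock();
            try { return _inner.Count; }
            finally { _lock.ExitReadLock(); }
        }
    }

    // Snapshot enumeration: copy under the read lock so a LINQ query
    // never observes the list mid-mutation.
    public IEnumerator<T> GetEnumerator()
    {
        _lock.EnterReadLock();
        try { return new List<T>(_inner).GetEnumerator(); }
        finally { _lock.ExitReadLock(); }
    }

    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}
```

The snapshot-copy in GetEnumerator trades allocation cost for safety; an alternative design holds the read lock for the entire enumeration, which is cheaper but risks long lock-hold times under slow consumers.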

I have found notable performance differences when testing these implementations, simulating a concurrent scenario of 1M readers (each running a simple Sum LINQ query) and only 100 writers (each adding an item to the list).

For my performance test I have a list of tasks:

List<Task> tasks = new List<Task>();

Test 1: If I create 1M reader tasks followed by 100 writer tasks using the following code:

tasks.AddRange(Enumerable.Range(0, 1000000).Select(n => new Task(() => { temp.Where(t => t < 1000).Sum(); })).ToArray());
tasks.AddRange(Enumerable.Range(0, 100).Select(n => new Task(() => { temp.Add(n); })).ToArray());

I get the following timing results:

  • ConcurrentBag: ~300ms
  • ThreadSafeList: ~520ms

Test 2: However, if I create the 1M reader tasks mixed with the 100 writer tasks, so that the list of tasks to be executed could be {Reader, Reader, Writer, Reader, Reader, Writer, ...}:

foreach (var item in Enumerable.Range(0, 1000000))
{
    tasks.Add(new Task(() => temp.Where(t => t < 1000).Sum()));
    if (item % 10000 == 0)
        tasks.Add(new Task(() => temp.Add(item)));
}

I get the following timing results:

  • ConcurrentBag: ~4000ms
  • ThreadSafeList: ~800ms

My code for getting the execution time for each test is as follows:

Stopwatch watch = new Stopwatch();
watch.Start();
tasks.ForEach(task => task.Start());
Task.WaitAll(tasks.ToArray());
watch.Stop();
Console.WriteLine("Time: {0}ms", watch.Elapsed.TotalMilliseconds);

ConcurrentBag is faster than my custom implementation in Test 1 but slower in Test 2, so I'm finding it difficult to decide which one to use.

Q1. Why are the results so different when the only thing I’m changing is the ordering of the tasks within the list?

Q2. Is there a better way to change my test to make it more fair?


Solution

  • Why are the results so different when the only thing I’m changing is the ordering of the tasks within the list?

    My best guess is that in Test #1 the reads do almost no work, because there is nothing to read: the tasks are started in list order, so the order of execution is roughly:

    1. Read from the shared collection 1M times (while it is still empty) and calculate the sum
    2. Write to the shared collection 100 times

    Your Test #2 mixes the reads and writes, and this, I am guessing, is why you get a different result.

    Is there a better way to change my test to make it more fair?

    Before you start the tasks, try randomising the order of the task list. It may be difficult to reproduce exactly the same result each run, but you will get closer to real-world usage.
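    As a sketch of that suggestion, a Fisher-Yates shuffle could be applied to the task list before starting it. The helper class and method names here are hypothetical, not part of the original answer:

    ```csharp
    using System;
    using System.Collections.Generic;

    public static class ListShuffle
    {
        // Fisher-Yates shuffle: each permutation is equally likely,
        // so reader and writer tasks interleave unpredictably.
        public static List<T> Shuffle<T>(List<T> items, Random rng)
        {
            for (int i = items.Count - 1; i > 0; i--)
            {
                int j = rng.Next(i + 1);   // pick a slot in [0, i]
                T swap = items[i];
                items[i] = items[j];
                items[j] = swap;
            }
            return items;
        }
    }
    ```

    Applied to the question's test, this would be `ListShuffle.Shuffle(tasks, new Random())` just before `tasks.ForEach(task => task.Start())`; seeding the Random makes a given shuffle reproducible across runs.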

    Ultimately, your question is about the difference between optimistic concurrency (the Concurrent* classes) and pessimistic concurrency (based on a lock). As a rule of thumb, when the chance of simultaneous access to a shared resource is low, prefer optimistic concurrency; when the chance of simultaneous access is high, prefer the pessimistic approach.
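    To make that distinction concrete, here is an illustrative sketch (not from the original answer) of the same counter update written both ways. The pessimistic version always takes a lock; the optimistic version assumes no conflict and retries via compare-and-swap only when another thread interfered:

    ```csharp
    using System.Threading;

    public class Counter
    {
        private readonly object _gate = new object();
        private int _total;

        public int Total { get { return Volatile.Read(ref _total); } }

        // Pessimistic: every caller pays for the lock, even uncontended.
        public void AddPessimistic(int value)
        {
            lock (_gate) { _total += value; }
        }

        // Optimistic: read, compute, then publish only if _total is
        // unchanged; on conflict, loop and try again.
        public void AddOptimistic(int value)
        {
            int seen, updated;
            do
            {
                seen = Volatile.Read(ref _total);
                updated = seen + value;
            } while (Interlocked.CompareExchange(ref _total, updated, seen) != seen);
        }
    }
    ```

    Under low contention the CAS loop almost always succeeds on the first attempt and avoids lock overhead; under heavy contention the retries pile up, which mirrors why ConcurrentBag degraded in the interleaved Test 2.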