I have code that copies integers into buffer1, then from buffer1 to buffer2, and finally consumes all data from buffer2. It processes 1000 values in about 15 seconds, which is a long time for such a small input. When I remove Task.Delay(1).Wait() from the second task, t2, it completes quickly. My question: is the slowdown caused by the two threads competing for the lock, or is my code faulty in some other way?
var source = Enumerable.Range(0, 1000).ToList();
var buffer1 = new BlockingCollection<int>(100);
var buffer2 = new BlockingCollection<int>(100);

// Producer: copy the source values into buffer1.
var t1 = Task.Run(delegate
{
    foreach (var i in source)
    {
        buffer1.Add(i);
    }
    buffer1.CompleteAdding();
}).ConfigureAwait(false);

// Relay: move values from buffer1 to buffer2.
var t2 = Task.Run(delegate
{
    foreach (var i in buffer1.GetConsumingEnumerable())
    {
        buffer2.Add(i);
        //Task.Delay(1).Wait();
    }
    buffer2.CompleteAdding();
}).ConfigureAwait(false);

// Consumer: drain buffer2 and compare with the original sequence.
CollectionAssert.AreEqual(source.ToList(), buffer2.GetConsumingEnumerable().ToList());
An update: this is just demo code. I block for 1 millisecond to simulate some computation that takes place in my real code. I chose 1 millisecond because it is such a small amount; I can hardly believe that removing it makes the code complete almost immediately.
The system timer has roughly 15 ms resolution, so a 1 ms delay is rounded up to the next timer tick of about 15 ms. That is why 1000 items take about 15 seconds. (Actually, I'm a bit surprised; on average each wait should only take about 7.5 ms, i.e. half a tick. Anyway.)
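You can check the effective granularity on your own machine. The following measurement loop is just a sketch (not part of the original code) that averages the real duration of Task.Delay(1).Wait():

var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 100; i++)
{
    Task.Delay(1).Wait(); // requests 1 ms, but completes on the next timer tick
}
sw.Stop();
Console.WriteLine($"Average wait: {sw.ElapsedMilliseconds / 100.0} ms"); // ~15 ms with the default timer resolution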
Simulating work with sleep is a common mistake.
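If the goal is to simulate CPU-bound work per item, a busy spin bounded by a Stopwatch avoids the timer granularity entirely. This is only a sketch of one possible stand-in (SimulateWork is a hypothetical helper, not the asker's real workload):

// Sketch: simulate roughly 1 ms of CPU-bound work per item without relying on the system timer.
static void SimulateWork(double milliseconds)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    while (sw.Elapsed.TotalMilliseconds < milliseconds)
    {
        System.Threading.Thread.SpinWait(100); // burn a few cycles, keeping the thread busy like real computation
    }
}

Inside t2's loop you would then call SimulateWork(1) instead of Task.Delay(1).Wait().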