I've written three different ways of computing the sum of an array of integers; however, I'm getting a different result with the third method.
Initialization:
int n = 100;
int[] mArray = new int[n];
for (int i = 0; i < mArray.Length; i++)
mArray[i] = 1;
First:
int sum1 = mArray.Sum();
Console.WriteLine("sum1 " + sum1);
Second:
int sum2 = 0;
for (int i = 0; i < mArray.Length; i++)
sum2 += mArray[i];
Console.WriteLine("sum2 " + sum2);
Third:
int sum3 = 0;
Parallel.ForEach(mArray, item =>
{
sum3 += item;
});
Console.WriteLine("sum3 " + sum3);
For n = 100, all three approaches give the same output.
However, when n is increased (e.g., n = 30000), the third approach gives incorrect results, and a different value on each run.
NB: I've also tested the approaches using ConcurrentBag, which is a thread-safe collection, so I don't think this is an overflow issue. The code was tested on a Windows 10 x64 machine (Intel Core i7 @ 3.30 GHz).
It would be interesting to understand why Parallel.ForEach behaves differently.
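For context on why the third approach misbehaves: `sum3 += item` is not a single operation but a read-modify-write sequence (load `sum3`, add, store), so two threads can read the same old value and one increment is lost. A minimal sketch of a fix using `Interlocked.Add` (my suggestion, not code from the original post):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        int n = 30000;
        int[] mArray = new int[n];
        for (int i = 0; i < mArray.Length; i++)
            mArray[i] = 1;

        int sum3 = 0;
        Parallel.ForEach(mArray, item =>
        {
            // Interlocked.Add performs the read-modify-write atomically,
            // so no increments are lost between threads.
            Interlocked.Add(ref sum3, item);
        });

        Console.WriteLine("sum3 " + sum3); // always prints "sum3 30000"
    }
}
```

This is correct, but every iteration still contends on the same memory location, so it can be slower than accumulating per-thread partial sums and combining them at the end.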
I've tried Nick's solution and it fixed the problem; however, there was a performance issue when using

lock (objLock) { sum3 += item; }

directly in the Parallel.ForEach body, as shown in the figure below.
Fortunately, using the parallel aggregation pattern (thread-local state), as documented in .NET, solved the issue. Here is the code:
object locker = new object();
double sum4 = 0;
Parallel.ForEach(mArray,
    () => 0.0,                                        // Initialize the thread-local value.
    (item, state, localResult) => localResult + item, // Body delegate: returns the new local total.
    localTotal =>                                     // Add each thread's local total
    {
        lock (locker) sum4 += localTotal;             // to the master value.
    });
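As an aside (my addition, not part of the original solution), PLINQ performs the same partitioned aggregation in one line; the `sum5` name below is hypothetical:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int[] mArray = new int[30000];
        for (int i = 0; i < mArray.Length; i++)
            mArray[i] = 1;

        // AsParallel().Sum() partitions the array, sums each partition
        // on its own worker thread, and safely combines the partial sums.
        int sum5 = mArray.AsParallel().Sum();
        Console.WriteLine("sum5 " + sum5); // prints "sum5 30000"
    }
}
```

Like the localInit/localFinally overload, this avoids a lock per element, so it stays fast as n grows.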