I have this code:
#pragma omp declare reduction(* : scalar : omp_out *= omp_in)

scalar like = 1;
vector<scalar>& bigsum;

#pragma omp parallel for // reduction(* : like)
for (int m = 0; m < M - 1; m++)
    like *= bigsum[m];
I am trying to get a consistent result, but I don't: there is a race condition on like. How should I fix it? As you can see in the code, I have declared my own reduction,
but it doesn't work either. Is there any trick for a custom scalar type and std::vector that I should be aware of?
The scalar type here is my own wrapper around a floating-point value that applies log() to each double it is built from. I created it because the code performs so many double-to-double multiplications that the result underflows toward zero after a few of them; by working with log(), multiplication becomes addition, and so on.
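For reference, a minimal sketch of what I mean by such a log-domain wrapper (this is illustrative, not my actual class; the member names are made up):

```cpp
#include <cmath>

// Hypothetical log-domain "scalar": stores log(x) instead of x, so a long
// chain of multiplications of small values becomes a sum of logs and does
// not underflow to zero.
struct scalar {
    double logv;                                        // log of the value
    scalar(double x = 1.0) : logv(std::log(x)) {}       // scalar(1) -> logv == 0
    scalar& operator*=(const scalar& o) {               // multiply == add logs
        logv += o.logv;
        return *this;
    }
    double value() const { return std::exp(logv); }     // back to linear space
};
```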
One way to get a consistent answer would be this:
#pragma omp parallel
{
    scalar loc = 1;
    #pragma omp for
    for (std::size_t m = 1; m < M; m++)
    {
        _flm[m - 1] = Qratio[m - 1] * k1 + k2;
        bigsum[m - 1] = kappa0omegaproduct + kappa[1] * _flm[m - 1];
        #pragma omp critical (reduce_product)
        {
            like *= bigsum[m - 1];
        }
    }
}
This gives correct results, but it is painfully slow: almost 8 times slower than the serial version on my 8-core machine, because the critical section runs on every iteration and serializes the loop.
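The usual way to keep the critical section from dominating is to multiply into the thread-private loc inside the loop and enter the critical section only once per thread. A sketch of that pattern, with plain double standing in for my scalar type and a made-up function name:

```cpp
#include <cstddef>
#include <vector>

// Manual reduction sketch: each thread accumulates into a private partial
// product `loc`, then the critical section executes once per thread instead
// of once per loop iteration.
double product_reduce(const std::vector<double>& bigsum) {
    double like = 1.0;
    const std::size_t M = bigsum.size() + 1;
    #pragma omp parallel
    {
        double loc = 1.0;                    // thread-private partial product
        #pragma omp for
        for (std::size_t m = 1; m < M; m++)
            loc *= bigsum[m - 1];            // no synchronization in the loop
        #pragma omp critical(reduce_product)
        like *= loc;                         // one combine per thread
    }
    return like;
}
```

This is essentially what the reduction clause does for you under the hood.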
After three days I found an answer myself, along with an explanation for what I observed.
I have created my own reduction function like this:
#pragma omp declare reduction(* : scalar : omp_out *= omp_in) initializer(omp_priv(1))
The trick was the omp_priv clause: apparently initializing the private reduction value matters, something I learned here. Presumably, without the initializer each thread's private copy does not start at the multiplicative identity, so the combined result is wrong.
I then made the code much simpler by using an OpenMP for loop with the reduction clause:
#pragma omp parallel for reduction (* : like)
Very simple and clean. With this, the loop is parallelized and runs faster than the critical-section version from the question body. Unfortunately, it is still slower than the serial version; maybe that is because of the std::vector usage, or because the overloaded arithmetic operators are slow. I don't know. The code is too large to paste here in a form that would be understandable and not a pain in the neck for others to read.
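To show the final approach end to end, here is a minimal self-contained sketch using a toy log-domain scalar (names are illustrative, not my real class):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Toy log-domain scalar: scalar(1) is the multiplicative identity (logv == 0).
struct scalar {
    double logv;
    scalar(double x = 1.0) : logv(std::log(x)) {}
    scalar& operator*=(const scalar& o) { logv += o.logv; return *this; }
};

// initializer(omp_priv(1)) constructs each thread's private copy as scalar(1),
// the multiplicative identity, which is what makes the reduction correct.
#pragma omp declare reduction(* : scalar : omp_out *= omp_in) \
    initializer(omp_priv(1))

double log_product(const std::vector<double>& bigsum) {
    scalar like(1);
    #pragma omp parallel for reduction(* : like)
    for (std::ptrdiff_t m = 0; m < (std::ptrdiff_t)bigsum.size(); m++)
        like *= scalar(bigsum[m]);
    return like.logv;   // log of the full product
}
```

Compiled without OpenMP the pragmas are ignored and the function still gives the same (serial) result.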