Tags: c++, performance, parallel-processing, g++, openmp

OpenMP on for loop takes much more time than serial code


I tried parallelizing a code snippet with OpenMP, but it turns out that the program takes 25× as long to finish with OpenMP enabled. Is there anything wrong? How can I optimize it?

#include <iostream>
#include <cmath>
#include <random>
#include <chrono>
#include <cstdlib>
#include <omp.h>

using namespace std;

int main() {
        unsigned long long black_square = 1, digit_square = 13;
        //auto n = ((black_square)<<11) * static_cast<unsigned long long>(pow(digit_square,10));
        auto n = static_cast<unsigned long long>(1e9);
        srand(0);
        int tmp = 0;
        std::random_device rd;  // Will be used to obtain a seed for the random number engine
        std::mt19937 gen(rd()); // Standard mersenne_twister_engine seeded with rd()
        std::uniform_int_distribution<> distrib(1, 6);

        auto tStart = std::chrono::high_resolution_clock::now();
//#pragma omp parallel for schedule(static) reduction(+:tmp)
#pragma omp parallel for schedule(static) reduction(+:tmp) num_threads(8)
        for (unsigned long long i=0; i<n; i++) tmp = (tmp+(5==rand()%6))%static_cast<int>(1e9);
        //for (unsigned long long i=0; i<n; i++) tmp = (tmp+(5==distrib(gen)))%static_cast<int>(1e9);
        tmp%=static_cast<int>(1e9);
        auto tEnd = std::chrono::high_resolution_clock::now();

        cout << tmp << " obtained after " << n << " iterations in " << (tEnd-tStart).count()/1e9 << "s." << endl;
        return 0;
}

The code is compiled with g++ -o a.out -O3 -std=c++11 -fopenmp tmp.cpp, where g++ is version 8.5.0 20210514. The OS is RHEL 8.9, and the machine has 20 Intel Xeon CPUs at 2.593 GHz.

The serial code runs in 7.4s on average, while the parallel code runs in 180s on average. Options -O3, -O2 and -O1 give similar results. Switching to the mt19937 random generator narrows the gap significantly, but the parallel code is still much slower than the serial version. Increasing or decreasing n leads to similar results as well.


Update with results.

I tried both the array approach and the firstprivate approach from the answers/comments. They give comparable results and both achieve true parallelism. I have not yet tested whether the random sequences from each thread in the firstprivate approach are identical.

void array_approach(unsigned long long n, const int nThreads) { // also needs #include <vector>
        int tmp = 0;
        std::random_device rd;  // Will be used to obtain a seed for the random number engine
        vector<std::mt19937> rngs;
        for (int i=0; i<nThreads*64; i++) rngs.push_back(std::mt19937(rd())); // Standard mersenne_twister_engine seeded with rd()
        std::uniform_int_distribution<> distrib(1, 6);
        auto tStart = std::chrono::steady_clock::now();
#pragma omp parallel for schedule(static) reduction(+:tmp) num_threads(nThreads)
        for (unsigned long long i=0; i<n; i++) tmp = (tmp+(5==distrib(rngs[omp_get_thread_num()*64])))%static_cast<int>(1e9);
        tmp%=static_cast<int>(1e9);
        auto tEnd = std::chrono::steady_clock::now();
        cout << tmp << " obtained after " << n << " iterations in " << (tEnd-tStart).count()/1e9 << "s." << endl;
}

void private_approach(unsigned long long n, const int nThreads) {
        int tmp = 0;
        std::random_device rd;  // Will be used to obtain a seed for the random number engine
        std::mt19937 rng(rd());
        std::uniform_int_distribution<> distrib(1, 6);
        auto tStart = std::chrono::steady_clock::now();
#pragma omp parallel for schedule(static) reduction(+:tmp) firstprivate(rng) num_threads(nThreads)
        for (unsigned long long i=0; i<n; i++) tmp = (tmp+(5==distrib(rng)))%static_cast<int>(1e9);
        tmp%=static_cast<int>(1e9);
        auto tEnd = std::chrono::steady_clock::now();
        cout << tmp << " obtained after " << n << " iterations in " << (tEnd-tStart).count()/1e9 << "s." << endl;
}

Solution

  • The rand() function is not required to be thread safe, so it isn't safe to call it from multiple threads at once, as you are doing.

    glibc's version of rand() is thread safe, but it achieves this by wrapping the entire function in a mutex, so only one thread can call rand() at a time. Since your code does very little outside of the rand() call, virtually all of the execution time is spent inside rand().

    So the parallel version is not really parallel: the threads take turns, executing one at a time on each call to rand(). That alone gives it no advantage over a single thread, but it is actually worse, because the threads have to fight over who gets the mutex, wake up and go back to sleep after each call, and move the PRNG state between each CPU core's cache. So it ends up much slower than the single-threaded version.

    What you should do is create multiple PRNG instances: keep an array of gen objects, one for each thread, and have each thread use its own PRNG. Make sure the objects are far enough apart in memory that they don't share a cache line, so the PRNG state does not need to bounce between CPU caches.