Tags: c++, linux, multithreading, gcc, atomic

Thread-safe data exchange between threads / shared memory in C++ on Linux


I got a "bit" confused: in production we have two processes communicating via shared memory; part of the data they exchange is a long and a bool. Access to this data is not synchronized. It's been working fine for a long time and still is. I know that modifying a value is not atomic, but considering that these values are modified/accessed millions of times, shouldn't this have failed by now?
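
For context, the production setup is roughly of this shape (a minimal sketch with invented names and layout, assuming POSIX shared memory; this is not the actual production code):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

// Hypothetical layout of the shared segment -- not the real production struct.
struct SharedData
{
    long value;
    bool flag;
};

// Both processes call this to map the same named segment.
SharedData* attachSharedData()
{
    int fd = shm_open("/example_segment", O_CREAT | O_RDWR, 0600);
    if (fd == -1)
        return 0;
    ftruncate(fd, sizeof(SharedData));
    void* p = mmap(0, sizeof(SharedData), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return p == MAP_FAILED ? 0 : static_cast<SharedData*>(p);
}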

Here is a sample piece of code, which exchanges a number between two threads:

#include <pthread.h>
#include <xmmintrin.h>

typedef unsigned long long uint64;
const uint64 ITERATIONS = 500LL * 1000LL * 1000LL;

//volatile uint64 s1 = 0;
//volatile uint64 s2 = 0;
uint64 s1 = 0;
uint64 s2 = 0;

void* run(void*)
{
    register uint64 value = s2;
    while (true)
    {
        // wait until the main thread has bumped s1
        while (value == s1)
        {
            _mm_pause(); // busy spin
        }
        //value = __sync_add_and_fetch(&s2, 1);
        value = ++s2;
    }
}

int main(int argc, char* argv[])
{
    pthread_t threads[1];
    pthread_create(&threads[0], NULL, run, NULL);

    register uint64 value = s1;
    while (s1 < ITERATIONS)
    {
        // wait until the other thread has bumped s2
        while (s2 != value)
        {
            _mm_pause(); // busy spin
        }
        //value = __sync_add_and_fetch(&s1, 1);
        value = ++s1;
    }
}

As you can see, I have commented out a couple of things:

//volatile uint64 s1 = 0;

and

//value = __sync_add_and_fetch(&s1, 1);

__sync_add_and_fetch atomically increments a variable.
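
As an illustration (a minimal sketch, with invented names), with two threads incrementing a shared counter, the atomic builtin always yields the exact total, while a plain increment typically loses updates:

#include <pthread.h>

unsigned long long counter = 0;

void* worker(void*)
{
    for (int i = 0; i < 1000000; ++i)
    {
        __sync_add_and_fetch(&counter, 1); // atomic read-modify-write
        //++counter;                       // plain load/add/store -- increments can get lost
    }
    return NULL;
}

int main()
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    // With the builtin, counter is always exactly 2000000;
    // with the plain increment it usually comes up short.
    return 0;
}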

I know this is not very scientific, but running it a few times without the sync functions, it works totally fine. Furthermore, if I measure both versions, with sync and without, they run at the same speed. How come __sync_add_and_fetch is not adding any overhead?

My guess is that the compiler is guaranteeing atomicity for these operations, and that is why I don't see a problem in production. But that still doesn't explain why __sync_add_and_fetch adds no overhead (even when running in debug).

Some more details about my environment: Ubuntu 10.04, GCC 4.4.3, Intel i5 multicore CPU.

The production environment is similar; it just runs on more powerful CPUs and on CentOS.

Thanks for your help.


Solution

  • Basically you're asking why you see no difference in behavior/performance between

    s2++;
    

    and

    __sync_add_and_fetch(&s2, 1);
    

    Well, if you go and look at the actual code generated by the compiler in these two cases, you will see that there IS a difference -- the s2++ version will have a simple INC instruction (or possibly an ADD), while the __sync version will have a LOCK prefix on that instruction (a small sketch of how to check this is at the end of this answer).

    So why does it work without the LOCK prefix? Well, while in general the LOCK prefix is required for this to work on ANY x86-based system, it turns out it's not needed for yours. With Intel Core based chips, the LOCK is only needed to synchronize between different CPUs over the bus. When running on a single CPU (even with multiple cores), it does its internal synchronization without it.

    So why do you see no slowdown in the __sync case? Well, a Core i5 is a 'limited' chip in that it only supports single-socket systems, so you can't have multiple CPUs. That means the LOCK is never needed, and in fact the CPU just ignores it completely. The code is 1 byte larger, which could have an impact if you were instruction-fetch or decode limited, but you're not, so you see no difference.

    If you were to run on a multi-socket Xeon system, you would see a (small) slowdown for the LOCK prefix, and could also see (rare) failures in the non-LOCK version.
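
    As a quick way to check the generated code yourself, something like the following works (a sketch; the file name is made up, and the exact instructions vary with compiler version and flags):

    // incr.cpp -- compile with "g++ -O2 -S incr.cpp" and read incr.s
    typedef unsigned long long uint64;
    uint64 s2 = 0;

    void plain()     { ++s2; }                          // typically: addq $1, s2(%rip)  (or incq)
    void with_sync() { __sync_add_and_fetch(&s2, 1); }  // typically: lock addq $1, s2(%rip)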