multithreading, locking, thread-safety, multicore, shared-memory

Should access to a shared resource be locked by a parent thread before spawning a child thread that accesses it?


If I have the following pseudocode:

sharedVariable = somevalue;
CreateThread(threadWhichUsesSharedVariable);

Is it theoretically possible for a multicore CPU to execute code in threadWhichUsesSharedVariable() which reads the value of sharedVariable before the parent thread writes to it? For full theoretical avoidance of even the remote possibility of a race condition, should the code look like this instead:

sharedVariableMutex.lock();
sharedVariable = somevalue;
sharedVariableMutex.unlock();
CreateThread(threadWhichUsesSharedVariable);

Basically I want to know whether spawning a thread is guaranteed to act as a synchronization point (a memory barrier) between the parent's prior writes and the child's reads.

I know that the overhead of thread creation probably takes enough time that this would never matter in practice, but the perfectionist in me is afraid of the theoretical race condition. In extreme conditions, where some threads or cores might be severely lagged and others are running fast and efficiently, I can imagine that it might be remotely possible for the order of execution (or memory access) to be reversed unless there was a lock.


Solution

  • I would say that your pseudocode is safe on any correctly functioning multiprocessor system. Thread-creation functions are specified to synchronize memory: POSIX lists pthread_create() among the functions that synchronize memory with respect to other threads, and in C++11 the completion of the std::thread constructor synchronizes with the start of the new thread's function, so the parent's write to sharedVariable happens-before any read in the child. The compiler likewise cannot move the call to CreateThread() ahead of the assignment to sharedVariable unless it can prove doing so is unobservable; your single-threaded code is guaranteed to behave as if it executed in program order. Any system that "time warps" the thread creation ahead of the variable assignment is seriously broken.

    Declaring sharedVariable as volatile does nothing useful here: in C++, volatile provides no inter-thread ordering or visibility guarantees, and the happens-before relationship from thread creation already covers this case.
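    A minimal sketch of the guarantee described above, assuming C++11's std::thread rather than the Win32 CreateThread in the question: the write before the constructor is visible to the new thread with no mutex, no atomic, and no volatile.

    ```cpp
    #include <cassert>
    #include <iostream>
    #include <thread>

    int sharedVariable = 0;  // plain int: no mutex, no atomic, no volatile

    int main() {
        sharedVariable = 42;  // write in the parent thread...
        // The completion of the std::thread constructor synchronizes with
        // the start of the new thread's function, so the write above
        // happens-before the read below: the assert cannot fire.
        std::thread t([] {
            assert(sharedVariable == 42);
            std::cout << "ok" << std::endl;
        });
        t.join();
        return 0;
    }
    ```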