java · multithreading · scheduling · volatile · java-memory-model

Does the java memory model prevent interleaving writes from different threads between volatile variables?


I was reading an interesting article about memory barriers and their role in JVM concurrency, and the example implementation of Dekker's algorithm caught my attention:

     volatile boolean intentFirst = false;
     volatile boolean intentSecond = false;
     volatile int turn = 0;


     // code run by first thread    // code run by second thread

 1    intentFirst = true;           intentSecond = true;
 2
 3    while (intentSecond) {        while (intentFirst) {     // volatile read
 4      if (turn != 0) {               if (turn != 1) {       // volatile read
 5        intentFirst = false;               intentSecond = false;
 6        while (turn != 0) {}               while (turn != 1) {}
 7        intentFirst = true;                intentSecond = true;
 8      }                              }
 9    }                             }
10    criticalSection();            criticalSection();
11
12    turn = 1;                     turn = 0;                 // volatile write
13    intentFirst = false;          intentSecond = false;     // volatile write

The article mentions that since volatiles are sequentially consistent, the critical section is bound to be executed by only one thread at a time, which checks out with the happens-before guarantee. However, does that still hold if the two threads continue to execute the same logic in a loop? My understanding is that on a subsequent iteration the OS scheduler may pause the second thread just before line 7 is executed, let the first thread enter the critical section, and then resume the second thread so that both threads end up in the critical section simultaneously. Is my understanding correct, and is this example given with the idea that the code is executed only once? If so, I'd assume the answer to my question is "no", since volatiles only provide memory visibility guarantees.


Solution

  • My understanding is that on a subsequent iteration the OS scheduler may pause the second thread just before line 7 is executed, let the first thread enter the critical section, and then resume the second thread so that both threads end up in the critical section simultaneously.

    This will not happen. If the second thread is suspended while the first thread is in the critical section, intentFirst is still true, since it is only set to false after the first thread leaves the critical section. So when the second thread wakes up, it sets intentSecond to true, but it will remain stuck in the while (intentFirst) loop and will not enter the critical section until the first thread has exited.
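    This holds even when both threads run the algorithm repeatedly. Below is a runnable sketch of that looped scenario; the class name, the iteration count, and the non-volatile counter test are my own additions, not part of the article. Each thread acquires the Dekker lock many times and increments a plain (deliberately non-atomic, non-volatile) counter inside the critical section. If mutual exclusion ever failed, increments could be lost and the final count would fall short of 2 × ITERATIONS.

    ```java
    public class DekkerLoopTest {
        static volatile boolean intentFirst = false;
        static volatile boolean intentSecond = false;
        static volatile int turn = 0;
        // Plain field: only safe because the critical section is exclusive
        // and the volatile writes/reads establish happens-before edges.
        static int counter = 0;
        static final int ITERATIONS = 100_000;

        public static void main(String[] args) throws InterruptedException {
            Thread first = new Thread(() -> {
                for (int i = 0; i < ITERATIONS; i++) {
                    intentFirst = true;
                    while (intentSecond) {
                        if (turn != 0) {
                            intentFirst = false;
                            while (turn != 0) { } // busy-wait for our turn
                            intentFirst = true;
                        }
                    }
                    counter++;            // critical section
                    turn = 1;             // volatile write (release)
                    intentFirst = false;  // volatile write (release)
                }
            });
            Thread second = new Thread(() -> {
                for (int i = 0; i < ITERATIONS; i++) {
                    intentSecond = true;
                    while (intentFirst) {
                        if (turn != 1) {
                            intentSecond = false;
                            while (turn != 1) { } // busy-wait for our turn
                            intentSecond = true;
                        }
                    }
                    counter++;            // critical section
                    turn = 0;
                    intentSecond = false;
                }
            });
            first.start();
            second.start();
            first.join();
            second.join();
            System.out.println(counter); // 2 * ITERATIONS if exclusion held
        }
    }
    ```

    Because every entry to the critical section passes through a volatile read of the other thread's intent flag, and every exit ends with a volatile write, the increments are ordered by happens-before and none are lost, no matter where the scheduler preempts either thread.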

    IMHO the most interesting part of Dekker's algorithm with respect to happens-before is here:

    intentSecond = true;  // volatile write
    while (intentFirst)   // volatile read
    

    Modern processors have store buffers, which can cause an older store to be reordered with a newer load to a different address. That is why a [StoreLoad] barrier is needed between the earlier store and the later load to make them sequentially consistent.
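    This store-then-load pattern is exactly the classic "store buffer" litmus test. Here is a small sketch of it (class and field names are mine): with volatile x and y, the JMM forbids the outcome r1 == 0 && r2 == 0, which plain fields would permit on hardware that lets stores sit in a store buffer past later loads.

    ```java
    public class StoreLoadLitmus {
        static volatile int x, y;  // volatile forbids store/load reordering
        static int r1, r2;         // results; read by main after join()

        public static void main(String[] args) throws InterruptedException {
            int trials = 2_000;
            for (int i = 0; i < trials; i++) {
                x = 0;
                y = 0;
                Thread t1 = new Thread(() -> { x = 1; r1 = y; });
                Thread t2 = new Thread(() -> { y = 1; r2 = x; });
                t1.start();
                t2.start();
                t1.join();
                t2.join();
                // Sequential consistency of volatiles: at least one thread
                // must observe the other's store.
                if (r1 == 0 && r2 == 0) {
                    throw new AssertionError("forbidden store/load reordering observed");
                }
            }
            System.out.println("no forbidden outcome in " + trials + " trials");
        }
    }
    ```

    If x and y were plain ints, the r1 == 0 && r2 == 0 outcome would be allowed (and is readily observable on x86 with enough trials), which is precisely the reordering the [StoreLoad] barrier emitted for volatiles rules out.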