Tags: java, multithreading, synchronized, java-memory-model, happens-before

wait() and notify() JMM semantics


I have a very particular question I couldn't find an answer to.

As we know, on entering a synchronized block a thread re-reads all the shared (non-local) variables in its scope. For example, on certain underlying architectures, if thread A updates an object's state in RAM, thread B entering a synchronized block will see the changes. A similar thing happens on exit from a synchronized block: the thread flushes everything in its scope to RAM so that it can be seen by other threads. These are the basic JVM visibility guarantees, and the happens-before rules exist to enforce them.

However, it is not semantically clear whether code using wait() or notify() also does all of this: after all, it does not explicitly enter or leave a synchronized block.

The questions are these:

  1. Does the JVM ensure that a thread's changes are visible to other threads when it enters wait()?
  2. Does the JVM ensure that changes made in other threads are visible when wait() returns?
  3. Does a thread ensure that its changes are visible to other threads on notify()?

Solution

  • As we know, on entering a synchronized block a thread re-reads all the variables in its scope; that is, if thread A updates the object's state in RAM, thread B entering a synchronized block will see the changes.

    The first part of that is incorrect. Not all variables are re-read; local variables definitely won't be. They don't need to be re-read because they are never visible to another thread.

    The correct statement is that the compiler will ensure that shared variables written by thread A before exiting the block will be visible to thread B after it has entered the block, provided that A and B synchronize on the same mutex object, and provided that A (or some other thread) did not overwrite them in the meantime. (A sketch of this guarantee follows below.)

    There are no explicit memory semantics associated with a notify or notifyAll. However, a wait will cause the mutex to be released and (typically) re-acquired. The release and re-acquisition have associated happens-before relations with some other thread.
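    To illustrate the corrected statement above, here is a minimal sketch (the class and field names are made up for this example):

```java
class SharedState {
    private final Object mutex = new Object();
    private int value = 0;            // shared, non-volatile field

    // Called by thread A.
    void update(int v) {
        synchronized (mutex) {
            value = v;                // write made while holding the mutex
        }                             // release of mutex ...
    }

    // Called by thread B.
    int read() {
        synchronized (mutex) {        // ... happens-before this acquire of mutex,
            return value;             // so B sees A's write (unless another
        }                             // thread overwrote it in the meantime)
    }
}
```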


    Could you please elaborate on the exact semantics associated with releasing and acquiring a lock? Are they the same as those for entering and leaving a synchronized block?

    Let's assume we have just two threads, A and B, and a single mutex L. Let's also assume that we start with neither thread holding the mutex.

    Also remember that wait and notify can only be called by a thread that holds the lock.

    1. Thread A acquires L.
    2. Thread A calls L.wait().
      • Thread A is placed on the wait queue for L, and L is released.
    3. Thread B acquires L.
    4. Thread B calls L.notify().
      • Thread A is moved to the queue of threads waiting to acquire L.
    5. Thread B releases L.
    6. Thread A re-acquires the lock, and the L.wait() call returns.

    The happens-before edges that matter here are between 2 and 3, and then between 5 and 6.
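    A sketch of that sequence in code (the class, field, and method names are illustrative, not from the question), with the steps above marked in comments:

```java
class Handshake {
    private final Object L = new Object();
    private boolean ready = false;          // shared state guarded by L

    // Thread A
    void await() throws InterruptedException {
        synchronized (L) {                  // step 1: A acquires L
            while (!ready) {                // loop guards against spurious wakeups
                L.wait();                   // step 2: A releases L and waits
            }                               // step 6: wait() has returned; A holds L again
            // Everything B wrote before releasing L (step 5) is visible here.
        }
    }

    // Thread B
    void signal() {
        synchronized (L) {                  // step 3: B acquires L
            ready = true;
            L.notify();                     // step 4: A is moved to the queue of threads waiting for L
        }                                   // step 5: B releases L
    }
}
```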

    If there are multiple threads involved, you can analyze the behavior by chaining the happens-before relations. But there is only a direct HB between the thread that releases the mutex and the next thread to acquire it ... by whatever means it does so.


    So, the answers to your questions are:

    1) & 2) Yes, assuming that the other thread is using synchronized correctly.
    3) No. The visibility point is when the mutex is released by the thread that called notify().
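    To illustrate answer 3 with a sketch (the names here are illustrative, not from the question): a write performed after notify() but before leaving the synchronized block is still visible to the woken thread, because the waiter cannot proceed until the notifier releases the mutex.

```java
class NotifyVisibility {
    private final Object L = new Object();
    private boolean done = false;
    private int sharedValue = 0;

    // Thread B (the notifier)
    void notifier() {
        synchronized (L) {
            done = true;
            L.notify();             // no memory semantics of its own
            sharedValue = 42;       // written AFTER notify(), but BEFORE releasing L
        }                           // <- this release is the visibility point
    }

    // Thread A (the waiter)
    int waiter() throws InterruptedException {
        synchronized (L) {
            while (!done) {
                L.wait();           // returns only after re-acquiring L, i.e. after B's release
            }
            return sharedValue;     // guaranteed to observe 42
        }
    }
}
```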


    Note that memory barriers, flushes and so on are implementation details. In fact, a compiler is free to implement the semantics of happens-before any way that it wants to, including (hypothetically) optimizing away memory flushes if they are not necessary.

    It is best (IMO) to ignore these implementation details and only think about the happens-before relationships.

    For more information on happens-before, see the Java Language Specification, §17.4.5 (Happens-before Order).