I've got this legacy pseudocode:
```java
public class Wrapper {
    private volatile Sink sink;

    public synchronized void flushSink() {
        if (sink != null) {
            synchronized (sink) {
                sink.flush();
            }
        }
    }

    public void close() throws IOException {
        var sink = this.sink;
        if (sink != null) {
            sink.receivedLast();
        }
    }
}
```
My question is about the nested `synchronized` block. As far as I understand, that block is redundant here: if two threads call `flushSink()` concurrently, only one of them can be inside the method at a time, because `flushSink()` is synchronized on `this`. As a result the code could be simplified to
```java
public class Wrapper {
    private volatile Sink sink;

    public synchronized void flushSink() {
        if (sink != null) {
            sink.flush();
        }
    }
}
```
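To illustrate the premise behind this simplification, here is a minimal runnable sketch (the `Sink` below is a hypothetical stand-in that records overlap, not the real class) showing that `synchronized` instance methods of the same object are mutually exclusive, so `sink.flush()` is never executed by two threads at once:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Main {
    static class Sink {
        private final AtomicInteger inFlush = new AtomicInteger();
        volatile int maxConcurrent = 0;

        void flush() {
            // Track how many threads are inside flush() at the same moment.
            int now = inFlush.incrementAndGet();
            if (now > maxConcurrent) maxConcurrent = now;
            try { Thread.sleep(1); } catch (InterruptedException ignored) {}
            inFlush.decrementAndGet();
        }
    }

    static class Wrapper {
        final Sink sink = new Sink();

        public synchronized void flushSink() {
            if (sink != null) {
                sink.flush();   // guarded only by the lock on `this`
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Wrapper w = new Wrapper();
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 50; j++) w.flushSink();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // All calls were serialized by the monitor of `w`.
        System.out.println("max concurrent flushes: " + w.sink.maxConcurrent);
    }
}
```

Since every call path to `flush()` goes through the monitor of the `Wrapper` instance, the observed maximum concurrency is 1.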
with a possible further step of eliminating the racy double read of the field:
```java
public class Wrapper {
    private volatile Sink sink;

    public synchronized void flushSink() {
        var sink = this.sink;
        if (sink != null) {
            sink.flush();
        }
    }
}
```
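For context on why the local-variable copy matters in general (e.g. in the non-synchronized `close()` above), here is a sketch of the hazard it removes. The `raceHook` field is test scaffolding I added, not part of the original class: it deterministically simulates a concurrent writer nulling the field between the null check and the use.

```java
public class Main {
    static class Wrapper {
        volatile Runnable sink = () -> {};
        Runnable raceHook = () -> {};   // scaffolding: simulates a concurrent writer

        // Two separate reads of the volatile field: one for the check, one for the call.
        void unsafeClose() {
            if (sink != null) {
                raceHook.run();   // a concurrent writer could run here
                sink.run();       // second read may now see null -> NPE
            }
        }

        // One read copied into a local: the check and the call see the same reference.
        void safeClose() {
            Runnable sink = this.sink;
            if (sink != null) {
                raceHook.run();   // same interleaving point
                sink.run();       // still the reference we null-checked
            }
        }
    }

    public static void main(String[] args) {
        Wrapper w1 = new Wrapper();
        w1.raceHook = () -> w1.sink = null;   // the simulated concurrent write
        try {
            w1.unsafeClose();
            System.out.println("unsafeClose: no exception");
        } catch (NullPointerException e) {
            System.out.println("unsafeClose: NullPointerException");
        }

        Wrapper w2 = new Wrapper();
        w2.raceHook = () -> w2.sink = null;
        w2.safeClose();   // completes normally despite the write
        System.out.println("safeClose: ok");
    }
}
```

Note that inside a `synchronized flushSink()` the copy only matters if some code path writes `sink` without holding the same lock.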
So my question is: is this kind of refactoring correct in terms of synchronization and the JMM? And whichever way the answer goes, is there a JCStress-based test proving it?
Eventually I found open-source code that repeats the case exactly: see `java.io.PipedOutputStream`:
```java
@Override
public synchronized void flush() throws IOException {
    if (sink != null) {
        synchronized (sink) {
            sink.notifyAll();
        }
    }
}
```
The code looks exactly like the snippet I was investigating, and if I apply the same refactoring, the `PipedOutputStream`-related JDK tests fail.
The transformation is illegal because the `synchronized` modifier on the `flush()` method locks `this`, i.e. the `PipedOutputStream` object, while `synchronized (sink)` locks a different object (the connected `PipedInputStream`). These are two distinct monitors guarding different interactions, so the inner block cannot be merged into the outer one; lock coarsening only applies to adjacent regions locking the same object. Moreover, the inner block is functionally required: `sink.notifyAll()` must be called while holding `sink`'s monitor, so removing `synchronized (sink)` would make `flush()` throw `IllegalMonitorStateException` instead of waking the reader.
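The `notifyAll()` point can be demonstrated in isolation. A minimal sketch (plain `Object` standing in for the sink):

```java
public class Main {
    public static void main(String[] args) {
        Object sink = new Object();

        try {
            sink.notifyAll();   // monitor not held -> throws
        } catch (IllegalMonitorStateException e) {
            System.out.println("without lock: IllegalMonitorStateException");
        }

        synchronized (sink) {
            sink.notifyAll();   // monitor held -> legal
        }
        System.out.println("with lock: ok");
    }
}
```

This is exactly the contract documented for `Object.notifyAll()`: the calling thread must own the monitor of the object it notifies.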