An executor with the following configuration:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

ThreadPoolExecutor executor = new ThreadPoolExecutor(
        1,                          // corePoolSize
        1,                          // maximumPoolSize
        0L,                         // keepAliveTime (unused while at core size)
        TimeUnit.MILLISECONDS,      // keepAliveTime unit
        new LinkedBlockingQueue<>() // unbounded FIFO task queue
);
would keep the single worker Thread alive, while any extra tasks submitted are appended to the FIFO LinkedBlockingQueue.
According to the source code, everything that happens in this worker Thread is always sequential.
The Runnables created and inserted onto the queue would still be subject to normal memory ordering effects, so fences would still seem to be required for their loads and stores, BUT...
In practice, if everything is sequential, it seems clear to me that no race conditions will ever occur, so the only remaining issues seem to be thread-local caching of values (load hoisting) and load simplification (for double-checked reads).
BUT... if each Runnable's body appears as part of its own "unique scope"...
Runnable toExecuteA = () -> {
// code...
};
Runnable toExecuteB = () -> {
// code...
};
It seems to me that their reorderings could ONLY occur within the bounds of their "{}" bodies.
Since at compile time the eventual sequence of Runnables is not known, A and B can never be interleaved with each other; and since the same Thread executes both, the processor will ALSO not interleave the sequence.
So...
a) no need to synchronize => everything will be sequential.
b) reorderings are limited to within the bounds of each Runnable's body.
c) loads may be hoisted and simplified, so memory_order_relaxed-like fences are needed to keep each Runnable's program order.
d) since sequentiality is enforced across all Runnables, no "acquire"- or "release"-like fences are needed.
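This can be sketched with a small example (the SequentialTasks class and counter field are my own names, not anything from the question's code): within the single worker thread, toExecuteB sees toExecuteA's write without any explicit fences, and Future.get() gives the submitting thread a happens-before edge so it can read the result safely.

```java
import java.util.concurrent.*;

public class SequentialTasks {
    // Plain field, not volatile: only the single worker thread writes it
    // while tasks run, and Future.get() publishes the result to the caller.
    static int counter = 0;

    public static void main(String[] args) throws Exception {
        ExecutorService executor = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

        Runnable toExecuteA = () -> counter++; // queued first, runs first
        Runnable toExecuteB = () -> counter++; // queued second, sees A's write

        executor.submit(toExecuteA);
        Future<?> last = executor.submit(toExecuteB);

        last.get(); // happens-before edge: the worker's writes are visible here
        System.out.println(counter); // prints 2
        executor.shutdown();
    }
}
```

Note that the safety of reading counter in main rests entirely on the Future.get() call; without it, main would be racing against the worker thread.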
Now... EVEN IF the processor devirtualizes ALL the calls and manages to merge the sequences toExecuteA + toExecuteB + toExecuteC, etc. into a single uninterrupted sequence, the simplifications and omissions the processor may then perform will not change the end result that would have been produced had the event loop finished processing.
The ONLY issue, then, would be if this processor core interacts with ANOTHER core on the same processor BEFORE the last line in the sequence of Runnables is reached.
Is my observation correct??
The ONLY issue, then, would be if this processor core interacts with ANOTHER core on the same processor BEFORE the last line in the sequence of Runnables is reached.
"Core" is not a Java concept.
...Runnable stack-frames...
"Stack frame" is not a Java concept either.
The only thing that matters is that there's more than one thread. The Java Language Specification is the ultimate authority for questions about interactions between threads.
Your program (what little we can see of it) has two threads: the executor's worker thread, and the thread that created the executor. Let's call that one the "main thread."
The worker will perform a sequence of run() function calls for various objects, one after another. Interactions between those different function activations will be no different from how they would be if a single-threaded program executed the same sequence of calls. It really is going to be just one thread that does it, and if the operating system chooses to move a thread from one processor to another, it's the operating system's responsibility to ensure that that happens transparently for your program. It's nothing you need to worry about.
Ditto for interactions that happen entirely within the main thread. It'll be the same as if that thread was the only thread.
The only place where it gets interesting is when there are interactions between tasks that you submit to the executor and things done by the main thread (i.e., when the two threads access the same shared variables). In that case, it's exactly like any other program with two threads, because it is just a program with two threads. Be sure to use synchronized blocks or ReentrantLocks when accessing the shared variables, and you'll probably be fine.
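A minimal sketch of that last point (the SharedCounter class and its names are hypothetical, just for illustration): both threads guard the shared variable with the same lock, so each increment is visible to the other thread.

```java
import java.util.concurrent.*;

public class SharedCounter {
    static final Object lock = new Object();
    static int shared = 0; // accessed by both threads, so it must be guarded

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        executor.submit(() -> {
            synchronized (lock) { shared++; } // worker-thread write
        });

        synchronized (lock) { shared++; }     // main-thread write, same lock

        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        synchronized (lock) {
            System.out.println(shared); // 2: both increments are visible
        }
    }
}
```

Without the synchronized blocks, the two increments would be a data race, and the main thread could legally observe a stale value of shared.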