I have an app which uses a producer/consumer pattern for processing events (2 threads total). The main (UI) thread atomically writes to a reference-type field, and a worker thread atomically reads that field. Here's the demo code:
public class MyApp
{
    private static Thread WorkerThread;      // Thread which processes the events.
    private static EventArgs QueuedEvent;    // The latest event registered by the app.
    private static EventArgs ProcessedEvent; // The latest event processed by the worker.

    public static void Main(string[] args)
    {
        // Create and start the thread.
        WorkerThread = new(new ThreadStart(DoWork));
        WorkerThread.Start();
        // Attach event
        [...]
    }

    // Event handler which is fired from the main thread.
    // The timing of this event is unpredictable;
    // it could fire very rapidly, never fire, or anything in between.
    private static void OnEvent(object sender, EventArgs e)
    {
        // Set this as the latest event.
        // It is OK if the previous event is LOST.
        QueuedEvent = e;
    }

    // Main loop for the consumer thread.
    private static void DoWork()
    {
        while (true)
        {
            // Copy QueuedEvent to a local variable so it's not overwritten.
            var argsToWorkOn = QueuedEvent;
            // For performance, don't do work if the args haven't changed.
            if (argsToWorkOn != ProcessedEvent)
            {
                // Do expensive work with argsToWorkOn.
                [...]
                // Remember that we already did work with these args.
                ProcessedEvent = argsToWorkOn;
            }
        }
    }
}
Performance in both threads is the #1 priority here; they must not block each other. Given this is the case, what is objectively the most performant way to implement thread safety here? Is it volatile, or something else? I've read arguments for and against volatile, especially with respect to modern architectures.
Use the volatile keyword:

    private static volatile EventArgs QueuedEvent;
    private static volatile EventArgs ProcessedEvent;
This is the absolute winner regarding the utilization of your CPU, and the absolute performance loser regarding your own time and sanity. You would need months of study, especially study of the source code of the .NET runtime, before the confusion goes away and you have a firm understanding of the guarantees that the volatile keyword gives you in practice. And I say in practice because it could be argued that, going by the official documentation, the .NET source code itself is incorrect: it makes assumptions that the documentation doesn't provide.
In short, the volatile keyword ensures that the CPU instructions of your code will not be reordered in a way that would allow one thread to see an EventArgs instance in a partially initialized state. It also ensures that the compiler will not emit code that reads QueuedEvent only once and then never reads it again, subsequently returning the same cached value forever. A thread that reads the volatile QueuedEvent field will see a reasonably fresh value of the field, about as fresh as it would be if you were using the lock statement or the Interlocked class.
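If the implicit semantics of the keyword feel too subtle, the same guarantees can be made explicit at each access site with Volatile.Read and Volatile.Write from System.Threading. Here is a minimal sketch of that alternative, assuming the static fields from the question (the fields themselves no longer need the volatile modifier, because every cross-thread access goes through the Volatile methods):

```csharp
using System;
using System.Threading;

public static class VolatileAccessSketch
{
    // Plain fields; acquire/release semantics are applied per access below.
    private static EventArgs QueuedEvent;
    private static EventArgs ProcessedEvent; // only touched by the worker thread

    // Producer (UI thread): publish the latest event with release semantics,
    // so the EventArgs is fully constructed before it becomes visible.
    public static void OnEvent(object sender, EventArgs e)
    {
        Volatile.Write(ref QueuedEvent, e);
    }

    // Consumer (worker thread): read with acquire semantics, so each loop
    // iteration observes a reasonably fresh value instead of a cached one.
    public static void DoWork()
    {
        while (true)
        {
            var argsToWorkOn = Volatile.Read(ref QueuedEvent);
            if (argsToWorkOn != ProcessedEvent)
            {
                // Do expensive work with argsToWorkOn.
                // ...
                ProcessedEvent = argsToWorkOn;
            }
        }
    }
}
```

Note that ProcessedEvent can stay a plain field in this sketch, since it is written and read only by the worker thread itself.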