In some high-level programming environments (Java, .NET), when accessing the same memory from multiple threads, you have to explicitly mark it as volatile or synchronized; otherwise you could read stale values from a cache, or see out-of-order values due to out-of-order execution by the CPU or other optimizations.
MRI Ruby has used native OS threads for some time now. Each of those threads executes Ruby code at some point (I assume, but am not sure), even if never truly in parallel because of the VM lock.
I guess MRI solves this stale/out-of-order values issue somehow, because there is no volatile construct in the Ruby language and I have never heard of stale-value issues.
What guarantees does the Ruby language, or MRI specifically, give regarding memory access from multiple threads? I would be extremely grateful if someone could point me to any documentation on this. Thanks!
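To make the concern concrete, here is a hedged sketch of the kind of pattern that would require volatile in Java or .NET — one thread publishes a value via a flag, another spins on the flag. The variable names are made up for illustration:

```ruby
value = nil
flag  = false

writer = Thread.new do
  value = 42
  flag  = true   # in Java, flag would need to be volatile for this to be safe
end

reader = Thread.new do
  sleep 0.001 until flag  # spin until the writer signals
  value                   # could this ever observe a stale nil?
end

writer.join
puts reader.value  # on MRI this prints 42
```

In practice this appears to always work on MRI, which is exactly what the question is asking about: is that guaranteed, or just luck?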
It sounds like your specific question is whether Ruby implicitly provides a memory barrier when switching threads, such that all caching/reordering concerns that occur at the processor level are resolved automatically.
I believe MRI does provide this, as otherwise the GVL would be pointless: why restrict execution to one thread at a time if even then threads can end up reading and writing stale data? It is difficult to find the precise place where this is provided, but I believe the entry point is RB_VM_LOCK_ENTER, which is called throughout the codebase and which ultimately calls vm_lock_enter. That function has code which strongly implies that memory barriers are in place:
// lock
rb_native_mutex_lock(&vm->ractor.sync.lock);
VM_ASSERT(vm->ractor.sync.lock_owner == NULL);
vm->ractor.sync.lock_owner = cr;

if (!no_barrier) {
    // barrier
    while (vm->ractor.sync.barrier_waiting) {
        unsigned int barrier_cnt = vm->ractor.sync.barrier_cnt;