Here is an example of a mutex lock.
#include <pthread.h>

/* statically initialized so the example is self-contained */
pthread_mutex_t count_mutex = PTHREAD_MUTEX_INITIALIZER;
long long count;

void increment_count()
{
    pthread_mutex_lock(&count_mutex);
    count = count + 1;
    pthread_mutex_unlock(&count_mutex);
}

long long get_count()
{
    long long c;

    pthread_mutex_lock(&count_mutex);
    c = count;
    pthread_mutex_unlock(&count_mutex);
    return (c);
}
The document says that "The increment_count() function uses the mutex lock simply to ensure an atomic update of the shared variable". Ok, that's fine.
I have a problem with the way it explains the use of locking in get_count(): "The get_count() function uses the mutex lock to guarantee that the 64-bit quantity count is read atomically".
If I'm not wrong, right after get_count() unlocks count_mutex, another thread can call increment_count() and make the result from get_count() incorrect. That's inevitable, right? Then why not just do this?
long long get_count()
{
    return count;
}
The risk is not a wrong value, but an incoherent value. That is, getting a value that count never held at all.
Suppose count goes from -1 to 0, and the reader gets the high 32 bits while count is still -1 and the low 32 bits after it becomes 0. The reader sees 0xffffffff00000000, which is -2^32. That is significantly different from any value count has ever held.
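To make the tearing concrete, here is a small standalone sketch (my own, not from the guide) that models how a 32-bit platform could fetch the two halves of count on opposite sides of the -1 to 0 transition and compose exactly that impossible value:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int64_t before = -1;   /* count just before the increment: 0xffffffffffffffff */
    int64_t after  = 0;    /* count just after the increment */

    /* A 32-bit platform may load the two 32-bit halves separately.
       Suppose the reader fetches the high half before the increment
       and the low half after it. */
    uint32_t high = (uint32_t)((uint64_t)before >> 32);         /* 0xffffffff */
    uint32_t low  = (uint32_t)((uint64_t)after & 0xffffffffu);  /* 0x00000000 */

    int64_t torn = (int64_t)(((uint64_t)high << 32) | low);

    /* Prints 0xffffffff00000000 = -4294967296, i.e. -2^32:
       a value count never held. */
    printf("torn read: 0x%016llx = %lld\n",
           (unsigned long long)torn, (long long)torn);
    return 0;
}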
Things get even worse on some platforms. One problem is multi-core cache consistency: locking operations give you not only mutually exclusive critical sections, but also memory barriers that force cache lines to be made consistent between the CPUs' caches.
Without taking the lock in your thread, you may literally never see a change another thread made to a memory location.
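To illustrate that point (again my own sketch, not from the guide, and wait_for_count() is a hypothetical helper): the reader below takes the same mutex the writer uses, and it is that lock/unlock pair, acting as a memory barrier, that guarantees the writer's update to count becomes visible here. Replace the locked read with a bare read of count and nothing obliges the compiler or the caches to ever show this thread the new value.

#include <pthread.h>
#include <sched.h>

extern pthread_mutex_t count_mutex;
extern long long count;

/* Spin until count reaches at least target.  The lock/unlock on each
   iteration is not just mutual exclusion: it is also what guarantees
   that another thread's increment becomes visible to this thread. */
void wait_for_count(long long target)
{
    long long c;

    for (;;) {
        pthread_mutex_lock(&count_mutex);
        c = count;
        pthread_mutex_unlock(&count_mutex);
        if (c >= target)
            return;
        sched_yield();   /* be polite while polling */
    }
}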