I have some threads that write a resource and others that read it, but pthread_rwlock causes a lot of context switches, so I came up with a way to avoid it. I'm not sure whether it is safe, though.
This is the code:
sig_atomic_t slot = 0;

struct resource {
    sig_atomic_t in_use;  /* reader counter; non-zero while the slot is in use */
    .....
} xxx[2];

int read_thread()
{
    int i = slot;         /* snapshot slot so it cannot change under us */
    xxx[i].in_use++;
    read(xxx[i]);
    xxx[i].in_use--;
}
int write_thread()
{
    mutex_lock;                       /* mutex between write threads */
    if (slot == 0) {
        while (xxx[1].in_use != 0);   /* wait for the last reader of slot 1 */
        clear(xxx[1]);
        write(xxx[1]);
        slot = 1;
    } else if (slot == 1) {
        while (xxx[0].in_use != 0);
        clear(xxx[0]);
        write(xxx[0]);
        slot = 0;
    }
    mutex_unlock;
}
Will that work? The cost is twice the storage and three atomic variables. Thanks a lot!
Your strategy is to have writers write to a different slot than the one readers are reading from, and to switch the reading slot number after a write is completed. However, you will have a race:
slot   reader             writer1         writer2
----   ------             -------         -------
  0                       mutex_lock
       i = 0
                          ...
  1                       slot = 1
                          mutex_unlock    mutex_lock
                                          ...
                                          clear(xxx[0])
       xxx[0].in_use++
       read(xxx[0])                       write(xxx[0])
In general, though, this strategy could lead to starvation of writers (that is, a writer may spin forever).
However, if you are willing to tolerate that, it would be safer to let xxx[] be an array of 2 pointers to resource. Let the reader always read from xxx[0], and let the writers contend for updates on xxx[1]. When a writer is finished updating xxx[1], it uses CAS to swap xxx[0] and xxx[1].
struct resource {
    sig_atomic_t in_use;  /* reader counter; non-zero while this copy is in use */
    sig_atomic_t writer;  /* set while a writer is recycling this copy */
    .....
} *xxx[2];

void read_thread()
{
    struct resource *p = xxx[0];
    p->in_use++;
    while (p->writer) {   /* a writer claimed this copy: back off and */
        p->in_use--;      /* retry with whatever xxx[0] points to now */
        p = xxx[0];
        p->in_use++;
    }
    read(*p);
    p->in_use--;
}
void write_thread()
{
    struct resource *p;

    mutex_lock;                      /* mutex between write threads           */
    xxx[1]->writer = 1;              /* turn away readers that pick up xxx[1] */
    while (xxx[1]->in_use != 0);     /* wait for the last reader of xxx[1]    */
    clear(xxx[1]);
    write(xxx[1]);
    xxx[1] = CAS(&xxx[0], p = xxx[0], xxx[1]);  /* publish the new copy; the old
                                                   copy becomes the next xxx[1] */
    assert(p == xxx[1]);             /* only writers change xxx[0], so the CAS
                                        cannot fail                            */
    xxx[0]->writer = 0;
    mutex_unlock;
}
If you want to avoid writer starvation but still want the performance of spinlocks, you are looking at implementing your own reader/writer lock using spinlocks instead of mutex locks. A Google search for "read write spinlock implementation" pointed to this page, which I found to be an interesting read.
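Purely as an illustration (this is not the code from that page), a reader/writer spinlock with a simple writer-preference rule can be sketched with C11 atomics roughly like this:

#include <stdatomic.h>

/* Sketch of a writer-preference reader/writer spinlock:
 *   lock == -1  -> held by a writer
 *   lock ==  0  -> free
 *   lock  >  0  -> number of active readers
 *   wwait >  0  -> writers are waiting, so new readers back off
 */
typedef struct {
    atomic_int lock;
    atomic_int wwait;
} rwspin_t;

#define RWSPIN_INIT { 0, 0 }

static void rwspin_read_lock(rwspin_t *l)
{
    for (;;) {
        while (atomic_load(&l->wwait) > 0)   /* let waiting writers go first */
            ;
        int v = atomic_load(&l->lock);
        if (v >= 0 && atomic_compare_exchange_weak(&l->lock, &v, v + 1))
            return;
    }
}

static void rwspin_read_unlock(rwspin_t *l)
{
    atomic_fetch_sub(&l->lock, 1);
}

static void rwspin_write_lock(rwspin_t *l)
{
    atomic_fetch_add(&l->wwait, 1);          /* announce a waiting writer */
    for (;;) {
        int expected = 0;
        if (atomic_compare_exchange_weak(&l->lock, &expected, -1))
            break;
    }
    atomic_fetch_sub(&l->wwait, 1);
}

static void rwspin_write_unlock(rwspin_t *l)
{
    atomic_store(&l->lock, 0);
}

Whether this actually beats pthread_rwlock depends on how long your critical sections are; spinning only pays off when they are short.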