Due to a race condition, I need to lock our datastore during both read & write ops, and according to the docs, this is achieved by using transactions. After implementing it as directed, the race condition didn't go away.
After debugging/verifying other parts of the code, I wrote a simple function to test the concurrent behavior (simplified for brevity):
const { Datastore } = require("@google-cloud/datastore");

const ds = new Datastore();
const key = ds.key(["some_kind", "some_id"]);

function main() {
  // Deliberately not awaited, so both transactions run concurrently.
  console.log("call 1");
  test(1);
  console.log("call 2");
  test(2);
}

async function test(n) {
  const transaction = ds.transaction();
  await transaction.run();
  console.log("inside transaction", n);
  const res = await transaction.get(key);
  console.log("got token", n);
  transaction.save({ key: key, data: res[0] });
  console.log("called save", n);
  await new Promise(resolve => setTimeout(resolve, 2 * 1000));
  console.log("slept", n);
  await transaction.commit();
  console.log("committed transaction", n);
}

main();
Running this, I get:
call 1
call 2
inside transaction 2
inside transaction 1
got token 2
called save 2
got token 1
called save 1
slept 2
slept 1
committed transaction 2
committed transaction 1
I was instead expecting something like this, where the first process to acquire the lock via the await transaction.run() call would delay the other processes requesting it:
call 1
call 2
inside transaction 2
inside transaction 1
got token 2
called save 2
slept 2
committed transaction 2
got token 1
called save 1
slept 1
committed transaction 1
Am I misinterpreting the docs regarding how locking works in Datastore? Or is there something wrong with my implementation?
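For reference, main() never awaits test(), so the two transactions run concurrently by design. A stripped-down sketch of that fire-and-forget interleaving, with plain Node and no Datastore involved:

```javascript
const order = [];

async function step(n) {
  order.push(`start ${n}`);
  // Yield to the event loop, as any awaited I/O would.
  await new Promise(resolve => setTimeout(resolve, 0));
  order.push(`end ${n}`);
}

// Like main() above: neither call is awaited, so they interleave.
const done = Promise.all([step(1), step(2)]);
done.then(() => console.log(order.join("\n")));
// prints: start 1, start 2, end 1, end 2
```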
The project uses:
This is working as intended. Transaction 2 runs until it gets blocked fetching the shared entity. Then, once the server has committed transaction 1 (somewhere between the slept 1 and committed transaction 1 log lines), the response to transaction 2's get is sent and processed.
That is to say, transaction 1 is fully committed before transaction 2 gets any data for the entity.
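One way to picture this is a per-key lock that is acquired on the first read and released at commit. The sketch below is a toy model of that behavior, not the client library's actual implementation; the KeyLock class is hypothetical:

```javascript
// Toy model: acquire() resolves once the previous holder has
// released, mimicking Datastore blocking a get() on a contended
// entity until the conflicting transaction commits.
class KeyLock {
  constructor() {
    this.tail = Promise.resolve();
  }
  // Resolves with a release() function once the lock is free.
  acquire() {
    let release;
    const next = new Promise(resolve => { release = resolve; });
    const acquired = this.tail.then(() => release);
    this.tail = next;
    return acquired;
  }
}

const lock = new KeyLock();
const log = [];

async function test(n) {
  const release = await lock.acquire(); // stands in for transaction.get(key)
  log.push(`got token ${n}`);
  await new Promise(resolve => setTimeout(resolve, 50)); // the "work"
  log.push(`committed transaction ${n}`);
  release(); // stands in for transaction.commit()
}

const done = Promise.all([test(1), test(2)]);
done.then(() => console.log(log.join("\n")));
// prints: got token 1, committed transaction 1,
//         got token 2, committed transaction 2
```

Even though both calls start concurrently, transaction 2's "get" only completes after transaction 1 has committed, which is the serialization the answer describes.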