Tags: amazon-web-services, transactions, amazon-dynamodb, acid

DynamoDB ACID transaction for single record


I have a DynamoDB table "My_Table" with an index on "lockStatus":

 {
   my_pk: string,
   lockStatus: string,
   useful_data: string,
 }

Is it possible for two different threads to execute the update code below on the same record?

Essentially, I want exactly one thread to have access to any given record's "useful_data". To do this, I'm "locking" the record via lockStatus while the thread is working with the item. What I'm afraid of is that two threads execute this code at the same time: both find the same record based on the "ConditionExpression" and both lock the same record.

const AWS = require('aws-sdk');
const client = new AWS.DynamoDB.DocumentClient();
return await client.update({
        TableName: 'My_Table',
        Key: { my_pk: id },  // primary key of the record to lock
        UpdateExpression: 'set lockStatus = :status_locked',
        ConditionExpression: 'lockStatus = :status_available',  // only lock a record that is still AVAILABLE
        ExpressionAttributeValues: {
            ':status_locked': 'LOCKED',
            ':status_available': 'AVAILABLE',
        },
        ReturnValues: 'ALL_NEW',
    }).promise();

It seems like I could avoid this problem by using TransactWriteItems, but can I get away with a simple update for my simple scenario?
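For reference, here is roughly what the same lock would look like with the DocumentClient's transactWrite. A sketch only: the Key value is a placeholder, since the record's id would have to come from somewhere else.

const AWS = require('aws-sdk');
const client = new AWS.DynamoDB.DocumentClient();

// A one-item transaction carrying the same conditional lock.
await client.transactWrite({
    TransactItems: [{
        Update: {
            TableName: 'My_Table',
            Key: { my_pk: 'someId' },  // placeholder record id
            UpdateExpression: 'set lockStatus = :status_locked',
            ConditionExpression: 'lockStatus = :status_available',
            ExpressionAttributeValues: {
                ':status_locked': 'LOCKED',
                ':status_available': 'AVAILABLE',
            },
        },
    }],
}).promise();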

Edit

Allow me to add a little context so that things make more sense. I'm building a "library" of reusable user accounts for testing. A test would "check out" and "check in" the user account. While the account is checked out, I want to prevent all other tests from using the same account.
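For illustration, the "check in" side would just be a plain update that flips the status back to AVAILABLE. A sketch only, with checkIn and userId as illustrative names:

// Release an account once a test is done with it.
// 'client' is a DynamoDB DocumentClient instance.
async function checkIn(client, userId) {
    return client.update({
        TableName: 'My_Table',
        Key: { my_pk: userId },
        UpdateExpression: 'set lockStatus = :status_available',
        ExpressionAttributeValues: { ':status_available': 'AVAILABLE' },
    }).promise();
}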

One piece of information I neglected to mention in my original post is that I first fetch the next unlocked item from My_Table. Something like this:

const AWS = require('aws-sdk');
const client = new AWS.DynamoDB.DocumentClient();
// Fetch one AVAILABLE record from the lock-status index.
return await client.query({
        TableName: 'My_Table',
        IndexName: 'LOCK_STATUS_INDEX',
        Limit: 1,
        KeyConditionExpression: 'lockStatus = :status_available',
        ExpressionAttributeValues: { ':status_available': 'AVAILABLE' }
    }).promise();

Then in a subsequent update call, I lock the row as mentioned in my original post. As @maurice suggested, I've been looking into optimistic locking. As a matter of fact, this article describes exactly the scenario I'm facing.

However, there is a problem I will likely run into under high load. It goes something like this:

  1. 10 threads come and ask for the next unlocked record. All 10 threads get the same record. (This is very possible, since all I'm doing is Limit 1 and DynamoDB will likely return the first record it runs across, which would be the same for all threads.)
  2. All 10 threads try to update the same record with a given version number. One thread succeeds in the update and the rest fail.
  3. The 9 remaining threads retry and go back to step 1. (Worst case, more threads join; see the sketch below.)
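Put together, the loop those steps describe would look roughly like this. This is only a sketch: acquireAccount is an illustrative name, the version attribute follows the optimistic-locking suggestion, and it assumes version is projected into the index.

const AWS = require('aws-sdk');
const client = new AWS.DynamoDB.DocumentClient();

async function acquireAccount() {
    for (;;) {
        // Step 1: every contending thread may receive the same candidate row.
        const { Items } = await client.query({
            TableName: 'My_Table',
            IndexName: 'LOCK_STATUS_INDEX',
            Limit: 1,
            KeyConditionExpression: 'lockStatus = :status_available',
            ExpressionAttributeValues: { ':status_available': 'AVAILABLE' },
        }).promise();
        if (!Items.length) throw new Error('no account available');
        const candidate = Items[0];
        try {
            // Step 2: at most one conditional update on this item can succeed.
            return await client.update({
                TableName: 'My_Table',
                Key: { my_pk: candidate.my_pk },
                UpdateExpression: 'set lockStatus = :locked, version = :next',
                ConditionExpression: 'version = :current',
                ExpressionAttributeValues: {
                    ':locked': 'LOCKED',
                    ':next': candidate.version + 1,
                    ':current': candidate.version,
                },
                ReturnValues: 'ALL_NEW',
            }).promise();
        } catch (err) {
            // Step 3: the losers land here and loop back to step 1.
            if (err.code !== 'ConditionalCheckFailedException') throw err;
        }
    }
}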

I'm starting to think that my design is flawed, or perhaps DynamoDB is not the right technology. Any help with this problem would be appreciated.


Solution

  • You could use optimistic locking for this - the idea is fairly simple. You add an integer version attribute to your item and increment it on every write.

    {
      pk: 123,
      sk: 123,
      version: 0,
      randomValue: "abc"
    }
    

    When you read the item in order to update it, you note the current version number. When you write the updated item back, you also increment the version number. So if you wanted to update the random value, the item you'd write to DynamoDB would look like this:

    {
      pk: 123,
      sk: 123,
      version: 1,
      randomValue: "newValue"
    }
    

    You now add a condition expression to your UpdateItem or PutItem call to ensure it only succeeds when the current version of that item is still 0.

    That way the call fails if somebody else updated the item while you were processing it; you can then read it again, update it, and write it again. If the call succeeds, you know nobody else has messed with the item.
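    A minimal sketch of that conditional update with the DocumentClient, assuming the example item above ('My_Table' is just a placeholder table name):

    const AWS = require('aws-sdk');
    const client = new AWS.DynamoDB.DocumentClient();

    // Write the new value back only if nobody bumped the version in the
    // meantime; otherwise this throws a ConditionalCheckFailedException.
    await client.update({
        TableName: 'My_Table',
        Key: { pk: 123, sk: 123 },
        UpdateExpression: 'set randomValue = :new, version = :next',
        ConditionExpression: 'version = :current',
        ExpressionAttributeValues: {
            ':new': 'newValue',
            ':current': 0,
            ':next': 1,
        },
    }).promise();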

    I also wrote a more detailed blog post about this if you're curious: link