Tags: amazon-dynamodb, amazon-dynamodb-streams

Reading DynamoDB streams in batches


Here is a simplified version of an API that I am developing. Different users can create, update, and delete entities of some kind, and I need to periodically prepare or update one ZIP file per customer, containing the latest versions of their entities.

My idea is to store the entities in a DynamoDB table and then periodically run a batch process that reads the changes from the table's stream. My question is: how do I make sure that each subsequent batch read continues from the correct place, that is, from the first unread event?

A bit more info:

  • I need to run this outside of an AWS Lambda function.
  • I would prefer to use a DynamoDB stream rather than a Kinesis data stream for this, if possible.
  • I know I could put a timestamp in the table and just read everything after the latest timestamp I had seen (that is, not using streams at all), but this has some synchronization problems.
  • I have implemented this before using a second table that acts as a journal. While that works, it's a bit clunky, so I wanted to see whether streams could do the job instead.
  • My program is in Java, but I don't mind hints for other languages or even direct API calls.

This is kind of a follow-up question to this answer: https://stackoverflow.com/a/44010290/106350.


Solution

  • With streams you have an iterator position per shard, which you can use as a pointer to the last position you read. Note that a shard iterator expires about 15 minutes after it is issued, so you cannot persist the iterator itself between batch runs; instead, persist the sequence number of the last record you processed and request a new iterator of type AFTER_SEQUENCE_NUMBER on the next run (see the sketch below). You can read more here:

    https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html#Streams.Processing

    https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_GetShardIterator.html
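
Below is a minimal sketch of that approach using the AWS SDK for Java v2 (DynamoDbStreamsClient). The loadCheckpoint/saveCheckpoint methods are hypothetical placeholders for whatever durable store you choose; the journal table you already have would work fine for this. For brevity the sketch skips DescribeStream pagination and assumes credentials and region come from the default provider chain.

    import java.util.List;
    import java.util.Optional;

    import software.amazon.awssdk.services.dynamodb.model.DescribeStreamRequest;
    import software.amazon.awssdk.services.dynamodb.model.GetRecordsRequest;
    import software.amazon.awssdk.services.dynamodb.model.GetRecordsResponse;
    import software.amazon.awssdk.services.dynamodb.model.GetShardIteratorRequest;
    import software.amazon.awssdk.services.dynamodb.model.Record;
    import software.amazon.awssdk.services.dynamodb.model.Shard;
    import software.amazon.awssdk.services.dynamodb.model.ShardIteratorType;
    import software.amazon.awssdk.services.dynamodb.streams.DynamoDbStreamsClient;

    public class StreamBatchReader {

        private final DynamoDbStreamsClient streams = DynamoDbStreamsClient.create();

        /** Reads everything published to the stream since the last batch run. */
        public void runBatch(String streamArn) {
            // For brevity this ignores DescribeStream pagination
            // (lastEvaluatedShardId); long-lived streams can have many shards.
            List<Shard> shards = streams.describeStream(
                    DescribeStreamRequest.builder().streamArn(streamArn).build())
                .streamDescription().shards();

            for (Shard shard : shards) {
                // Hypothetical checkpoint store: the sequence number of the
                // last record processed in this shard, if any.
                Optional<String> checkpoint = loadCheckpoint(shard.shardId());

                GetShardIteratorRequest.Builder request = GetShardIteratorRequest.builder()
                    .streamArn(streamArn)
                    .shardId(shard.shardId());
                if (checkpoint.isPresent()) {
                    // Resume immediately after the last record we processed.
                    request.shardIteratorType(ShardIteratorType.AFTER_SEQUENCE_NUMBER)
                           .sequenceNumber(checkpoint.get());
                } else {
                    // A shard we have never seen: start at the oldest record.
                    request.shardIteratorType(ShardIteratorType.TRIM_HORIZON);
                }
                String iterator = streams.getShardIterator(request.build()).shardIterator();

                while (iterator != null) {
                    GetRecordsResponse page = streams.getRecords(
                        GetRecordsRequest.builder().shardIterator(iterator).build());
                    for (Record record : page.records()) {
                        process(record); // e.g. mark the customer's ZIP as stale
                        saveCheckpoint(shard.shardId(), record.dynamodb().sequenceNumber());
                    }
                    if (page.records().isEmpty()) {
                        break; // caught up with this (still open) shard for now
                    }
                    iterator = page.nextShardIterator(); // null once a closed shard is drained
                }
            }
        }

        private void process(Record record) { /* application logic */ }

        private Optional<String> loadCheckpoint(String shardId) { /* e.g. journal table */ return Optional.empty(); }

        private void saveCheckpoint(String shardId, String sequenceNumber) { /* e.g. journal table */ }
    }

Two caveats to keep in mind: stream records are retained for only 24 hours, so the batch has to run more often than that; and when a shard splits, drain the parent shard before its children (each Shard exposes a parentShardId) to preserve per-key ordering.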