I have two Lambdas with an SQS queue in between. The first Lambda's purpose is to pick product IDs from Aurora MySQL and send them to SQS. There are over 7 million product IDs. I have enabled a trigger on the queue, so when the first Lambda sends these product IDs to SQS, my second Lambda is invoked.
The issue I am facing is that my first Lambda cannot send all the product IDs to the queue in one invocation because of Lambda's time limit. In my tests, a single invocation was able to send only about 100k records to SQS. If I simply run it again, it will obviously pick the same product IDs. Even if I add a LIMIT and OFFSET to my query, I would have to change the offset manually after each invocation to pick the next 100k records, which is tedious. How can I automate this process?
Have you tried writing a small file to S3 that stores the latest index/product ID you have sent to SQS, and reading it back at the start of the next invocation of your Lambda?
Here's a rough implementation of the steps:
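This is a sketch, not a drop-in solution: the bucket, key, queue URL, table, and connection details are placeholders you'd replace with your own, and I store the checkpoint as a tiny JSON object rather than a CSV since only one value needs to persist. It also uses keyset pagination (`WHERE id > last_id`) instead of `LIMIT`/`OFFSET`, which stays fast on 7M rows.

```python
import json

BATCH_SIZE = 10        # SQS SendMessageBatch accepts at most 10 messages
PAGE_SIZE = 100_000    # rows per invocation -- tune to fit your timeout

# Hypothetical resource names; substitute your own.
CHECKPOINT_BUCKET = "my-pipeline-state"
CHECKPOINT_KEY = "product-id-checkpoint.json"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/product-ids"


def chunk(items, size):
    """Split a list into sublists of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]


def parse_checkpoint(raw):
    """Return the last product ID sent, or 0 if there is no checkpoint yet."""
    if not raw:
        return 0
    return int(json.loads(raw)["last_id"])


def handler(event, context):
    # boto3/pymysql imported lazily so the helpers above have no AWS deps.
    import boto3
    import pymysql

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    # 1. Load the checkpoint; a missing object means start from the beginning.
    try:
        obj = s3.get_object(Bucket=CHECKPOINT_BUCKET, Key=CHECKPOINT_KEY)
        last_id = parse_checkpoint(obj["Body"].read())
    except s3.exceptions.NoSuchKey:
        last_id = 0

    # 2. Fetch the next page with keyset pagination (no slow OFFSET scan).
    conn = pymysql.connect(host="...", user="...", password="...", db="...")
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id FROM products WHERE id > %s ORDER BY id LIMIT %s",
            (last_id, PAGE_SIZE),
        )
        ids = [row[0] for row in cur.fetchall()]
    conn.close()

    # 3. Send the IDs to SQS in batches of 10.
    for batch in chunk(ids, BATCH_SIZE):
        sqs.send_message_batch(
            QueueUrl=QUEUE_URL,
            Entries=[
                {"Id": str(i), "MessageBody": str(pid)}
                for i, pid in enumerate(batch)
            ],
        )

    # 4. Persist the highest ID sent, so the next run resumes after it.
    if ids:
        s3.put_object(
            Bucket=CHECKPOINT_BUCKET,
            Key=CHECKPOINT_KEY,
            Body=json.dumps({"last_id": ids[-1]}),
        )

    return {"sent": len(ids), "done": not ids}
```

To automate the repeated runs, you can put the Lambda on an EventBridge schedule (say, every few minutes) and let it no-op once the query returns no rows, or have it asynchronously invoke itself at the end of each run while `done` is false.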