javascript, amazon-web-services, lambda, amazon-sqs, serverless

How to prevent Lambda from being triggered multiple times by SQS


I have a Lambda function that is triggered by SQS, but SQS seems to trigger it multiple times even when the operation succeeds. Here is some of my code:

// handle.ts: handles the SQS event
exports.handler = async function (event, context, callback) {
  
  // SQS may invoke with multiple messages
  for (const message of event.Records) {
    
    // call the service, but the returned promise is not awaited
    runAsyncService(message.body)
  }


  // return immediately, before the async calls have finished
  return callback(null, "succeed")
};

Using that handler, the request seems to be triggered 3 times (configurable, but it reaches the maximum retries), each with a different RequestId (which, some said, indicates that the request timed out). Then I wrote this code, which fixes the issue of multiple triggers:

// new handle.ts: handles the SQS event
exports.handler = async function (event, context, callback) {

  const jobs: any[] = [] // holds all the async calls

  // SQS may invoke with multiple messages
  for (const message of event.Records) {
    
    // make the call to a service 
    jobs.push(runAsyncService(message.body))
  }

  // run all the async calls together and wait for all of them
  return Promise.all(jobs)
    .then(() => {
      console.log(`All ${jobs.length} job(s) finished`)
      return context.succeed('Finished')
    })
};

As you can see, I used Promise.all() to run all the async calls and then called context.succeed(). This approach has a side effect: when SQS delivers multiple Records, if any one of the tasks fails, the whole promise fails, even when the other tasks succeed, so the entire batch gets retried. Calling context.succeed() inside the loop is not an option either, because that also triggered the call multiple times. The only option I have in mind right now is limiting the batch size to 1, but I don't really like that idea. Also, I used getlift/lift to configure SQS and Lambda together. Do you have any suggestions for me? Thanks.
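
One way to keep a single failing task from rejecting the whole handler is Promise.allSettled(), which waits for every job regardless of outcome and never rejects. Below is a minimal, hypothetical sketch; only runAsyncService comes from the code above, the rest is illustrative.

// sketch: handle each record independently with Promise.allSettled
exports.handler = async function (event) {

  // start one job per SQS record
  const jobs = event.Records.map((message) => runAsyncService(message.body))

  // allSettled waits for every job and never rejects,
  // so one failing message cannot reject the whole handler
  const results = await Promise.allSettled(jobs)

  const failed = results.filter((result) => result.status === "rejected")
  console.log(`${results.length - failed.length} job(s) succeeded, ${failed.length} failed`)

  // returning normally tells Lambda the whole batch succeeded, so failed
  // messages are NOT retried unless you rethrow or report them individually
};

Note that swallowing the failures like this means failed messages are silently dropped; to have SQS retry only the failed ones, see the partial batch response sketch further down.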


Solution

  • My working solution for now is to set the batch size to 1, but that's not a real solution, just a workaround; a possible alternative is sketched below.
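
An alternative to a batch size of 1 is SQS partial batch responses: enable ReportBatchItemFailures on the event source mapping (check whether your getlift/lift or serverless configuration exposes this setting) and return only the IDs of the failed messages, so SQS retries just those. This is a minimal sketch under that assumption; only runAsyncService comes from the question, the rest is illustrative.

// sketch: SQS partial batch response, so only failed messages are retried
// requires ReportBatchItemFailures to be enabled on the event source mapping
exports.handler = async function (event) {

  const results = await Promise.allSettled(
    event.Records.map((message) => runAsyncService(message.body))
  )

  // collect the messageId of every record whose job rejected
  const batchItemFailures = results
    .map((result, index) => ({ result, record: event.Records[index] }))
    .filter(({ result }) => result.status === "rejected")
    .map(({ record }) => ({ itemIdentifier: record.messageId }))

  // SQS deletes every message not listed here and re-delivers only the rest
  return { batchItemFailures }
};

With that in place the batch size can stay above 1, because a single failing record no longer causes the whole batch to be re-delivered.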