I'd like some help to understand what I'm missing here.
I'm using an SQS FIFO queue with the visibility timeout set to 2 minutes and a Lambda trigger. Right now I'm running some very simple code: it just sends 4 messages, then the Lambda set on the trigger reads the queue and deletes the message. Here is some of its code:
import boto3

def lambda_handler(event, context):
    print('event: ', event)
    sqs = boto3.client('sqs')
    queue_url = "https://sqs..."
    response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    new_receipt_handle = event['Records'][0]['receiptHandle']
    print(new_receipt_handle)
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=new_receipt_handle)
    return {
        'statusCode': 200,
        'body': 'Executions processed successfully'
    }
In my use case I want a max of two Lambdas running at a given time, and I'm expecting a new Lambda to execute after a running one processes its message and deletes it. To achieve this I've set the batch size to 1 and the maximum concurrency to 2 in the Lambda configuration.
But the behavior I'm seeing when I enqueue the 4 messages is: two messages execute correctly (I can see the CloudWatch logs), and the other two stay in the "in flight" state until the visibility timeout ends, and only then are they executed and deleted. I was expecting the latter 2 messages to run as soon as the first ones were deleted, not to wait for the visibility timeout.
Any idea why this is happening? What would be the correct way to set up such a queue, where only two messages are processed at a time and new messages on the queue start processing immediately after the previous ones are deleted?
When using Lambda with Amazon SQS (see *Using Lambda with Amazon SQS* in the AWS Lambda documentation), the AWS Lambda service is responsible for:

- polling the queue and retrieving the messages
- invoking the Lambda function and passing the messages in the `event` parameter
- deleting the messages from the queue after the function completes successfully

The code you write in the Lambda function should not call the SQS queue directly, and it never needs to call `receive_message` or `delete_message` itself. Calling `receive_message` from inside the handler pulls additional messages that then sit in flight until their visibility timeout expires, which is exactly the behavior you are seeing.
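For reference, each record delivered in the `event` parameter looks roughly like the following. This is a trimmed sketch of the SQS event format; the field values are placeholders, not real identifiers:

```python
# A trimmed sketch of the event Lambda passes to the handler for an SQS
# trigger. Values below are placeholders, not real identifiers.
sample_event = {
    "Records": [
        {
            "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
            "receiptHandle": "AQEB...",  # elided; managed by the Lambda service
            "body": "message 1",
            "attributes": {
                "MessageGroupId": "group-1"  # present for FIFO queues
            },
            "eventSource": "aws:sqs",
        }
    ]
}

print(sample_event["Records"][0]["body"])
```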
Your code should do this to retrieve the message(s):
def lambda_handler(event, context):
    for record in event['Records']:
        body = record['body']
        # Do something with body
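Putting it together, a minimal handler for this setup could look like the sketch below. The `print` call is a stand-in for your real processing; the key point is that returning normally is what tells the Lambda service to delete the messages for you:

```python
def lambda_handler(event, context):
    # Lambda delivers the polled messages in event['Records'];
    # with batch size 1 there is exactly one record per invocation.
    for record in event['Records']:
        body = record['body']
        print('processing:', body)  # stand-in for real work
    # Returning without raising tells the Lambda service the batch
    # succeeded, so it deletes the messages from the queue for you.
    return {
        'statusCode': 200,
        'body': 'Executions processed successfully'
    }

# Local smoke test with a fake event (no AWS calls involved):
fake_event = {'Records': [{'body': 'message 1'}]}
result = lambda_handler(fake_event, None)
```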