I've been tasked with setting up a pipeline to save incoming emails (using Amazon SES) in an S3 bucket, sorted into folders by date, e.g. "emails/2020-04-30", "emails/2019-05-12", etc.
What I've got in mind right now is to first store the email in the bucket, then call a Lambda function to check whether or not a folder for that date exists, create it if necessary and move the file there.
This seems like a rather roundabout approach, so I'm wondering whether there's a more efficient way to do it.
Thanks!
If you are using AWS CLI commands to upload the emails, you can check whether anything already exists under a particular date prefix with a command like:
aws s3 ls s3://my-bucket/emails/2020-04-30/
which prints nothing if no objects have been stored under that prefix yet. Note that S3 "folders" are really just key prefixes, so there is nothing to create beforehand: you can upload the email directly under the dated prefix and the folder will appear automatically.
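If you'd rather do this from code than the CLI, here is a minimal boto3 sketch of the same idea. The bucket name "my-email-bucket" and the key "message-001.eml" are placeholders, not anything from your setup; the point is that the date "folder" is just part of the object key.

import datetime
import boto3

s3 = boto3.client("s3")

BUCKET = "my-email-bucket"  # placeholder: use your own bucket name
prefix = f"emails/{datetime.date.today():%Y-%m-%d}/"

# Optional check: is there already anything under today's prefix?
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix, MaxKeys=1)
already_in_use = resp.get("KeyCount", 0) > 0

# Upload directly under the date prefix; no "create folder" step is needed
s3.put_object(Bucket=BUCKET, Key=prefix + "message-001.eml", Body=b"raw email bytes")

The list_objects_v2 call with MaxKeys=1 is only there to show how to test whether the prefix is in use; the upload works either way.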
Edit: I saw that you updated the question to say you are using SES. In that case, you can have SES write the incoming mail to S3 for you: add an S3 Action to your SES receipt rule and set an object key prefix (e.g. emails) so the messages land under that prefix in your bucket. Hope this answers your question.
For reference you can check: link to aws cli command
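One more note: as far as I know the S3 Action's object key prefix is a fixed string, so SES won't split the mail into date folders by itself. If you go with your Lambda idea, something like the sketch below should do. It assumes the S3 Action writes mail under an incoming/ prefix and that the function is triggered by the bucket's ObjectCreated event; those names are just examples, not from your setup.

import datetime
import urllib.parse

import boto3

s3 = boto3.client("s3")


def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # e.g. incoming/abc123 -> emails/2020-04-30/abc123
        message_id = key.rsplit("/", 1)[-1]
        dest_key = f"emails/{datetime.date.today():%Y-%m-%d}/{message_id}"

        # Copy into the dated prefix, then remove the original object
        s3.copy_object(Bucket=bucket, CopySource={"Bucket": bucket, "Key": key}, Key=dest_key)
        s3.delete_object(Bucket=bucket, Key=key)

Copy-then-delete is how a "move" is done in S3, since objects can't be renamed in place.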