I am writing a Lambda function in Python using Serverless. The function is triggered when a file is created in an S3 bucket; it transforms the file and then puts it into another bucket. The source bucket is defined in another stack, while the destination bucket is defined in the Lambda function's stack. I've been trying to find an example of how to specify the source bucket both in an environment variable and under the function's events:
service: Service

provider:
  name: aws
  runtime: python2.7
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  role: LambdaRole

functions:
  LambdaFunction:
    name: lambda-function
    handler: handler.lambda_handler
    environment:
      DESTINATION_BUCKET: !Ref DestinationBucket
      SOURCE_BUCKET: !Ref AnotherStack.SourceBucket # Is this correct?
    events:
      - s3:
          bucket: SOURCE_BUCKET # How do I reference a bucket from another stack here?
          # ... other event trigger related stuff

resources:
  Resources:
    LambdaRole:
      Type: AWS::IAM::Role
      Properties:
        # ... lambda role permissions
    DestinationBucket:
      DeletionPolicy: Retain
      Type: AWS::S3::Bucket
      Properties:
        AccessControl: BucketOwnerFullControl
        BucketName: !Join ['-', ['destination-bucket', !Ref 'AWS::AccountId']]
        VersioningConfiguration:
          Status: Suspended
SOURCE_BUCKET is defined in another stack named "AnotherStack"; the bucket resource there is called "SourceBucket". If possible, I do not want to hard-code the stack name in serverless.yml, and would rather do something similar to what CloudFormation templates offer with parameters. So the question is: how do I refer to that bucket in the Lambda function's serverless.yml?
Use an SNS topic to trigger the Lambda function: add an event notification on the S3 bucket that publishes to the topic, then subscribe the function to that topic. This also makes it easier to trigger the same Lambda function from multiple S3 buckets, and it avoids deployments failing because the S3 event in serverless.yml depends on an export that is consumed in multiple places.
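A minimal sketch of that wiring, assuming the bucket's stack exports the topic ARN under an output named SourceTopicArn. The resource, output, and option names below are illustrative, not taken from your templates.

In "AnotherStack" (plain CloudFormation), the bucket publishes object-created events to an SNS topic whose policy allows S3 to publish:

Resources:
  SourceTopic:
    Type: AWS::SNS::Topic
  SourceTopicPolicy:
    # S3 can only deliver notifications if the topic policy allows it to publish
    Type: AWS::SNS::TopicPolicy
    Properties:
      Topics:
        - !Ref SourceTopic
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: s3.amazonaws.com
            Action: sns:Publish
            Resource: !Ref SourceTopic
  SourceBucket:
    Type: AWS::S3::Bucket
    DependsOn: SourceTopicPolicy # policy must exist before S3 validates the notification
    Properties:
      NotificationConfiguration:
        TopicConfigurations:
          - Event: 's3:ObjectCreated:*'
            Topic: !Ref SourceTopic # !Ref on a topic returns its ARN
Outputs:
  SourceTopicArn:
    Value: !Ref SourceTopic

In the Lambda function's serverless.yml, subscribe to the existing topic instead of declaring an s3 event. ${cf:AnotherStack.SourceTopicArn} is the Serverless Framework's lookup of another stack's output; to avoid hard-coding the stack name, it can come from a CLI option, e.g. ${cf:${opt:source-stack}.SourceTopicArn}:

functions:
  LambdaFunction:
    handler: handler.lambda_handler
    environment:
      DESTINATION_BUCKET:
        Ref: DestinationBucket
    events:
      - sns:
          arn: ${cf:AnotherStack.SourceTopicArn} # subscribe to the existing topic

With this setup a SOURCE_BUCKET environment variable is usually unnecessary: each SNS message wraps the original S3 notification, which already carries the bucket name and object key.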