I have the following CloudFormation template (some resources, like the RestAPI, are omitted for brevity). The template must create a Lambda function and pull an existing layer from an S3 bucket. The layer contains preinstalled Python packages. I use layers to reduce the size of the deployment, since in most cases you change your function code rather than its environment. I followed the official guide: AWS::Serverless::LayerVersion
Resources:
  MyLambdaLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: lambda-layer
      Description: Lambda Layer
      ContentUri: s3://mybucket-in-aws/packages/lambda-layer1.zip
      CompatibleRuntimes:
        - python3.11

  HealthCheckFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/api/health.check
      Description: API health check handler
      Layers:
        - !Ref MyLambdaLayer
      Events:
        HealthCheckEvent:
          Type: Api
          Properties:
            Path: /health-check
            Method: get
            RestApiId: !Ref RestAPI
When I deploy this template using the AWS SAM CLI:
sam build
sam deploy
I get the error: Resource handler returned message: "Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist. (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException)".
The package exists in the S3 bucket, so it seems likely that the problem is with permissions.
How can I change the template to allow CloudFormation to download the layer package from the S3 bucket during deployment? The bucket belongs to me, and my regular user can access it from the AWS CLI.
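For example, both of these work from my regular user (using the bucket and key from the template above):

aws s3 ls s3://mybucket-in-aws/packages/lambda-layer1.zip
aws s3api head-object --bucket mybucket-in-aws --key packages/lambda-layer1.zip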
Is it possible to create a dedicated bucket just for layer packages, and upload the S3 object to it during the same deployment stage if the local file has changed?
Couple of questions going on here.
Question 1: How can I change the template to allow CloudFormation to download the layer package from the S3 bucket during deployment?
Answer 1: There isn't much you can do in the template itself, unless you are also creating the S3 bucket in it, in which case you could attach a bucket policy document (sketched below). However, as the comment on your question suggests, the problem is that the file does not exist at the key you specified. If it were a permissions issue, you would get an Access Denied response rather than NoSuchKey.
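For completeness, here is a minimal sketch of that bucket-plus-policy approach, in case you do move bucket creation into the template. The resource names and the deployment role ARN are placeholders; replace the principal with whatever role or user actually performs the deployment:

  LayerBucket:
    Type: AWS::S3::Bucket

  LayerBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref LayerBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: AllowDeploymentRoleRead
            Effect: Allow
            Principal:
              AWS: arn:aws:iam::123456789012:role/my-deployment-role  # placeholder principal
            Action: s3:GetObject
            Resource: !Sub "${LayerBucket.Arn}/packages/*"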
Question 2: Is it possible to create a dedicated bucket just for layer packages, and upload the S3 object to it during the same deployment stage if the local file has changed?
Answer 2: Yes, that's completely reasonable. However, you haven't specified what tool you're running your deployment with. Assuming you're working within AWS tooling, they provide a tutorial here: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-s3deploy.html (scroll down to Option 2 on that page).
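Since you're already deploying with the SAM CLI, one sketch of this (assuming your layer contents live in a local layer/ directory; the bucket name below is hypothetical) is to point ContentUri at the local path and let sam deploy upload the artifact to a bucket you designate. The CLI names uploaded artifacts by a hash of their contents, so an unchanged layer should not be re-uploaded:

  MyLambdaLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: lambda-layer
      Description: Lambda Layer
      ContentUri: layer/  # local directory instead of an s3:// URI
      CompatibleRuntimes:
        - python3.11

sam build
sam deploy --s3-bucket my-layer-packages-bucket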