I've been trying to deploy a React frontend application using a CloudFormation template. The deployment itself appears to work, and the nginx image builds and runs fine in Docker locally. However, the stack never finishes deploying: it gets stuck in a loop, preparing and then draining the service:
CloudFormation template:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  FrontendImageName:
    Type: String
    Default: frontend_client
    Description: The name of the frontend image in AWS ECR
  ClusterName:
    Type: String
    Default: frontend-ecs-cluster
    Description: The name of the ECS cluster
  ServiceName:
    Type: String
    Default: frontend-ecs-service
    Description: The name of the ECS service
  DesiredCount:
    Type: Number
    Default: 1
    Description: The number of tasks to run
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: "10.0.0.0/16"
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: VPC
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: "10.0.1.0/24"
      MapPublicIpOnLaunch: true
      AvailabilityZone: !Select [0, !GetAZs ""]
  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: "10.0.2.0/24"
      MapPublicIpOnLaunch: true
      AvailabilityZone: !Select [1, !GetAZs ""]
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  AttachGateway:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: AttachGateway
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: "0.0.0.0/0"
      GatewayId: !Ref InternetGateway
  PublicSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet1
      RouteTableId: !Ref PublicRouteTable
  PublicSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  ALB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Subnets:
        - !Ref PublicSubnet1
        - !Ref PublicSubnet2
      SecurityGroups:
        - !Ref ALBSecurityGroup
      Scheme: internet-facing
      Type: application
  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: !Ref VPC
      Protocol: HTTP
      Port: 80
      TargetType: ip
      HealthCheckPath: /health
      HealthCheckPort: 80
  ALBListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      Actions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
      Conditions:
        - Field: path-pattern
          Values:
            - /*
      ListenerArn: !Ref ALBListener
      Priority: 1
  ALBListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
      LoadBalancerArn: !Ref ALB
      Port: 80
      Protocol: HTTP
  ALBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for ALB
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
  FrontendLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub "/ecs/${ServiceName}"
  ECSCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Ref ClusterName
  FrontendTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: !Ref ServiceName
      Cpu: "256"
      Memory: "512"
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      ExecutionRoleArn: !GetAtt FrontendTaskExecutionRole.Arn
      ContainerDefinitions:
        - Name: frontend
          Image: !Join
            - ""
            - - !Ref "AWS::AccountId"
              - ".dkr.ecr."
              - !Ref "AWS::Region"
              - ".amazonaws.com/"
              - !Ref FrontendImageName
              - ":latest"
          Essential: true
          PortMappings:
            - ContainerPort: 80
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref FrontendLogGroup
              awslogs-region: !Ref "AWS::Region"
              awslogs-stream-prefix: frontend
  ECSServiceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for ECS Service
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          SourceSecurityGroupId: !Ref ALBSecurityGroup
  FrontendService:
    Type: AWS::ECS::Service
    DependsOn:
      - ALB
      - ECSServiceSecurityGroup
    Properties:
      Cluster: !Ref ECSCluster
      ServiceName: !Ref ServiceName
      TaskDefinition: !Ref FrontendTaskDefinition
      DesiredCount: !Ref DesiredCount
      LaunchType: FARGATE
      LoadBalancers:
        - ContainerName: frontend
          ContainerPort: 80
          TargetGroupArn: !Ref TargetGroup
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          SecurityGroups:
            - !Ref ECSServiceSecurityGroup
          Subnets:
            - !Ref PublicSubnet1
            - !Ref PublicSubnet2
  FrontendTaskExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ecs-tasks.amazonaws.com
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: FrontendTaskExecutionRolePolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - ecr:GetAuthorizationToken
                  - ecr:BatchCheckLayerAvailability
                  - ecr:GetDownloadUrlForLayer
                  - ecr:BatchGetImage
                Resource: "*"
              - Effect: Allow
                Action:
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: !GetAtt FrontendLogGroup.Arn
And the nginx configuration:
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    listen 80;
    server_name localhost;

    location /health {
        access_log off;
        return 200 "OK\n";
    }

    location / {
        limit_req zone=mylimit burst=20 nodelay;
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location /latest {
        return 403;
    }

    location /api/latest {
        return 403;
    }
}
And the Dockerfile:
# Stage 1: Build React App
FROM node:alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
RUN npm run build
# Stage 2: Serve React App with Nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Any advice appreciated.
I've tried altering port numbers in the nginx configuration and the template in case of a conflict, but I can't work out what is going wrong.
Since the image builds and works locally, I'd expect it to work correctly when pulled from ECR.
I have also tried to get the log stream working, to no avail, but that is not the primary issue.
Two issues I see right away:
You don't appear to be assigning a security group to the ECS service. That means all ports will be blocked on the service, and the ALB/target group will never be able to connect to it. You need to add a security group to the ECS service that allows inbound connections on port 8080 from the load balancer.
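A minimal sketch of that kind of rule, assuming (per the above) the container listens on 8080; resource names mirror the question's template but are otherwise illustrative:

```yaml
# Sketch only: a service security group that admits traffic
# solely from the ALB's security group, on the container port.
ECSServiceSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Security group for ECS Service
    VpcId: !Ref VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 8080        # must match the container port
        ToPort: 8080
        SourceSecurityGroupId: !Ref ALBSecurityGroup
```

It then has to be referenced from the service's NetworkConfiguration (under AwsvpcConfiguration's SecurityGroups list), or it has no effect.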
The target group has the correct port, 8080, configured for the health check, but it has port 80 configured for all other traffic. Both port configurations on the target group should be port 8080. That is the port you are telling the target group to connect to your ECS service on, so they have to match the port your ECS service is listening on.
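In CloudFormation terms, that means keeping the two port settings aligned, roughly like this (a sketch, again assuming 8080 is the container port):

```yaml
# Sketch only: traffic port and health-check port kept in sync.
TargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !Ref VPC
    Protocol: HTTP
    Port: 8080
    TargetType: ip
    HealthCheckPath: /health
    HealthCheckPort: traffic-port   # follow whatever port traffic uses
```

Setting HealthCheckPort to the literal string traffic-port (the default if you omit the property) makes the health check follow the traffic port automatically, so the two can't drift apart.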
"I've tried altering port numbers on the nginx configuration and the template in case of conflict."

This is a single container running on Fargate, so there is no possibility of a port conflict here.