I have an ECS service running a couple of tasks in a private subnet, and an EC2 instance in the same VPC but in a public subnet. I want to be able to communicate with my ECS service from my EC2 instance over port 443. My ECS service is configured to use a certain domain.
When I run nmap from my EC2 instance, I get the following:
nmap -Pn <domain-name> -p443
Starting Nmap 6.40 ( http://nmap.org ) at 2023-05-24 20:35 UTC
Nmap scan report for <domain-name>
Host is up.
Other addresses for <domain-name>
PORT STATE SERVICE
443/tcp filtered https
Nmap done: 1 IP address (1 host up) scanned in 2.03 seconds
However, when I do the same using the private IP address of the running task, I get the following:
nmap -Pn <Private-IP> -p443
Starting Nmap 6.40 ( http://nmap.org ) at 2023-05-24 20:27 UTC
Host is up (0.00099s latency).
PORT STATE SERVICE
443/tcp closed https
Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds
When I run ping <domain-name>, I get the following:
6 packets transmitted, 0 received, 100% packet loss, time 5106ms
However, I am able to ping the private IP of the running task from my EC2 instance:
ping <private-IP>
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
I have configured the inbound security group of my ECS service to allow traffic from the public subnet, like this:
resource "aws_security_group_rule" "inbound_https" {
  security_group_id = aws_security_group.inbound.id
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["<public-IP-cidr>"]
}
resource "aws_security_group_rule" "inbound_icmp" {
  security_group_id = aws_security_group.inbound.id
  type              = "ingress"
  from_port         = 0
  to_port           = 0
  protocol          = "icmp"
  cidr_blocks       = ["<public-IP-cidr>"]
}
And I can see in the console UI that the rules were applied successfully.
I am confused because the ingress is allowed, yet I am still not able to reach port 443 on my ECS service from my EC2 instance. I also checked the NACLs, and they allow all traffic on all ports. What am I missing?
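Since CIDR-based rules only match whatever source address the traffic actually arrives with, one alternative I'm aware of is to reference the EC2 instance's security group directly instead of a CIDR block. A sketch, assuming the instance's security group is called aws_security_group.ec2 (a hypothetical name):

```hcl
resource "aws_security_group_rule" "inbound_https_from_ec2" {
  security_group_id        = aws_security_group.inbound.id
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  # "aws_security_group.ec2" is a hypothetical reference to the
  # security group attached to the EC2 instance.
  source_security_group_id = aws_security_group.ec2.id
}
```

Note that, as I understand it, rules referencing a security group only match traffic that stays inside the VPC (i.e., connections made to the task's private IP); traffic that leaves through an internet gateway toward a public endpoint arrives with the instance's public source IP and will not match such a rule.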
The Task definition is as follows:
{
    "taskDefinitionArn": "<TASK-DEF-ARN>",
    "containerDefinitions": [
        {
            "name": "<NAME>",
            "image": "<IMAGE>",
            "cpu": 1024,
            "memory": 2048,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "mountPoints": [],
            "volumesFrom": [],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "awslogs-group",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "awslogs-stream-prefix"
                }
            }
        }
    ],
    "family": "<FAMILY>",
    "taskRoleArn": "<TASK-ROLE-ARN>",
    "executionRoleArn": "<EXECUTION-ROLE-ARN>",
    "networkMode": "awsvpc",
    "revision": 63,
    "volumes": [],
    "status": "ACTIVE",
    "requiresAttributes": [
        {
            "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
        },
        {
            "name": "ecs.capability.execution-role-awslogs"
        },
        {
            "name": "com.amazonaws.ecs.capability.ecr-auth"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
        },
        {
            "name": "com.amazonaws.ecs.capability.task-iam-role"
        },
        {
            "name": "ecs.capability.execution-role-ecr-pull"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
        },
        {
            "name": "ecs.capability.task-eni"
        }
    ],
    "placementConstraints": [],
    "compatibilities": [
        "EC2",
        "FARGATE"
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "1024",
    "memory": "2048",
    "registeredAt": "2023-05-24T21:25:20.965Z",
    "registeredBy": "<REGISTERED-BY>",
    "tags": [
        {
            "key": "Environment",
            "value": "dev"
        },
        {
            "key": "Region",
            "value": "us-east-1"
        },
        {
            "key": "Service",
            "value": "My Awesome Service"
        },
        {
            "key": "Stage",
            "value": "<STAGE>"
        }
    ]
}
I even opened up port 80:
resource "aws_security_group_rule" "lcp_inbound_http" {
  security_group_id = aws_security_group.lcp_inbound.id
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["<public-IP-cidr>"]
}
And the nmap scan still shows the port as filtered:
nmap -Pn -p80 <DNS>
PORT STATE SERVICE
80/tcp filtered http
Nmap done: 1 IP address (1 host up) scanned in 2.04 seconds
I had to use the <public-IP-Address> in the inbound rule instead, and that worked.
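For reference, a sketch of the rule that ended up working; I'm assuming here that the single address is written as a /32 CIDR, which is how cidr_blocks expects an individual IP:

```hcl
resource "aws_security_group_rule" "inbound_https" {
  security_group_id = aws_security_group.inbound.id
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  # The public IP as a /32 CIDR. This presumably matches because
  # traffic to the public domain name leaves the VPC through the
  # internet gateway and arrives with the public source IP, not
  # the EC2 instance's private subnet address.
  cidr_blocks       = ["<public-IP-Address>/32"]
}
```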