Tags: amazon-web-services, amazon-ecs, aws-code-deploy, aws-application-load-balancer

How does CodeDeploy work with dynamic port mapping?


For weeks I have been trying to make CodeDeploy / CodePipeline work for our solution, to get some sort of CI/CD going and make deployments faster, safer, etc.

As I keep diving into it, I feel like either I am not doing it the right way at all, or it is just not suitable in our case.

What our AWS infra is:

  • We have an ECS Cluster that contains, for now, one service (on EC2), associated with one or more tasks: a reverse proxy and an API. The reverse proxy listens internally on port 80 and, when reached, proxies internally to the API on port 5000.

  • We have an application load balancer associated with this service, which is publicly reachable. It currently has 2 listeners, HTTP and HTTPS. Both listeners forward to the same target group, which only contains the instance(s) where our reverse proxy runs. Note that the instance port to forward to is random (check this link).

  • We have an auto scaling group that scales the number of instances depending on the number of calls to the application load balancer.
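The reverse-proxy part of the setup above could be sketched like this in an NGINX server block (a sketch, not the actual config; the upstream address is an assumption, since how NGINX reaches the API depends on the container networking):

```nginx
server {
    # The reverse proxy listens internally on port 80...
    listen 80;

    location / {
        # ...and proxies internally to the API on port 5000.
        # 127.0.0.1 is a placeholder; the real address depends on
        # how the two containers are networked together.
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
    }
}
```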

What we may have in the future:

  • Other tasks will run on the same instance as our API. For example, we may create another API in the same cluster as before, on another port, with another reverse proxy, and yet another load balancer. We may have some batch jobs running, and other things.

What's the problem :

Well, for now, deploying "manually" (that is, telling the service to make a new deployment on ECS) doesn't work. CodeDeploy is stuck at creating replacement tasks, and when I look at the logs of the service, there is the following error:

service xxxx-xxxx was unable to place a task because no container instance met all of its requirements. The closest matching container-instance yyyy is already using a port required by your task.

I don't really understand this, since port assignment is random. Maybe CodeDeploy operates before that, sees that the assigned port is 0, and considers it the same as in the previous task definition?

I don't really know how I can resolve this, and I even doubt that CodeDeploy is usable in our case...

-- Edit 02/18/2021 --

So, I now know why it is not working. As I said, the host port for the reverse proxy is random. But the port my API listens on is still not random.

Schema

But now, even if I make the API port random like the reverse proxy's, how would my reverse proxy know on what port the API will be reachable? I tried linking containers, but it doesn't seem to work in the configuration file (I use NGINX as reverse proxy).

--

Not specifying hostPort seems to assign a "random" port on the host.
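Concretely, in the task definition this means either omitting hostPort or setting it to 0 in portMappings (a sketch; the container name is made up):

```json
{
  "name": "reverse-proxy",
  "portMappings": [
    {
      "containerPort": 80,
      "hostPort": 0
    }
  ]
}
```

With hostPort set to 0 (or omitted) in bridge mode, ECS picks an ephemeral host port at task start.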


But still, since NGINX and the API are two different containers, I would need my first NGINX container to call my first API container at API:32798. I think I'm missing something.
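For reference, the links approach only works between containers in the same task definition with bridge networking; in that case NGINX can address the API by its link name on the container port (5000), regardless of the random host port. A sketch, with container names assumed:

```json
{
  "containerDefinitions": [
    {
      "name": "reverse-proxy",
      "links": ["api"],
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
    },
    {
      "name": "api"
    }
  ]
}
```

With this, `proxy_pass http://api:5000;` in the NGINX config resolves through the link, so the dynamic host port never matters for container-to-container traffic.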


Solution

  • You're probably getting this port conflict because you have two tasks on the same host that both want to map port 80 of the host into their containers.

    I've tried to visualize the conflict:

    Port Conflict

    The violet boxes share a port namespace, and so do the green and orange boxes. This means that within each box you can use each of the ports from 1 to ~65k once. When you explicitly require a host port, ECS tries to map the violet port 80 to two container ports, which doesn't work.

    You don't want to explicitly map these container ports to the host port; let ECS worry about that.

    Just specify the container port in the load balancer integration in the service definition, and ECS will do the mapping for you. If you set the container port to 80, this refers to the green port 80 and the orange port 80. ECS will expose these as random host ports and automatically register those random ports with the load balancer.

    Service Definition docs (search for containerPort)
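In the service definition, that load balancer integration looks roughly like this (the ARN and names are placeholders):

```json
{
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:...",
      "containerName": "reverse-proxy",
      "containerPort": 80
    }
  ]
}
```

containerPort here is the port inside the container (the green/orange port 80); ECS registers whatever random host port it assigned with the target group.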