amazon-ec2, docker, kubernetes, docker-compose, amazon-ecs

Does Kubernetes evenly distribute across an ec2 cluster?


So, I'm trying to understand CPU and VM allocation with Kubernetes, Docker, and AWS ECS. Does this seem right?

  1. Locally, running "docker compose" with a few services:
    each container gets added to the single Docker Machine VM. You can allocate CPU shares from this single VM.

  2. AWS, running ECS, generated from a docker compose:
    each container (all of them) gets added to a single ec2 VM. You can allocate CPU shares from that single VM. The fact that you deploy to a cluster of 5 ec2 instances makes no difference unless you manually "add instances" to your app. Your 5 containers will be sharing 1 ec2.

  3. AWS, running kubernetes, using replication controllers and service yamls:
    each container gets distributed amongst ALL of your ec2 instances in your Kubernetes cluster?

If I spin up a cluster of 5 ec2 instances, and then deploy 5 replication controllers / services, will they actually be distributed across the ec2 instances? This seems like a major difference from ECS and local development. Just trying to get the facts right.


Solution

  • Here are the answers to your different questions:

    1> Yes, you are right: you have a single VM, and any container you run gets CPU shares from that single VM. You also have the option of spinning up a Swarm cluster and trying that out; Docker Compose supports Swarm, with containers connected via an overlay network spread over multiple VMs.
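    As a minimal sketch of the single-VM case, a Compose file (v2 format) can set a relative CPU weight per service with `cpu_shares`; the service names and images here are just illustrative:

    ```yaml
    # Hypothetical docker-compose.yml: two services sharing one VM's CPU.
    # cpu_shares is a relative weight, not a hard cap: under contention,
    # `worker` gets roughly twice the CPU time of `web`.
    version: "2"
    services:
      web:
        image: nginx
        cpu_shares: 512
      worker:
        image: busybox
        command: sleep 3600
        cpu_shares: 1024
    ```

    When the VM's CPU is idle, either container may use more than its share; the weights only matter when both compete.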

    2> Yes, the containers defined in a single task will end up on the same EC2 instance. When you spin up more than one instance of the task, the tasks get spread over the instances that are part of the cluster. No task should have a resource requirement greater than the maximum resources available on any one of your EC2 instances.
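    A sketch of an ECS task definition illustrates why: the `cpu` and `memory` reservations below (names and values are made up for illustration) must fit on a single instance, since the whole task is placed as one unit. ECS measures CPU in units where 1024 = one vCPU.

    ```json
    {
      "family": "web-task",
      "containerDefinitions": [
        {
          "name": "web",
          "image": "nginx",
          "cpu": 256,
          "memory": 512,
          "essential": true
        }
      ]
    }
    ```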

    3> Kubernetes is more evolved than ECS in many respects, but for container distribution it works similarly to ECS. A Kubernetes pod is the equivalent of an ECS task: one container, or a group of containers, co-located on a single VM. In Kubernetes, too, a pod cannot require more resources than the maximum available on one of your underlying compute nodes.
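    To the original question: yes, replicas do get spread. A sketch of a replication controller (names and numbers are illustrative) with `replicas: 5` asks the scheduler to place 5 pods, each on whichever node has capacity, so on a 5-instance cluster they will typically land on different ec2 instances:

    ```yaml
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web
    spec:
      replicas: 5
      selector:
        app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx
            resources:
              requests:
                cpu: 500m      # each pod must fit on a single node
                memory: 256Mi
    ```

    You can verify the placement yourself with `kubectl get pods -o wide`, which shows the node each pod landed on.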

    In all three scenarios, you are bound by the maximum capacity available on the underlying resource when deploying a large container or pod.

    You should not equate these Docker platforms with VM creation and management platforms. They all expect you to define tasks that fit into the VMs, and to scale horizontally by increasing the task count when needed. Kubernetes comes with built-in service discovery, which allows seamless routing of requests to the deployed containers using DNS lookups. With Swarm and ECS you will have to build your own service discovery; Consul, Eureka, etc. are tools you can use for that.
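    To show what that built-in service discovery looks like, here is a sketch of a Kubernetes Service (the name and selector are illustrative, matching the hypothetical `app: web` label) that load-balances across all matching pods, wherever they are scheduled:

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
      - port: 80
    ```

    Any pod in the cluster can then reach the group via DNS as `web` (or fully qualified, `web.default.svc.cluster.local`), without knowing which ec2 instances the backing pods run on.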