I created a swarm on AWS by running
docker-machine create --driver amazonec2 --amazonec2-access-key $AWS_ACCESS_KEY_ID --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY --amazonec2-vpc-id $AWS_VPC_ID --amazonec2-region "us-west-2" --engine-opt dns=8.8.8.8 aws-mh-keystore
eval "$(docker-machine env aws-mh-keystore)"
docker run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap
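Before layering the swarm on top of the keystore, it is worth confirming that Consul actually bootstrapped. A quick sketch against Consul's standard HTTP API (assumes port 8500 is reachable from your machine):

```shell
# Query the Consul leader endpoint; a non-empty reply such as
# "172.17.0.2:8300" means the server has elected itself leader.
curl "http://$(docker-machine ip aws-mh-keystore):8500/v1/status/leader"
```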
docker-machine create --driver amazonec2 --amazonec2-access-key $AWS_ACCESS_KEY_ID --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY --amazonec2-vpc-id $AWS_VPC_ID --amazonec2-region "us-west-2" --engine-opt dns=8.8.8.8 --engine-label n_type=master --swarm --swarm-master --swarm-strategy "spread" --swarm-discovery="consul://$(docker-machine ip aws-mh-keystore):8500" --engine-opt="cluster-store=consul://$(docker-machine ip aws-mh-keystore):8500" --engine-opt="cluster-advertise=eth0:2376" aws-swarm-master
eval $(docker-machine env --swarm aws-swarm-master)
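At this point you can sanity-check that the Swarm master registered against Consul. A sketch using standard docker-machine and docker commands:

```shell
# The SWARM column should show "aws-swarm-master (master)" once
# discovery is working.
docker-machine ls

# With the --swarm environment loaded, this talks to the Swarm
# manager and reports the node count and per-node status.
docker info
```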
Then I created two worker nodes with the following commands:

docker-machine create --driver amazonec2 --amazonec2-access-key $AWS_ACCESS_KEY_ID --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY --amazonec2-vpc-id $AWS_VPC_ID --amazonec2-region "us-west-2" --engine-opt dns=8.8.8.8 --engine-label n_type=worker --swarm --swarm-discovery="consul://$(docker-machine ip aws-mh-keystore):8500" --engine-opt="cluster-store=consul://$(docker-machine ip aws-mh-keystore):8500" --engine-opt="cluster-advertise=eth0:2376" aws-swarm-node-01
docker-machine create --driver amazonec2 --amazonec2-access-key $AWS_ACCESS_KEY_ID --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY --amazonec2-vpc-id $AWS_VPC_ID --amazonec2-region "us-west-2" --engine-opt dns=8.8.8.8 --engine-label n_type=worker --swarm --swarm-discovery="consul://$(docker-machine ip aws-mh-keystore):8500" --engine-opt="cluster-store=consul://$(docker-machine ip aws-mh-keystore):8500" --engine-opt="cluster-advertise=eth0:2376" aws-swarm-node-02
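Once both workers are up, the discovery backend can be queried directly to confirm they registered. A sketch using the legacy swarm image's list command:

```shell
# Enumerate the engines registered in Consul; each node's
# <ip>:2376 address should be printed, three entries in total
# (master plus two workers).
docker run --rm swarm list "consul://$(docker-machine ip aws-mh-keystore):8500"
```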
After that, I build and start my multi-container app with:
docker-compose up --build
At this stage, the build process completes successfully and everything seems fine.
However, when I run
docker-compose ps
I see that the exposed ports are shown as 0.0.0.0:<> -> tcp:<>. The expected behavior is a port mapping from the container to the EC2 host, so the service is reachable on the host's address.
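For comparison, this is roughly what a healthy mapping should look like, and how I would expect to reach the service (illustrative output and a hypothetical check, not taken from my cluster):

```shell
# Expected shape of the PORTS column in `docker-compose ps` for web_api:
#   0.0.0.0:5000->5000/tcp
# Reachability test against whichever node the container landed on
# (node name is an example from this setup):
curl "http://$(docker-machine ip aws-swarm-node-01):5000/"
```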
My docker-compose file is as follows:
version: "2"
services:
  web_api:
    build:
      context: .
      dockerfile: Dockerfile
      # args: ["constraint:engine.labels.n_type==master"]
    hostname: web_api
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    links:
      - worker
    depends_on:
      - worker

  # Redis
  redis:
    build:
      context: .
      dockerfile: Dockerfile-redis
    # image: redis
    hostname: redis
    ports:
      - "6379:6379"

  # Celery worker
  worker:
    build:
      context: .
      dockerfile: Dockerfile-celery
      # args: ["constraint:engine.labels.n_type==worker"]
    volumes:
      - .:/app
    links:
      - redis
    depends_on:
      - redis
    # environment:
    #   - "constraint:engine.labels.n_type == worker"
    command: ./run_celery.sh
Why is the AWS port mapping not being assigned? I have established the correct inbound rules for the security group being used.
The problem turned out to be that I had not allowed ICMP traffic in my security group configuration. For some reason, that was what caused my problems: once I added the correct inbound ICMP rules to my security group, I was able to access my Docker containers.
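For anyone hitting the same wall, the rule can also be added from the AWS CLI. A sketch, assuming the machines use docker-machine's default security group (named docker-machine); adjust the group name and CIDR to your setup:

```shell
# Allow all ICMP types (-1) inbound on the security group used by
# the swarm machines; tighten the CIDR in a real deployment.
aws ec2 authorize-security-group-ingress \
  --group-name docker-machine \
  --protocol icmp \
  --port -1 \
  --cidr 0.0.0.0/0
```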