I'm trying to deploy two services on a single EC2 instance with docker-machine and docker-compose.
Here's what I'm doing:
docker-machine create --driver amazonec2 --engine-install-url=https://web.archive.org/web/20170623081500/https://get.docker.com mymachine
docker-machine ssh mymachine -- mkdir -p /home/ubuntu/myapp
git clone https://github.com/myapp/service1.git
docker-machine scp -r ./service1 mymachine:/home/ubuntu/myapp/
rm -rf ./service1
git clone https://github.com/myapp/service2.git
docker-machine scp -r ./service2 mymachine:/home/ubuntu/myapp/
rm -rf ./service2
docker-machine env mymachine
# export DOCKER_TLS_VERIFY="1"
# export DOCKER_HOST="something"
# export DOCKER_CERT_PATH="something"
# export DOCKER_MACHINE_NAME="mymachine"
eval $(docker-machine env mymachine)
docker-machine active
# mymachine
docker-compose -f ./docker-compose-prod.yml up -d
I get this error: build path /home/ubuntu/myapp/service1 either does not exist, is not accessible, or is not a valid URL.
relevant parts of docker-compose-prod.yml:
version: '3'
services:
  service1:
    build: /home/ubuntu/myapp/service1
  service2:
    build: /home/ubuntu/myapp/service2
The path is fine when I check over ssh. It seems like docker-compose is still resolving paths on my local machine: it's happy when I provide a build path that exists locally, even though docker itself executes commands on the remote machine.
How do I get docker-compose to run on the remote docker-machine?
I'm new to this, so hopefully I'm missing something trivial. Thanks for the help!
A docker build (including docker-compose build) involves a "build context". This context is all of the files you send from the client to the docker engine, including the Dockerfile, to perform the build. You can exclude files from this context with a .dockerignore file.
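For example, a .dockerignore next to the Dockerfile in service1 could keep files you don't need out of the context that gets uploaded to the engine (the patterns here are just illustrative, adjust them to your project):

```
# .dockerignore — example patterns only
.git
node_modules
*.log
```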
When you run docker build /home/ubuntu/myapp/service1, or in your case include that directory as a build path in the compose file, you define /home/ubuntu/myapp/service1 as the build context that the client sends to the docker engine. That engine may be local or a remote node, which in your case is the EC2 instance. From there, everything runs remotely, including any COPY or ADD commands in your Dockerfile that reference this context.
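To make that concrete: the build: path is resolved on the client, and that directory is then uploaded as the context to whichever engine DOCKER_HOST points at. A sketch of your compose file with relative paths, assuming it sits next to the service directories on your local machine:

```yaml
version: '3'
services:
  service1:
    # resolved relative to this compose file on the *client*,
    # then sent as the build context to the (possibly remote) engine
    build: ./service1
  service2:
    build: ./service2
```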
To run your build remotely, you can either keep the build context on your local machine rather than running your rm, or ssh into the EC2 instance and run the docker-compose commands directly on that machine (you may need to install docker-compose there; I'm not sure it's included in the default machine image). My preference would typically be the former, since it makes it easier to iterate on the files used to build your image and lets the remote docker machine remain ephemeral.