I'm incredibly confused between Docker Hub, Cloud, Swarm, Swarm Mode, docker deploy, docker-compose deploy, ...
What is the simplest docker deployment practice for a production website that fits comfortably within the capabilities of a single physical server?
The site in question has a comprehensive docker-compose.yml that starts up some 12 services covering various web servers, webpack builders, and a DB. An environment variable is used to switch between dev and production.
A command-line tool uploads webpack bundles to an S3 bucket and sourcemaps to Sentry. The bundle hash is used as a release ID, which is stored in an environment variable (i.e. HTML is written with `<script src="https://s3.site.com/c578f7cbbf76c117ca56/bundle.js">`, where the hash `c57...` is written into the environment variable file pointed to by each service in docker-compose.yml).
I don't need more than one server, nor comprehensive failover strategies. I just want to avoid downtime when deploying code updates. I'm a single developer so I don't need CI or CD.
I understand docker-machine is deprecated. Docker Hub deals with images individually, so I understand I need something that deals with the concept of a "stack", or a set of related services. I understand that Docker Cloud's stack.yml files don't support `build` or `env_file` keys, so my docker-compose.yml is not directly usable.
(In my `docker-compose.yml` I have many occurrences of the following pattern:

```yaml
build:
  context: .
  dockerfile: platforms/frontend/server/Dockerfile
```

and in the Dockerfile, for example:

```dockerfile
COPY platforms/frontend/server /app/platforms/frontend/server
```

Without the separation of build context and Dockerfile location, the compose file doesn't seem to translate to a stack file.)
Furthermore, I think that Docker Cloud / Swarm are for managing multiple fail-over servers and round-robin routing and so on? I don't think I need any of this.
Finally, I started to realise that `docker-compose deploy` exists... is this the tool/strategy I'm after?
Let me correct some things first, and then I'll get into the suggested Docker strategy for the case where you say you "don't need CI/CD", which I assume means you'll manually deploy updates to the server yourself. This workflow isn't what I'd suggest for a team, but for the "solo dev" it's great and easy.
"I understand docker-machine is deprecated."
Not true. It gets constant updates, including a version last month. It's not designed for deploying/managing many servers, but if you really only need a single server for a single admin, it can be perfect for creating the instance remotely, installing docker, and setting up TLS certs for remote access via docker CLI: docker-machine env <nodename>
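For example, a minimal sketch; the DigitalOcean driver, the `$DO_TOKEN` variable, and the machine name `prodserver` are just assumptions for illustration:

```bash
# Create a cloud server with Docker installed and TLS certs generated
docker-machine create --driver digitalocean \
  --digitalocean-access-token $DO_TOKEN prodserver

# Point the local docker CLI at the remote engine
eval $(docker-machine env prodserver)
docker info    # now reports on the remote daemon
```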
"Finally I started to realise docker-compose deploy exists"

That's not a command. Maybe you're thinking of `docker stack deploy` in Swarm? I also don't recommend docker-compose for a server; it doesn't have production tooling and features. See my AMA post on all the reasons to use a single-node Swarm.
Note that docker-compose the CLI tool (used for dev and CI/CD) is not the same thing as the docker-compose.yml file format, which I'll discuss in a bit.
"Furthermore, I think that Docker Cloud / Swarm are for managing multiple fail-over servers and round-robin routing and so on? I don't think I need any of this."
Docker Cloud is shutting down in May 2018, so I wouldn't use that to deploy stacks, but Swarm is great on a single node if you don't need node-level high availability.
OK, so for your workflow from local dev to this prod server:
Either manually build your image locally and push it to Docker Hub (or another registry), or, my preference, store the code in GitHub/Bitbucket and have the image built by Docker Hub on each commit to a specific branch (let's say `master`).
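The manual variant is just a build and a push. A minimal sketch, where the image name `myuser/web` and the tag are placeholders (the automated variant happens on Docker Hub's side and needs no commands at all):

```bash
# Build from the repo root with the nested Dockerfile, matching the
# context/dockerfile split from your compose file, then push
docker build -t myuser/web:c578f7c -f platforms/frontend/server/Dockerfile .
docker push myuser/web:c578f7c
```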
Your docker-compose file is also a stack file. The compose documentation has specific sections for `build` (for a CI/CD server or your local-machine workflow) and `deploy` (Swarm-only features). You should be building locally, via Docker Hub, or on a custom CI server, not in the Swarm itself; production tools aren't usually meant for image building.
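A sketch of what that dual-purpose file can look like; the service name, image name, and tag here are illustrative:

```yaml
version: "3.4"
services:
  frontend:
    image: myuser/web:c578f7c        # what Swarm pulls and runs
    build:                           # used by docker-compose/CI; ignored by Swarm
      context: .
      dockerfile: platforms/frontend/server/Dockerfile
    deploy:                          # used by docker stack deploy; ignored by docker-compose up
      replicas: 1
```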
Once your server is built (with docker-machine), you can use your local docker CLI to manage the remote Docker engine via `docker-machine env <name>`. You would create a single-node Swarm with `docker swarm init`, and voila, it'll accept compose files (a.k.a. stack files). These files are similar to, but not the same format as, the old Docker Cloud stacks.
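As a sketch, reusing the hypothetical machine name from earlier:

```bash
eval $(docker-machine env prodserver)   # target the remote engine
docker swarm init    # may need --advertise-addr <ip> if the host has several interfaces
docker node ls       # should show a single manager node
```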
Now you can run `docker stack deploy -c compose.yml <stackname>` and it'll spin up your services with the env vars you've set, volumes for data, etc.
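For example, with an arbitrary stack name of `mysite`:

```bash
docker stack deploy -c docker-compose.yml mysite
docker stack services mysite   # watch replicas converge to the desired count
```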
For updates, you can get zero downtime if you use Docker 17.12 or higher (the latest 18.03 is even better), set the update order of each service to start-first, and ensure all services have healthchecks defined so Docker truly knows when they are ready for connections.
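A minimal sketch of those two settings on one service; the healthcheck assumes your container can curl itself on a meaningful URL, so adjust it to your app:

```yaml
version: "3.4"                 # update_config "order" needs file version 3.4+
services:
  frontend:
    image: myuser/web:c578f7c
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 10s
      timeout: 3s
      retries: 3
    deploy:
      update_config:
        order: start-first     # start the replacement task before stopping the old one
```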
You can use override YAML files and `docker-compose config` to layer many compose files into a single stack deployment.
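For example (the override filename here is arbitrary):

```bash
# Merge the base file and a prod override into one resolved file, then deploy it
docker-compose -f docker-compose.yml -f docker-compose.prod.yml config > stack.yml
docker stack deploy -c stack.yml mysite
```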
For service updates, you would just update the compose file and re-run `docker stack deploy`; it'll detect the changes.
Be sure you use a unique image tag on each deploy so Docker knows exactly which image SHA to run. Don't keep using `<imagename>:latest` and expect it to know which specific image that is.
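One sketch of managing that, assuming you layer files through `docker-compose config` as above (the file and variable names are mine, not a convention): keep the tag in a variable that gets resolved at config time:

```yaml
# docker-compose.prod.yml (hypothetical override file)
services:
  frontend:
    image: myuser/web:${RELEASE_ID}   # e.g. export RELEASE_ID=<bundle hash> per deploy
```

`docker-compose config` substitutes `${RELEASE_ID}` from your shell, so the file handed to `docker stack deploy` is fully pinned.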
I hope this helps, and ask more questions in comments and I can update this answer as needed.