Here's my setup. The output below was taken from docker-machine ls; I'm using Docker Machine to provision the swarm.
```
NAME             ACTIVE      DRIVER         STATE     URL                    SWARM                     DOCKER    ERRORS
cluster-master   * (swarm)   digitalocean   Running   tcp://REDACTED:2376    cluster-master (master)   v1.11.1
kv-store         -           digitalocean   Running   tcp://REDACTED:2376                              v1.11.1
node-1           -           digitalocean   Running   tcp://REDACTED:2376    cluster-master            v1.11.1
node-2           -           digitalocean   Running   tcp://REDACTED:2376    cluster-master            v1.11.1
```
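For reference, a cluster like this is typically provisioned with Docker Machine along the lines of the sketch below; the Consul discovery backend and the DigitalOcean token handling are assumptions, not my exact commands.

```sh
# Assumes DIGITALOCEAN_ACCESS_TOKEN is exported for the digitalocean driver.

# Key-value store machine running Consul for swarm discovery
docker-machine create -d digitalocean kv-store
docker $(docker-machine config kv-store) run -d -p 8500:8500 \
  --name consul progrium/consul -server -bootstrap

# Swarm master, registered against the kv-store
docker-machine create -d digitalocean --swarm --swarm-master \
  --swarm-discovery="consul://$(docker-machine ip kv-store):8500" \
  cluster-master

# Worker nodes join via the same discovery backend
docker-machine create -d digitalocean --swarm \
  --swarm-discovery="consul://$(docker-machine ip kv-store):8500" \
  node-1
docker-machine create -d digitalocean --swarm \
  --swarm-discovery="consul://$(docker-machine ip kv-store):8500" \
  node-2
```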
Right now I'm searching for a way to set up my CI/CD workflow. Here is my initial idea:
Questions:
The first part of the process looks fine. Where it gets complicated is managing the deployed production containers.
Is it okay to run my testing on Docker Hub, or should I rely on another service?
Yes, it should be fine to run tests on Docker Hub, assuming you don't need further integration tests.
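For the tests that can run there, Docker Hub's automated builds look for a docker-compose.test.yml defining a sut service and mark the build as failed if that service exits non-zero; a minimal sketch, where the test command is only a placeholder:

```yaml
# docker-compose.test.yml — run by Docker Hub during the automated build
sut:
  build: .
  command: npm test   # placeholder; replace with your real test command
```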
I need to integrate my containers with Amazon services and have a fairly non-standard deployment, so this part of the testing has to be done on an Amazon instance.
My main problem is pushing the changes to the Docker swarm. Should I set up my Docker Swarm on a remote machine and host the application there?
If you're just using one machine, you don't need the added overhead of Swarm. If you're planning to scale to a larger multi-node deployment, then yes, deploy to a remote machine, because you'll discover the gotchas around using Swarm sooner.
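Once the swarm is remote, your CI job can point its Docker client at the swarm master that docker-machine created and start new containers from there; a rough sketch, with a placeholder image name:

```sh
# Target the swarm master shown in the docker-machine ls output above
eval "$(docker-machine env --swarm cluster-master)"

# Pull the image the CI pipeline pushed and run it on the swarm;
# myorg/myapp:1.2.3 is a placeholder image/tag.
docker pull myorg/myapp:1.2.3
docker run -d --name myapp myorg/myapp:1.2.3
```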
You need to think about how you retire old versions of your containers and bring the latest version into the swarm, which is often called scheduling.
One simple approach is a rolling update. In Docker Swarm this is done by declaring a service and then updating its image, which can be watched as a set of tasks. For more detail on this process, see Apply rolling update to swarm, and for how to do the same on Amazon, see updating docker containers in ecs.
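A sketch of that flow with swarm-mode services; note this assumes Docker 1.12+ swarm mode rather than the standalone Swarm shown in the docker-machine ls output, and the image names are placeholders:

```sh
# Declare the service once; the swarm schedules the replicas across nodes
docker service create --name myapp --replicas 3 myorg/myapp:1.2.3

# Roll out a new image: old tasks are retired and replaced one by one
docker service update --image myorg/myapp:1.2.4 myapp

# Watch the tasks as the update progresses
docker service ps myapp
```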