We are running a DC/OS cluster and are managing it by hand right now, because the number of container instances running in it is low and they don't need much intervention.
Now we want to do deployments from Jenkins. While that works with the Marathon plugin, we have hit a more-or-less interesting problem: shared volumes.
All our nodes have a NetApp share mounted at /srv, and the services' Docker containers map certain container paths to subdirectories of /srv. When a Jenkins job triggers a redeploy of a service, Marathon leaves the old container running while staging the new version and only switches over once the new container reaches the "healthy" state.
This is a problem because the image in question includes MongoDB and MySQL, which break when there are concurrent accesses to the backing database files.
How can I scale the old instance to 0 and only deploy the new instance once the old one has cleanly stopped?
Setting up shared MongoDB/MySQL containers in DC/OS is something I'd rather avoid: it would make the containers differ from those on developer machines, and the DB content the containers are seeded with is baked into the image...
edit: this problem also regularly bites us when someone accidentally presses "restart service", because contrary to what the name suggests, it does not do shutdown-wait-redeploy either, but stage-then-switch-over as well...
Basically, you should set the MARATHON_SINGLE_INSTANCE_APP label like this
"labels": {
  "MARATHON_SINGLE_INSTANCE_APP": "true"
}
and specify the upgradeStrategy accordingly:
"upgradeStrategy": {
  "minimumHealthCapacity": 0,
  "maximumOverCapacity": 0
}
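
Put together, a minimal app definition might look like the sketch below. The app id, image name, and volume paths are made-up placeholders; only the labels and upgradeStrategy sections are the actual point:

```json
{
  "id": "/myservice",
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com/myservice:latest"
    },
    "volumes": [
      {
        "containerPath": "/data/db",
        "hostPath": "/srv/myservice/mongodb",
        "mode": "RW"
      }
    ]
  },
  "labels": {
    "MARATHON_SINGLE_INSTANCE_APP": "true"
  },
  "upgradeStrategy": {
    "minimumHealthCapacity": 0,
    "maximumOverCapacity": 0
  }
}
```

With maximumOverCapacity set to 0, Marathon is not allowed to start the new task while the old one is still running, so a redeploy becomes stop-then-start. That means a short downtime per deployment, which is exactly the trade-off you need to avoid concurrent access to the files under /srv.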