docker, containers, dockerfile, docker-swarm

How to keep container state in docker swarm


This question is a bit conceptual...

I start a service,

Docker runs container(s) on node(s) for this service,

I make progress inside these container(s),

At some point, the container(s) get an exception and enter an unrecoverable state...

At this point, I am not able to manage that container or containers manually (to recover them by stopping and starting, for instance), since Swarm is the one managing the containers.

What is the best practice for keeping the state of containers? There is "docker container commit", for instance, but am I supposed to find which nodes the containers are started on, find their container IDs, and commit them manually? Should I define cron jobs for this purpose? Or should I not rely on Docker for such applications at all?
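
For concreteness, that manual approach would look roughly like the sketch below (the service name "myservice" and the snapshot tag are hypothetical):

    # Find which node each task of the service runs on
    docker service ps myservice --format '{{.Node}}: {{.Name}}.{{.ID}}'

    # Then, on each of those nodes, find the container and commit it by hand
    docker ps --filter "name=myservice" --format '{{.ID}}'
    docker container commit <container-id> myservice-snapshot:latest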

Thanks in advance.


Solution

  • As Oliver suggests, any persistent data you have should be stored in Docker volumes, possibly using a volume driver from the Docker Store such as REX-Ray.

    You should have three general goals in your setup:

    1. The container itself is ephemeral. It can be destroyed and recreated by Swarm when it crashes and nothing is lost.
    2. Any files that are uniquely modified while the container is running are stored in Docker volumes, so they can be properly reattached to the new container when Swarm re-creates it. If it's a multi-node Swarm, you'll need shared storage and something like the REX-Ray or Cloudstor drivers to reconnect the volumes to wherever the container is recreated after a failure.
    3. Your Swarm services need health checks. That's how Swarm knows when a container is in a bad state and needs to be killed and replaced with a new one.

    When you combine these three principles, Swarm can handle your state and uptime issues for you. A minimal sketch combining them follows.
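
    As a sketch of all three points together, a service could be created along these lines. The service name, image, volume name, mount path, health endpoint, and the rexray driver choice are all assumptions for illustration:

        # A named volume backed by a shared-storage driver (point 2) follows
        # the task to whichever node Swarm reschedules it on.
        # The health check (point 3) assumes the image ships curl and the app
        # serves a /health endpoint; both are placeholders.
        docker service create \
          --name web \
          --replicas 2 \
          --mount type=volume,source=appdata,destination=/var/lib/app,volume-driver=rexray \
          --health-cmd "curl -f http://localhost:8080/health || exit 1" \
          --health-interval 10s \
          --health-timeout 2s \
          --health-retries 3 \
          myimage:latest

    With this in place, a task that fails its health check is killed and rescheduled automatically (point 1), and the replacement reattaches the same volume, so nothing that matters lives only inside the container. You can then watch Swarm do the replacement after a failure:

        # Task history shows failed tasks and their automatically started replacements
        docker service ps web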