I am using Docker version 17.12.1-ce.
I have set up a swarm with two nodes: a stack is running on the manager, and I need to instantiate new containers on the worker (not within a service, but as stand-alone containers).
So far I have been unable to find a way to instantiate containers on the worker specifically, and/or to verify that the new container actually got deployed on the worker.
I have read the answer to this question, which led me to run containers with the -e option, specifying constraint:Role==worker, constraint:node==<nodeId> or constraint:<custom label>==<value>, and this GitHub issue from 2016 showing the docker info command outputting exactly the information I would need (i.e. how many containers are on each node at any given time). However, I am not sure whether that is a feature of the legacy stand-alone swarm, since for me docker info shows only the number of nodes, with no detailed information for each node. I have also tried docker -D info.
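For reference, the commands I tried look roughly like this (nginx is just an example image, and <nodeId> is a placeholder for an actual node ID):

docker run -d -e constraint:node==<nodeId> nginx    # constraint passed via -e, as the linked answer suggests
docker run -d -e constraint:Role==worker nginx      # same idea, constraining on the node's role
docker -D info                                      # debug-level info, hoping for per-node detail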
Specifically, I need to:
- instantiate stand-alone containers on the worker node specifically, and
- verify, from the manager, which containers are running on each node.
Swarm commands only track and show service-related containers. If you create one with docker run, then you'll need to use something like ssh node2 docker ps to see all containers on that node.
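For example, assuming your worker's hostname is node2 and you can reach it over SSH (the container name and image below are just placeholders):

ssh node2 docker run -d --name standalone-nginx nginx    # start a stand-alone container directly on that node
ssh node2 docker ps                                      # list every container on that node, service tasks included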
I recommend you do your best in a Swarm to keep all containers as part of a service. If you need a container to run on nodeX, you can create a service with a node constraint, using either the node's built-in attributes or custom labels. In this case you could restrict the single replica of that service to a node's hostname:
docker service create --constraint node.hostname==swarm2 nginx
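Custom node labels work the same way; here is a minimal sketch, assuming a worker named worker1 and a made-up label role=batch:

docker node update --label-add role=batch worker1                                # tag the node (run on a manager)
docker service create --replicas 1 --constraint node.labels.role==batch nginx    # pin the single replica to nodes carrying that label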
To see all tasks on a node from any swarm manager:
docker node ps <nodename_or_id>
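If you don't know the node's name or ID yet, you can look it up from a manager first (worker1 below is a placeholder):

docker node ls              # list all nodes in the swarm with their IDs and hostnames
docker node ps worker1      # show the service tasks scheduled on that node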