I have an application which, among other things, lets you spin up Docker containers running custom code, with the help of an API, which is also dockerised.
So from the API container I do:
exec(`curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" -d \'{"Image": "botimage", "ExposedPorts": {"${PORT}/tcp": {"HostPort": "${PORT}"}}, "HostConfig": {"Binds": ["${dirPath}/bots/api/bots/strategies:/usr/src/bots/strategies", "${dirPath}/bots/api/database:/usr/src/bots/database", "${dirPath}/postgres/data:/usr/src/bots/data"], "NetworkMode": "bitmex_backend"}, "PortBindings": { "${PORT}/tcp": [{ "HostPort": "${PORT}" }] }, "Env": ["TOPIC=${TOPIC}","BOTNAME=${BOT_NAME}","EXEC_ENV=${EXEC_ENV}","BITMEX_KEYS=${BITMEX_KEYS}","TIME_FRAME=${TIME_FRAME}","PORT=${PORT}"], "Cmd": ["node", "app.js"]}\' -X POST http:/v1.24/containers/create?name=${BOT_NAME}`,
  (err, stdout, stderr) => {
    if (err) {
      console.error(err)
      return;
    }
    var id = JSON.parse(stdout).Id;
    // Would this work with the name too?
    logEvent(LOG_LEVELS.info, RESPONSE_CODES.LOG_MESSAGE_ONLY, `Initializing containerised strategy`)
    exec(`curl --unix-socket /var/run/docker.sock -X POST http:/v1.24/containers/${id}/start`, (err, stdout, stderr) => {
      if (err) {
        console.error(err)
        return;
      }
    });
  });
And it spins up a new container, again dynamically.
I'm wondering how I can bring this new container down from another container.
From localhost I can simply do:
curl --unix-socket /var/run/docker.sock -X POST http:/v1.24/containers/test/stop
curl --unix-socket /var/run/docker.sock -X DELETE http:/v1.24/containers/test
However, when I try to do the same from another Docker container with:
exec(`curl --unix-socket /var/run/docker.sock -X POST http:/v1.24/containers/test/stop`)
exec(`curl --unix-socket /var/run/docker.sock -X DELETE http:/v1.24/containers/test`)
I get the following error:
{ Error: Command failed: curl --unix-socket /var/run/docker.sock -X POST http:/v1.24/containers/test/stop
curl: (7) Couldn't connect to server
    at ChildProcess.exithandler (child_process.js:299:12)
    at ChildProcess.emit (events.js:193:13)
    at maybeClose (internal/child_process.js:999:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:266:5)
  killed: false,
  code: 7,
  signal: null,
  cmd:
I think this should be addressable with something like external_links when I'm building the container.
Any idea if this is the right call, or how else I can tackle the issue? It definitely looks like a networking problem.
The only way to let Docker containers control other Docker containers the way you are describing is to expose the Docker socket (/var/run/docker.sock) to the "controlling" container. You can do that like this:
darkstar:~$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
<my_image>
darkstar:~$ docker exec -u root -it <container id> /bin/bash
Now, as root inside the container, you can install the Docker CLI. (This is not strictly necessary, depending on how you plan to manipulate Docker from inside the container. I am also assuming a Debian-like Linux; YMMV.)
root@guest:/# apt-get update
root@guest:/# apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
root@guest:/# rel=$(. /etc/os-release; echo "$ID")
root@guest:/# curl -fsSL https://download.docker.com/linux/${rel}/gpg > /tmp/dkey
root@guest:/# apt-key add /tmp/dkey
root@guest:/# add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/${rel} \
$(lsb_release -cs) stable"
root@guest:/# apt-get update
root@guest:/# apt-get -y install docker-ce
I would suggest doing a docker commit at this point (from the host, not in the container) to save the container's state as a new image, so you don't need to repeat the above steps every time a rebuild happens.
Now the container should have access to the socket:
root@guest:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED ...
69340bc13bb2 my_image "/sbin/tini -- /usr/…" 8 minutes ago ...
Whether this is a good idea or not is debatable. I would suggest not doing this if there is any way to avoid it. It's a security hole that essentially throws out the window some of the main benefits of using containers: isolation and control over privilege escalation.