Tags: docker, docker-compose, docker-in-docker

How do I re-share volumes between Docker-in-Docker containers?


I have a volume `./shared` mounted into my service main. Now I am trying to mount that same volume into another container, client, which is started with `docker-compose up client` from within the main container (Docker-in-Docker):

version: "3.8"

# set COMPOSE_PROJECT_NAME=default before running `docker-compose up main`

services:
  main:
    image: rbird/docker-compose:python-3.9-slim-buster
    privileged: true
    entrypoint: docker-compose up client # start client
    volumes:
      - //var/run/docker.sock:/var/run/docker.sock
      - ./docker-compose.yml:/docker-compose.yml
      - ./shared:/shared

  client:
    image: alpine
    entrypoint: sh -c "ls shared*"
    profiles:
      - do-not-run-directly
    volumes:
      - /shared:/shared1
      - ./shared:/shared2

The output I get is:

[+] Running 2/2
 - Network test_default   Created                                                                                                                                                                                       0.0s
 - Container test_main_1  Started                                                                                                                                                                                       0.9s
Attaching to main_1
Recreating default_client_1 ... done
Attaching to default_client_1
main_1  | client_1  | shared1:
main_1  | client_1  |
main_1  | client_1  | shared2:
main_1  | default_client_1 exited with code 0
main_1 exited with code 0

So the folders /shared1 and /shared2 are empty, even though the corresponding directories contain files on the host as well as in the main container.

How do I re-share volumes between containers?

Or is there a way to share a host directory between all containers, even the ones started by one of the containers?


Solution

  • The cleanest answer here is to delete the main: service and the profiles: block from the client: service, and run docker-compose on the host directly.
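Under that approach the Compose file collapses to something like the following sketch (service name kept from the question; the entrypoint is illustrative):

```yaml
version: "3.8"

services:
  client:
    image: alpine
    entrypoint: sh -c "ls /shared"
    volumes:
      # Resolved relative to the directory containing this file on the host
      - ./shared:/shared
```

Running `docker-compose up client` on the host makes `./shared` resolve against the host directory that actually contains the files, so the path confusion below never arises.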


    The setup you have here uses the host's Docker socket. (It is not "Docker-in-Docker"; that term generally refers to the even more confusing setup of running a second Docker daemon inside a container.) This means that the Docker Compose instance inside the container sends instructions to the host's Docker daemon telling it which containers to start. You're mounting the docker-compose.yml file in the container's root directory, so the ./shared path is interpreted relative to / as well.

    This means the host's Docker daemon is receiving a request to create a container with /shared mounted on /shared1 inside the new container, and also with /shared (./shared, relative to the path /) mounted on /shared2. The host's Docker daemon creates this container using host paths. If you look on your host system, you will probably see an empty /shared directory in the host filesystem root, and if you create files there they will appear in the new container's /shared1 and /shared2 directories.
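Put differently, the host daemon effectively receives a request like this hypothetical reconstruction (both bind mounts resolve to the same host path, not to anything inside the main container):

```yaml
# What the host's Docker daemon is actually asked to create.
services:
  client:
    image: alpine
    volumes:
      - /shared:/shared1   # literal host path /shared
      - /shared:/shared2   # ./shared relative to / is also host /shared
```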

    In general, there is no way to mount one container's filesystem into another. If you're trying to run docker (or docker-compose) from inside a container, you have to have external knowledge of which of your own filesystems are volume mounts and what exactly has been mounted where.


    If you can, avoid both the approaches of containers launching other containers and of sharing volumes between containers. If it's possible to launch another container, and that other container can mount arbitrary parts of the host filesystem, then you can pretty trivially root the entire host. In addition to the security concerns, the path complexities you note here are difficult to get around. Sharing volumes doesn't work well in non-Docker environments (in Kubernetes, for example, it's hard to get a ReadWriteMany volume and containers generally won't be on the same host as each other) and there are complexities around permissions and having multiple readers and writers on the same files.

    Instead, launch docker and docker-compose commands on the host only (as a privileged user on a non-developer system). If one container needs one-way publishing of read-only content to another, like static assets, create a custom image that uses COPY --from= to copy the content from one image into the other. Otherwise consider using purpose-built network-accessible storage (like a database) that doesn't specifically depend on a filesystem and knows how to handle concurrent access.
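For the read-only publishing case, a multi-stage build along these lines can work (image names and paths here are hypothetical):

```dockerfile
# Hypothetical example: publish static assets from one image into another
# at build time, instead of sharing a volume at run time.
FROM my-assets-image AS assets

FROM nginx:alpine
# Copy the files baked into the first image into the web server's root
COPY --from=assets /app/dist /usr/share/nginx/html
```

Because the copy happens at image-build time, the resulting image is self-contained and needs no shared volume, host path, or co-located containers.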