I am trying to use docker for development by mounting a folder from the container to the host, as the standard host-to-container approach doesn't work well for a certain project I am working on.
Currently I do that using bindfs
(which also maps user permissions) as suggested in this issue:
pid=$( docker inspect -f '{{.State.Pid}}' "$container")
root=/proc/$pid/root
sudo bindfs --map=1000/"$(id -u)" "$root$source" "$target"
However, taking rootfs
from proc
seems very fragile as it depends on the pid
of the process. Is there an alternative way to do this?
If there is a way of finding the rootfs regardless of the storage driver used, I could use that with bindfs
instead. The question Where is the rootfs of container in host machine after docker 1.6.0 says it can vary according to the storage driver used, but doesn't say how to get it.
I am wary of using a solution that relies on a specific storage driver, for performance reasons. I am also wondering whether this is even possible, since it is a "union filesystem" - so will there be a single "static" rootfs
at all?
If I understand correctly, you don't necessarily want to access the whole filesystem of the container, but rather only the relevant directories containing the application.
If your main intent is to ship your run-time environment as a single bundled container image while allowing your users to access and modify the application files, then using an ordinary bind mount and copying the files on startup would be the easiest way in my opinion, i.e.
docker run -v "$PWD":/app-data/ your_app
This will bind-mount your current directory as (an empty) /app-data/
into the container. Then in the container you need to copy the application files into that directory (if not already present):
#!/bin/bash
# script /docker-entrypoint.sh
# test if volume is already initialized
# e.g. see if src directory exists
if ! [ -d /app-data/src ]; then
    # copy all files (including hidden ones) from the shipped /app-dist/
    # to the actual run-time location
    cp -a /app-dist/. /app-data/
fi
# continue executing the command
exec "$@"
This will copy the shipped application files into the mounted volume on the host when they are not already present, so the user can access and edit them there. If you always want to use the latest files from the current image, you can just cp
them unconditionally. The required files need to be put into /app-dist/
in the Dockerfile.
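For example, a minimal Dockerfile sketch; the base image, the src/ directory and the script file name are assumptions to adapt to your project:
FROM debian:stable-slim

# ship the application files inside the image at /app-dist/
COPY src/ /app-dist/src/

# entrypoint that copies /app-dist/ into the mounted /app-data/ on startup
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]

# default command passed to the entrypoint; replace with your app's start command
CMD ["bash"]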
The benefit of this approach is that it is very easy to support since it uses ordinary volumes. The drawback is of course the increased startup time since all the files have to be copied first.
The next best approach would be to use unnamed volumes and bind-mount them to an accessible path:
# start container with volume
docker run -d -v /app-data/ --name your_app_container your_app
# get underlying volume path of /app-data on the host
VOLUME_HOST_PATH="$(docker inspect -f '{{range .Mounts}}{{if eq .Destination "/app-data"}}{{.Source}}{{end}}{{end}}' your_app_container)"
# bind-mount the volume path to a user-accessible path
sudo mount --bind "$VOLUME_HOST_PATH" "$target"
Instead of starting the container with an explicit -v,
you can also use a VOLUME
instruction in the Dockerfile.
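A minimal sketch of that variant, again with assumed paths; note that here the files are placed directly at /app-data/ in the image, so docker can copy them into the volume on first start:
FROM debian:stable-slim

# ship the application files directly at the run-time location
COPY src/ /app-data/src/

# docker creates an anonymous volume here and initializes it
# with the image's contents of /app-data/ when the container starts
VOLUME /app-data/

CMD ["bash"]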
The advantage of this approach is that docker
will do the copying for you when initializing the volume - so no need for a custom entrypoint - and you won't depend on the $pid
of the container as in your solution.
But the drawback of increased startup time remains, as the files still need to be copied. Also, this might not work reliably with all storage drivers.
Lastly, your own solution of bind-mounting the container's /proc/$pid/root/
should work with all storage drivers, since /proc/$pid/root/
gives you access to the whole filesystem as seen by the container, i.e. including all additional (bind-)mounts and volumes within its namespace.
In any case, using bindfs
should not be necessary when sharing volumes between the docker
host and an actual macOS / Windows machine, since the mapping of access permissions is done automatically between the different operating systems.
At the same time, this kind of setup may rule out the latter solutions: the bind-mounting will only work inside the Linux VM that is used as the docker host under the hood, which will not translate to a mapped path on the macOS/Windows host.
Another approach that just crossed my mind: exposing filesystem access via the network.
You could add another service providing file access via a network protocol such as FTP, SFTP or SMB, or integrate it into an existing service. This would eliminate the unnecessary copying of data and will work with all setups and storage drivers, since all it needs is an exposed network port.
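A minimal sketch, assuming you switch to a named volume (here app-data) shared between the application container and an SFTP sidecar based on the community atmoz/sftp image; the user, password, uid and port are placeholders to adjust:
# start the application with a named volume instead of an anonymous one
docker run -d -v app-data:/app-data/ --name your_app_container your_app

# expose the same volume via SFTP on port 2222
# (user "dev", password "dev", uid 1000 - adjust to your needs)
docker run -d \
    -p 2222:22 \
    -v app-data:/home/dev/app-data \
    atmoz/sftp dev:dev:1000
Clients could then reach the files with e.g. sftp -P 2222 dev@localhost, without any copying involved.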
The "downside" of this is that this will not (automatically) map the volumes into the local filesystem of the host. This may or may not be a problem for your use case.