I seem to have misunderstood something about volumes. I have a docker-compose file with two services: jobs, a Flask API built from a Dockerfile (see below), and mongo, based on the official MongoDB image. I have two volumes: .:/code, which links my host working directory to the /code folder in the container, and a named volume, mongodata.
version: "3"
services:
  jobs:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    environment:
      FLASK_ENV: ${FLASK_ENV}
      FLASK_APP: ${FLASK_APP}
    depends_on:
      - mongo
  mongo:
    image: "mongo:3.6.21-xenial"
    restart: "always"
    ports:
      - "27017:27017"
    volumes:
      - mongodata:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
volumes:
  mongodata:
Dockerfile for the jobs service:
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=job-checker
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run", "--host=0.0.0.0"]
Every time I remove these containers and re-run, everything is fine; I still have my data in the mongodata volume. But when I check the volume list, I can see that a new volume with a long name has been created from .:/code, for example:
$ docker volume ls
DRIVER              VOLUME NAME
local               55c08cd008a1ed1af8345cef01247cbbb29a0fca9385f78859607c2a751a0053
local               abe9fd0c415ccf7bf8c77346f31c146e0c1feeac58b3e0e242488a155f6a3927
local               job-checker_mongodata
Here I ran docker-compose up, then removed the containers, then ran up again, so I have two volumes from my working folder. Is it normal that every up creates a new volume instead of reusing the previous one?
Thanks
Hidden at the end of the Docker Hub mongo image documentation is a note:

This image also defines a volume for /data/configdb ...
The image's Dockerfile in turn contains the line
VOLUME /data/db /data/configdb
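You can check this yourself by asking Docker which volumes an image declares. A quick check against the same image tag as in your compose file (the exact output shown is what I'd expect, modulo key order):

$ docker image inspect mongo:3.6.21-xenial --format '{{ json .Config.Volumes }}'
{"/data/configdb":{},"/data/db":{}}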
When you start the container, you mount your own volume over /data/db, but you don't mount anything on the second path. This causes Docker to create an anonymous volume there, which is the volume you're seeing with only a long hex ID.
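If you'd rather not accumulate anonymous volumes at all, one option is to mount a second named volume over that path as well. A minimal sketch against your compose file, using a hypothetical volume name mongoconfig:

  mongo:
    volumes:
      - mongodata:/data/db
      - mongoconfig:/data/configdb

volumes:
  mongodata:
  mongoconfig: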
It should be safe to remove the extra volumes, especially if you're sure they're not attached to a container and they don't have interesting content.
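For example, you can list the volumes no container references and delete them individually, or prune them all at once (the long ID below is the first one from your docker volume ls output):

# list volumes not referenced by any container
$ docker volume ls -qf dangling=true
# remove a specific volume by name/ID
$ docker volume rm 55c08cd008a1ed1af8345cef01247cbbb29a0fca9385f78859607c2a751a0053
# or remove all unused volumes at once; depending on your Docker version this
# can also delete named volumes (like mongodata) that no container is using
$ docker volume prune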
This behavior has nothing to do with the bind mount in the other container; bind mounts never show up in the docker volume ls listing at all.
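If you want to see the bind mount, inspect the container itself instead; for example (assuming the default compose container name job-checker_jobs_1):

# the /code entry appears here with "Type": "bind", not as a volume
$ docker inspect -f '{{ json .Mounts }}' job-checker_jobs_1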