Here is my setup:
docker volume inspect pgdata
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "local-persist",
        "Labels": {},
        "Mountpoint": "/mnt/pgdata",
        "Name": "pgdata",
        "Options": {
            "mountpoint": "/mnt/pgdata"
        },
        "Scope": "local"
    }
]
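(The volume uses the local-persist plugin; a volume with those options is created roughly like this — the exact command is an assumption reconstructed from the inspect output above:)
# create a named volume backed by /mnt/pgdata via the local-persist plugin
docker volume create -d local-persist -o mountpoint=/mnt/pgdata pgdata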
I have two disks/partitions:
df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             63G     0   63G   0% /dev
tmpfs            13G  856K   13G   1% /run
/dev/sda1       939G   10G  889G   2% /
tmpfs            63G     0   63G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            63G     0   63G   0% /sys/fs/cgroup
s3fs             16E     0   16E   0% /codede
tmpfs            13G     0   13G   0% /run/user/1000
/dev/sdb1        46T   24K   44T   1% /mnt/pgdata
As you can see, the partition mounted at /mnt/pgdata has 44 TB of storage available, and I want all of the database's data to be stored there, because it will grow huge really fast.
But when I start adding data to the database, the 1 TB disk fills up instead, and eventually my system will fail/stop.
database:
  image: DB_IMAGE
  volumes:
    - pgdata:/var/lib/postgresql/data
  command: postgres -c 'max_connections=500'
  ports:
    - "6543:5432"
  secrets:
    - postgres-user
    - postgres-password
  environment:
    POSTGRES_USER_FILE: /run/configs/postgres-user
    POSTGRES_PASSWORD_FILE: /run/secrets/postgres-password
    POSTGRES_DB: ts
  deploy:
    placement:
      constraints: [node.role == manager]
The workers are running on the other nodes, and the mapserver and REST API are expected to have a very small impact on the hard disk, so the only running service that should affect storage is the database.
Why does it still fill up the smaller disk instead of only the big one?
EDIT: Apparently my container still takes up a huuuuge amount of space on the disk. The container is the ts_database container.
EDIT 2: So I found out why it is that big: logging. My <container ID>-json.log file takes up the 630 GB. After checking the docs https://docs.docker.com/config/containers/logging/json-file/ I hope I can resolve the issue.
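(For anyone hitting the same issue: a minimal sketch of capping the json-file logs per service in docker-compose, per the docs linked above — the size numbers are just example values, not from my setup:)
database:
  image: DB_IMAGE
  logging:
    driver: "json-file"
    options:
      max-size: "100m"   # rotate each json log file at ~100 MB (example value)
      max-file: "5"      # keep at most 5 rotated files (example value)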
Did you describe the volume inside docker-compose.yml? It should be described there even if it was already created externally:
version: "3.9"

# service description
services:
  frontend:
    image: node:lts
    volumes:
      - myapp:/home/node/app

# volume description
volumes:
  myapp:
    external: true
Or, in your case:

volumes:
  pgdata:
    external: true
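Putting it together with your service definition, it would look roughly like this (a sketch trimmed to the relevant parts):

services:
  database:
    image: DB_IMAGE
    volumes:
      - pgdata:/var/lib/postgresql/data
    # ... rest of your service definition ...

volumes:
  pgdata:
    external: true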
"So apparently inside my /var/lib/docker directory there is a subfolder with 125 GB of disk space usage"

/var/lib/docker/volumes is the default path for Docker volumes.
It looks like your container is using a different pgdata volume. Maybe you did not describe it correctly (see my answer above). It is hard to guess what happened from the information provided.
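One way to check which volume the container is actually using (the container name is a placeholder; take the real one from docker ps):

# list the volumes Docker knows about
docker volume ls

# show what is actually mounted into the running container
docker inspect -f '{{ json .Mounts }}' <container-name>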
If the volume really is described correctly and it still doesn't work, I can recommend two workarounds:

Use a direct bind mount instead of a named volume:

volumes:
  - /mnt/pgdata:/var/lib/postgresql/data
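In the context of your service that would look roughly like this (just a sketch; with a bind mount the host directory must exist and be writable by the postgres user inside the container):

services:
  database:
    image: DB_IMAGE
    volumes:
      # bind mount the big disk directly, no named volume involved
      - /mnt/pgdata:/var/lib/postgresql/data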
Mount the second drive at /var/lib/docker
so that all Docker data lives on the big disk by default:
# stop docker
service docker stop
# unmount current folder
umount /mnt/pgdata
# rename old docker folder
mv /var/lib/docker /var/lib/docker_old
# create new folder
mkdir /var/lib/docker
# mount
mount /dev/sdb1 /var/lib/docker
# copy docker files to new position
cp -pr /var/lib/docker_old/* /var/lib/docker/
# check for hidden files (names starting with `.`) and copy them too if any exist
ls -la /var/lib/docker_old/
# remove old folder
rm -fr /var/lib/docker_old
# add /dev/sdb1 -> /var/lib/docker to /etc/fstab so it is mounted permanently
# start docker and enjoy
service docker start
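The /etc/fstab entry mentioned above would look something like this (ext4 is an assumption — use the filesystem /dev/sdb1 is actually formatted with, or better, its UUID from blkid):

# /etc/fstab — mount the big disk as Docker's data root at boot
/dev/sdb1   /var/lib/docker   ext4   defaults   0   2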