I am using Jenkins to build and deploy my project on a Jenkins build node. The project uses the dockerfile-maven-plugin to build, tag, and push a Docker image to Nexus. It has recently come to my attention that I have been using up all the disk space on the Jenkins build node, likely because the Docker daemon keeps images and layers cached locally after my Jenkins job completes.
How can I either stop Docker from caching this data on the build node, or clean it up safely after each build? Running
docker system prune --volumes --all --force
does not seem viable, as I could delete other people's data on the shared node.
Edit: currently I am using a pre-build bash command to remove all of my dangling images. Is there any built-in functionality that Docker provides for this, or is this the best I am going to get?
# Remove untagged (<none>) leftovers from previous myAppTag builds.
value=$(docker images | grep "myAppTag" | grep "<none>" | awk '{print $3}')
if [ -n "$value" ]; then docker rmi $value; fi
I do not think Docker has any built-in functionality to manage storage on an ongoing basis. The prune commands do what you want, but you will need to invoke them yourself.
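For example, a periodic cleanup job on the node could run something like the following. This is only a sketch: the 168h retention window is an arbitrary assumption, and the builder prune step assumes a BuildKit-era Docker.

#!/bin/sh
# Remove dangling images only (the untagged <none>:<none> leftovers
# from rebuilds); tagged images and their layers are kept.
docker image prune --force

# Also drop build cache older than a week. The 168h window is an
# example value, not a recommendation.
docker builder prune --force --filter "until=168h"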
You can safely run a plain docker system prune (without --all or --volumes) at any time to clean up unused items, so that is something you could build into a periodic job, or indeed into every job. For your specific case, once you have built the image and pushed it to the repo, you do not need it locally again. So you could remove/untag the image and then run a prune command to reclaim the space.
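A minimal post-build step along those lines might look like this. The image name here is a hypothetical placeholder; substitute whatever coordinates your dockerfile-maven-plugin configuration actually produces.

#!/bin/sh
# Hypothetical image coordinates - replace with your real registry/repo/tag.
IMAGE="nexus.example.com/myrepo/myapp:${BUILD_NUMBER}"

# Untag the image we just pushed; its layers become dangling unless
# another tag still references them...
docker rmi "$IMAGE" || true

# ...then a dangling-only prune reclaims that space without touching
# other users' tagged images or running containers.
docker image prune --force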
But ... the reason for the layering system and the retention of layers is to allow re-use of layers that do not change. If you continually remove all image layers, each build will be slow because there are no cached layers to re-use. The cost of slow builds might outweigh the cost of the storage. It's all a trade-off.
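One middle ground, sketched below, is to prune only dangling images older than some window, so recently produced layers stay available for cache re-use while old ones are reclaimed. The 72h cut-off is purely an illustrative assumption.

# Remove dangling images older than 72 hours; anything newer stays
# around as build cache. 72h is an example value to tune per node.
docker image prune --force --filter "until=72h"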