We have several "Release jobs" in Jenkins that build and push a Docker image of the application to a Docker registry, update the project version in various files, and finally push the release tag to the corresponding Git repository.
This whole process runs inside an isolated Docker-in-Docker (DinD) container, which means that the Docker cache is completely blank every time these Jenkins jobs run.
In short: Jenkins instance --> starts a DinD container --> the Git repo is cloned inside the DinD container --> Dockerfiles with several layers, including the actual build of the applications, are built --> the Docker images are pushed to the registry --> the release is pushed to Git.
While on one hand this isolation helps us avoid some problems, on the other hand it makes the whole Docker build process particularly slow.
The `docker pull` and `docker push` steps certainly contribute to this delay to some degree, but that is a network-speed issue we cannot address at the moment.
However, another reason for this slowness is that, since the actual application (Maven or Angular) is built inside a "clean" Docker container where the `.m2` or `node_modules` directories are empty every time, all dependencies must be downloaded and installed on each run. We can obviously mount an `.m2` repository from Jenkins into the DinD container, but the images being built inside that DinD container will have no access to it.
We tried to `tar` the `.m2` and `node_modules` directories, `COPY` the archives into the image through the Dockerfile, then untar them and move them to the right path, but this workaround saved us 1-2 minutes at most.
We also tried to cache Maven dependencies using BuildKit, e.g. https://www.baeldung.com/ops/docker-cache-maven-dependencies#caching-using-buildkit, but it's obviously not exactly what we need.
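For reference, the BuildKit cache-mount approach from that article looks roughly like this (a sketch; the Maven image tag, paths, and goals are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM maven:3.8-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
# The cache mount persists /root/.m2 across builds on the same Docker daemon.
# With a fresh DinD daemon per job, this cache also starts empty every time.
RUN --mount=type=cache,target=/root/.m2 mvn -B package -DskipTests
```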
AFAIK it's not possible to mount volumes during `docker build`, which would be the ideal solution in our "blank cache" situation.
Has anyone met a similar problem and found a workaround to it?
In general, we would appreciate any suggestions on how to minimize the execution time of our release jobs, and optimize the whole process.
Thank you in advance.
Like SiHa mentioned in the comments, you can have an image with prebuilt dependencies inside. This is how I might do it:
For example, take the lightweight `node:16-alpine` image (for the Angular part of your project), clone the code from GitHub (use `--depth=1` to speed it up), run `npm install`, and push the resulting image to your registry. You don't have to rebuild this image until your dependencies change.
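That dependency image might look like this (the repository URL, image name, and tag are placeholders; adjust them to your setup):

```dockerfile
# deps.Dockerfile -- rebuilt only when dependencies change
FROM node:16-alpine
RUN apk add --no-cache git
# Shallow clone keeps the clone fast and the image history small
RUN git clone --depth=1 https://github.com/your-org/your-app.git /app
WORKDIR /app
# Bake the dependencies into the image
RUN npm install
```

Build and push it once, e.g. `docker build -f deps.Dockerfile -t registry.example.com/your-app-deps:latest . && docker push registry.example.com/your-app-deps:latest`, and repeat only when `package.json` changes.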
In your day-to-day builds, pull this image, clone the GitHub code, and only run `npm run build`, since the `node_modules` folder is already inside the image.
Use a multi-stage build (a second `FROM` stage with `COPY --from`) to copy the built code into your final image.
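Put together, the day-to-day Dockerfile might look like this (the image name, branch, and output path are placeholders, and the nginx final stage is just one possibility):

```dockerfile
# Day-to-day build: start from an image that already contains node_modules
FROM registry.example.com/your-app-deps:latest AS build
WORKDIR /app
# Refresh the source on top of the prebuilt dependencies
RUN git fetch --depth=1 origin main && git reset --hard origin/main
# Only the compile step runs; no dependency download needed
RUN npm run build

# Final stage: ship only the compiled artifacts
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```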
Using lightweight images and not cloning the whole repository should speed things up quite a bit, and using images with prebuilt dependencies may well be faster than copying and extracting dependencies inside an image; it depends on your hardware and network speeds.