I've been using Git for a few years for some projects, but I'm new to Docker.
I would like to find a workflow that lets me use Git and Docker together properly for my team projects.
Today, without Docker, we use named branches for development. When a feature is finished, we merge it into "master". When we want to go to production, we create a versioned preprod branch from master (e.g. preprod-2.3.0) for testing. If fixes are needed, we push them to the current preprod branch and merge them back into master. When the preprod branch is ready (automatic and manual tests pass), we create a prod branch with the same version as preprod (e.g. prod-2.3.0). For urgent fixes in production, we create a new branch from the preprod branch (e.g. preprod-2.3.1) and then follow the normal process again (tests + prod -> prod-2.3.1).
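To make this concrete, the Git side of one release cycle could look roughly like this (the branch names, including feature/login, are just examples):
# feature development on a named branch, then merged into master
git checkout -b feature/login master
# ... commits ...
git checkout master && git merge feature/login

# cut a versioned preprod branch from master for testing
git checkout -b preprod-2.3.0 master
# fixes found during tests go to preprod and are merged back into master
git commit -am "fix: ..." && git checkout master && git merge preprod-2.3.0

# once tests pass, cut the prod branch with the same version
git checkout -b prod-2.3.0 preprod-2.3.0

# urgent production fix: new preprod from the previous one, then prod again
git checkout -b preprod-2.3.1 preprod-2.3.0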
With Docker, for development, we want to build local images named $PROJECT_NAME/$IMAGE_NAME:dev (project/api:dev, project/db:dev, project/webui:dev...). Every time we rebuild a project locally, the previous development image is overwritten; keeping them all would quickly become unmanageable. For testing we would also use these dev images.
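For example (the per-service directory layout is an assumption):
# rebuild the local dev images; each build replaces the previous :dev tag
docker build -t project/api:dev   ./api
docker build -t project/db:dev    ./db
docker build -t project/webui:dev ./webui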
Where I still have questions is the release to production.
Several blogs/articles build Docker images after the code is pushed to Git, run the unit tests, and finally save the images that pass. A valid image is then tagged ":latest" and used for deployment to production. In our case, we could use this system to save valid images of the prod-$VERSION branches, using the $VERSION and latest tags to version the images.
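In the CI that could look roughly like this (the registry address, $VERSION, and npm test as the test command are placeholders/assumptions):
# build from the prod-$VERSION branch, test, then publish only if tests pass
docker build -t registry.example.com/project/api:$VERSION .
docker run --rm registry.example.com/project/api:$VERSION npm test

docker tag  registry.example.com/project/api:$VERSION registry.example.com/project/api:latest
docker push registry.example.com/project/api:$VERSION
docker push registry.example.com/project/api:latest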
My problem with this system is that I feel like I'm losing one of Docker's benefits. When I run my tests locally, I test the code but also the dev image itself, and it is that same image that should be used in CI and in production. Here, instead, the image is rebuilt several times by the CI, for master, preprod and finally prod, before being frozen. If the Hub base images (e.g. nginx:latest, node:latest) have changed in the meantime, this can cause problems. See: https://nickjanetakis.com/blog/docker-tip-18-please-pin-your-docker-image-versions
Another solution would be to push the images as early as preprod, with a preprod tag. Once they are tested, I add the "prod" and "latest" tags. But if a base image is updated while the preprod image is being built, I can still waste time figuring out why something worked in dev and not in production. At least it avoids discrepancies between pre-production and production.
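Promoting the tested image by retagging it, instead of rebuilding it, might look like this (registry address and version are again placeholders):
# build and push once, at preprod time
docker build -t registry.example.com/project/api:preprod-2.3.0 .
docker push registry.example.com/project/api:preprod-2.3.0

# after the tests pass, promote the *same* image without rebuilding it
docker tag  registry.example.com/project/api:preprod-2.3.0 registry.example.com/project/api:prod-2.3.0
docker tag  registry.example.com/project/api:preprod-2.3.0 registry.example.com/project/api:latest
docker push registry.example.com/project/api:prod-2.3.0
docker push registry.example.com/project/api:latest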
I also couldn't find, for Docker, an equivalent of the Node.js lock system (package.json/package-lock.json), which lets you run npm install / npm ci (download the latest versions of the packages and update the lock file with the exact versions used / reinstall exactly the tree described by the lock file). See: https://docs.npmjs.com/files/package-lock.json
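For reference, this is the npm behaviour I would like to mimic for Docker images:
# in development: resolve the newest allowed versions and write them to package-lock.json
npm install

# in CI / production: install exactly what package-lock.json pins, fail if it is out of sync
npm ci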
Do you have a system/idea to guarantee that a rebuilt image is identical to the previous one (like a lock file)? Or a workflow that lets a team work together while publishing versioned images directly from dev?
I finally built my own lock system, as a bash alias or alternatively as one or two scripts:
Bash alias (adds a docker ci command and extends docker build):
dockeralias() {
    # Keep the full argument list, the subcommand (build, ci, run...) and
    # everything that follows it.
    args=$@
    args_cat=$1
    shift
    args_without_cat_files_and_final=""
    dockerfile="Dockerfile"
    # Walk the options (all but the last argument, which is the build context)
    # to find the Dockerfile passed with -f/--file; collect the other options.
    while test $# -gt 1; do
        case "$1" in
            -f|--file)
                shift
                dockerfile=$1
                shift
                ;;
            -f=*)
                dockerfile=${1#"-f="}
                shift
                ;;
            --file=*)
                dockerfile=${1#"--file="}
                shift
                ;;
            *)
                args_without_cat_files_and_final="$args_without_cat_files_and_final $1 "
                shift
                ;;
        esac
    done
    lockfile="$dockerfile-lock"
    args_final=$@
    # "docker ci": build from the lock file instead of the Dockerfile.
    if [ "$args_cat" == "ci" ]; then
        echo "Build from $lockfile"
        command docker build $args_without_cat_files_and_final --file $lockfile $args_final
        return
    fi
    # Any other subcommand: run the real docker command first.
    if ! command docker $args; then
        return
    fi
    # "docker build": after a successful build, (re)generate the lock file by
    # replacing every FROM image with its immutable digest.
    if [ "$args_cat" == "build" ]; then
        echo "Make $lockfile from $dockerfile"
        cp $dockerfile $lockfile
        grep ^FROM $lockfile | while read -r line ; do
            image=`echo $line | cut -d" " -f2`
            digest=`command docker inspect --format='{{index .RepoDigests 0}}' $image`
            echo "$image > $digest"
            sed -i -e "s|$image|$digest|g" $lockfile
        done
    fi
}
alias docker=dockeralias
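With the alias loaded, a typical round trip could look like this (image name and tags are just examples):
# dev: normal build, which also writes/updates Dockerfile-lock with pinned digests
docker build -t project/api:dev .

# CI: rebuild from the lock file, i.e. from the exact base image digests used in dev
docker ci -t project/api:2.3.0 .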
Alternatively, docker-build.sh to replace docker build in dev:
#!/bin/bash
# Run the real build first, with the arguments untouched.
docker build "$@"

# Find the Dockerfile passed with -f/--file (default: Dockerfile).
dockerfile="Dockerfile"
while test $# -gt 0; do
    case "$1" in
        -f|--file)
            shift
            dockerfile=$1
            shift
            ;;
        -f=*)
            dockerfile=${1#"-f="}
            shift
            ;;
        --file=*)
            dockerfile=${1#"--file="}
            shift
            ;;
        *)
            shift
            ;;
    esac
done

# Generate the lock file: copy the Dockerfile and replace every FROM image
# with its immutable digest.
lockfile="$dockerfile-lock"
echo "Make $lockfile from $dockerfile"
cp $dockerfile $lockfile
grep ^FROM $lockfile | while read -r line ; do
    image=`echo $line | cut -d" " -f2`
    digest=`docker inspect --format='{{index .RepoDigests 0}}' $image`
    echo "$image > $digest"
    sed -i -e "s|$image|$digest|g" $lockfile
done
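In dev it would then replace docker build, for example:
# builds project/api:dev and regenerates Dockerfile-lock next to the Dockerfile
./docker-build.sh -t project/api:dev .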
And docker-ci.sh to replace docker build in CI (preprod, prod...), or just use "docker build --file Dockerfile-lock ." directly:
#!/bin/bash
# Collect all options except the Dockerfile (-f/--file) and the final
# build-context argument, and work out which lock file to build from.
args=""
dockerfile="Dockerfile"
while test $# -gt 1; do
    case "$1" in
        -f|--file)
            shift
            dockerfile=$1
            shift
            ;;
        -f=*)
            dockerfile=${1#"-f="}
            shift
            ;;
        --file=*)
            dockerfile=${1#"--file="}
            shift
            ;;
        *)
            args="$args $1 "
            shift
            ;;
    esac
done

# Build from the lock file so the pinned base image digests are used.
lockfile="$dockerfile-lock"
echo "Build from $lockfile"
docker build $args --file $lockfile "$@"
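And on the CI side, for example:
# builds the versioned image from Dockerfile-lock instead of Dockerfile
./docker-ci.sh -t project/api:2.3.0 .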
Here is an example of what the scripts do:
From this Dockerfile
FROM node:latest
EXPOSE 8080
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
CMD npm start
they create this Dockerfile-lock
FROM node@sha256:d2180576a96698b0c7f0b00474c48f67a494333d9ecb57c675700395aeeb2c35
EXPOSE 8080
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
CMD npm start
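The CI then builds from the lock file, so the base image is resolved by its digest rather than by the moving latest tag, for example:
# exact same base image as in dev, regardless of what node:latest points to now
docker build --file Dockerfile-lock -t project/api:2.3.0 .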
I also wrote a feature request on the Docker forum: https://forums.docker.com/t/dockerfile-lock/67031