Tags: c++, docker, gitlab, gitlab-ci-runner, docker-in-docker

Where should c++ application be compiled in GitLab CI Docker workflow?


I’m looking to understand how to properly structure my .gitlab-ci.yml and Dockerfile such that I can build a C++ application into a Docker container.

I’m struggling with where the actual compilation and link of the C++ application should take place within the CI workflow.

What I’ve done:

  • My current approach is to use Docker-in-Docker with a private GitLab Docker registry.
  • My .gitlab-ci.yml uses a dind Docker image service I created, based on the docker:19.03.1-dind image but including my certificates so it can talk securely to my private GitLab Docker registry.
  • I also have a custom base image, referenced by my .gitlab-ci.yml and based on docker:19.03.1, that includes what I need for building, e.g. cmake, build-base, mariadb-dev, etc.
  • I have my build script added to the .gitlab-ci.yml to build the application: cmake … && cmake --build . The Dockerfile then copies the final binary produced in my build step.

Having done all of this, it doesn’t feel quite right to me and I’m wondering if I’m missing the intent. I’ve tried to find a C++ example online to follow, but have been unsuccessful.

What I’m not fully understanding is the role of each player in the docker-in-docker setup: docker image, dind image, and finally the container I’m producing…

What I’d like to know…

  • Which should perform the build and contain the build environment: the base image specified in my .gitlab-ci.yml, or my Dockerfile?
  • If I build with the Dockerfile, how do I get the contents of the source into the Docker container? Do I copy the /builds dir? Should I mount it?
  • Where should the work be divided between .gitlab-ci.yml and the Dockerfile?
  • A reference to a working example of a C++ Docker application built with Docker-in-Docker GitLab CI.

.gitlab-ci.yml

image: $CI_REGISTRY/building-blocks/dev-mysql-cpp:latest
#image: docker:19.03.1

services:
  - name: $CI_REGISTRY/building-blocks/my-dind:latest
    alias: docker

stages:
  - build
  - release

variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_TLS_CERTDIR: "/certs"
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE:latest

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

build:
  stage: build
  script:
    - mkdir build

Solution

  • Both approaches are equally valid. If you look at other SO questions, one thing you'll probably notice is that Java/Docker images almost universally build a jar file on their host and then COPY it into an image, but Go/Docker images tend to use a multi-stage Dockerfile starting from sources.

    If you already have a fairly mature build system and your developers already have a very consistent setup, it makes sense to do more work in the CI environment (in your .gitlab-ci.yml file). Build your application the same way you already do, then COPY it into a minimal Docker image. This approach is also helpful if you need to ship both Docker and non-Docker artifacts. If you have a make dist style tar file and want to get a Docker image out of it, you could use a very straightforward Dockerfile like

    FROM ubuntu
    RUN apt-get update && apt-get install ...
    # the tar file is unpacked into /usr/local
    ADD dist/myapp.tar.gz /usr/local
    EXPOSE 12345
    # runs /usr/local/bin/myapp
    CMD ["myapp"]
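
In that arrangement the pipeline compiles, publishes the build output as a job artifact, and only then builds the image. A sketch of the two jobs, reusing the variables from the .gitlab-ci.yml above (the make dist target and the exact job layout are illustrative, not prescribed):

```yaml
build:
  stage: build
  script:
    - ./configure
    - make
    # produces dist/myapp.tar.gz, which the Dockerfile's ADD consumes
    - make dist
  artifacts:
    paths:
      - dist/

release:
  stage: release
  script:
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
```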
    

    On the other hand, if your developers have a variety of desktop environments and you're really trying to standardize things, and you only need to ship the Docker image, it could make sense to centralize most things in the Dockerfile. This has the advantage that every developer can run the exact build sequence locally, rather than having to push to the CI system just to try simple changes. Something built around GNU Autoconf might look more like

    # Build stage: install the toolchain and compile from source
    FROM ubuntu AS build
    RUN apt-get update \
     && apt-get install --no-install-recommends --assume-yes \
          build-essential \
          lib...-dev
    WORKDIR /app
    COPY . .
    RUN ./configure --prefix=/usr/local \
     && make \
     && make install
    
    # Runtime stage: only the shared libraries and the installed binaries
    FROM ubuntu
    RUN apt-get update \
     && apt-get install --no-install-recommends --assume-yes \
          lib...
    COPY --from=build /usr/local /usr/local
    CMD ["myapp"]
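
With the multi-stage approach the CI script shrinks to little more than docker build; a sketch reusing the dind service and variables from the .gitlab-ci.yml above:

```yaml
build:
  stage: build
  script:
    # the COPY . . in the build stage picks up the checked-out source tree
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
```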
    

    If you do the primary build in a Dockerfile, you need to COPY the source code in; volume mounts aren't available at this point in the sequence. CI systems should avoid bind-mounting source code into a container in any case: you want to run tests against the actual artifact you've built, not a hybrid of a built Docker image with all of its source code replaced.
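
Following that principle, a test job runs the image that was actually pushed rather than mounting the checkout; a sketch, assuming a test stage between build and release and a hypothetical ./run-tests script baked into the image:

```yaml
test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    # exercise the real built artifact, not a source bind mount
    - docker run --rm $CONTAINER_TEST_IMAGE ./run-tests
```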