python, docker, docker-compose, continuous-integration, microservices

Multi-repository docker-compose


I have two services, in two different GitLab repositories, deployed to the same host. I am currently using supervisord to run all of the services; the CI/CD for each repository pushes the code to the host.

I am trying to replace supervisord with Docker. What I did was the following:

  1. Set up a Dockerfile for each service.
  2. Created a third repository containing only a docker-compose.yml, whose CI runs docker-compose up to build and run the two services. I expect this repository to only be deployed once.

I am looking for a way to have the docker-compose deployment update automatically whenever I deploy one of the two services.

Edit: Essentially, I am trying to figure out the best way to use docker-compose with a multi-repository setup and a single host.

My docker-compose:

version: "3.4"
services:
    redis:
        image: "redis:alpine"
    api:
        build: .
        command: gunicorn -c gunicorn_conf.py --bind 0.0.0.0:5000 --chdir server "app:app" --timeout 120
        volumes:
            - .:/app
        ports:
            - "8000:8000"
        depends_on:
            - redis
    celery-worker:
        build: .
        command: celery worker -A server.celery_config:celery
        volumes:
            - .:/app
        depends_on:
            - redis
    celery-beat:
        build: .
        command: celery beat -A server.celery_config:celery --loglevel=INFO
        volumes:
            - .:/app
        depends_on:
            - redis
    other-service:
        build: .
        command: python other-service.py
        volumes:
            - .:/other-service
        depends_on:
            - redis

Solution

  • If you're setting this up in the context of a CI system, the docker-compose.yml file should just run the images; it shouldn't also take responsibility for building them.

    Do not overwrite the code in a container using volumes:.

    You mention each service's repository has a Dockerfile, which is a normal setup. Your CI system should run docker build there (and typically docker push). Then your docker-compose.yml file just needs to mention the image: that the CI system builds:

    version: "3.4"
    services:
        redis:
            image: "redis:alpine"
        api:
            image: "me/django:${DJANGO_VERSION:-latest}"
            ports:
                - "8000:8000"
            depends_on:
                - redis
        celery-worker:
            image: "me/django:${DJANGO_VERSION:-latest}"
            command: celery worker -A server.celery_config:celery
            depends_on:
                - redis
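        # (celery-beat and other-service from your original file would follow the same image: pattern)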
    

    I hint at docker push above. If you're using Docker Hub, a cloud-hosted image registry, or a self-hosted private registry, the CI system should run docker push after it builds each image, and (if it's not Docker Hub) the image: lines need to include the registry address.
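
    As a concrete sketch, each service repository's .gitlab-ci.yml could contain a build job along these lines. The job layout and the docker:dind setup are assumptions about your runner configuration; $CI_REGISTRY_IMAGE, $CI_COMMIT_SHORT_SHA, and the login variables are standard GitLab CI predefined variables:

    # .gitlab-ci.yml in each service repository -- a sketch, not a drop-in file
    build-image:
        stage: build
        image: docker:latest
        services:
            - docker:dind    # dind networking/TLS details depend on your runner setup
        script:
            - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
            - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
            - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"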

    The other important question here is what to do on rebuilds. I'd recommend giving each build a unique Docker image tag; a timestamp or a source-control commit ID both work well. In the docker-compose.yml file I show above, I use an environment variable to specify the actual image tag, so your CI system can run

    DJANGO_VERSION=20200113.1114 docker-compose up -d
    

    Then Compose will know about the changed image tag, and will be able to recreate the containers based on the new images.
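
    How the new tag reaches the host depends on how your pipelines already deploy there. As one hedged example, assuming the repository holding the docker-compose.yml is checked out at /srv/compose on the host and the CI runner can SSH to it (both of these are assumptions, not details from your setup), a deploy job could be roughly:

    deploy:
        stage: deploy
        script:
            - ssh deploy@your-host "cd /srv/compose && DJANGO_VERSION=$CI_COMMIT_SHORT_SHA docker-compose pull && DJANGO_VERSION=$CI_COMMIT_SHORT_SHA docker-compose up -d"

    Writing the current tag into a .env file next to the docker-compose.yml is an alternative, since Compose reads variable values from that file automatically.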

    (This approach is highly relevant in the context of cluster systems like Kubernetes. Pushing images to a registry is all but required there, and since changing the name of an image: is what triggers a redeployment in Kubernetes, a unique image tag per build is also all but required. Aside from there being multiple, more complex YAML files, the overall approach in Kubernetes would be very similar to what I've laid out here.)
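
    For comparison only, a skeletal Kubernetes Deployment pins the image the same way; every name below is a placeholder, and the manifest is trimmed to the essentials:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
        name: api
    spec:
        replicas: 1
        selector:
            matchLabels:
                app: api
        template:
            metadata:
                labels:
                    app: api
            spec:
                containers:
                    - name: api
                      image: registry.example.com/me/django:20200113.1114   # changing this tag is what triggers a rollout
                      ports:
                          - containerPort: 8000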