Tags: docker, build, development-environment, docker-compose

Docker development workflow for compiled components in a docker-compose setup


I'm working on a service in a 'system' orchestrated using docker-compose. The service is written in a compiled language and I need to rebuild it when I make a change. I'm trying to find the best way to quickly iterate on changes.

I've tried two workflows; both rely on mounting the source directory into the container as a volume so the container always sees the latest source.

A.
  • Bring up all the supporting containers with docker-compose up -d
  • Stop the container for the service under development
  • Run a new container from the service's image: docker-compose run --name SERVICE --rm SERVICE /bin/bash
  • Within that container, compile and run the application on the exposed port.
  • To pick up changes, stop the running process, then rebuild and run again.
B.
  • (requires Dockerfile CMD to build and then run the service)
  • Stop the service: docker-compose kill SERVICE
  • Restart the service: docker-compose up -d --no-deps SERVICE

The problem is that both take far longer to restart than running the service locally on my laptop, outside Docker. This setup works well for interpreted languages that can hot-reload changed files, but I've yet to find a comparably fast workflow for services written in compiled languages.


Solution

  • I would do this:

    Run docker-compose up but:

    • use a host volume for the directory of the compiled binary instead of the source
    • use an entrypoint that does something like

    entrypoint.sh:

    #!/bin/bash
    # On SIGHUP, kill the running binary; the loop below restarts it.
    trap 'pkill -f the_binary_name' SIGHUP
    trap 'exit' SIGTERM

    while true; do
      # Run the binary in the background and wait on it: bash defers trap
      # handlers while a foreground child is running, so a plain foreground
      # run would never react to the SIGHUP.
      ./the_binary_name &
      wait $!
    done
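
The restart-on-SIGHUP pattern can be demonstrated without Docker. One subtlety: bash defers trap handlers while a foreground child is running, so the child must be started in the background and `wait`ed on for the traps to stay responsive. In this sketch `sleep` stands in for the service binary, and all names (`supervisor`, `logfile`) are illustrative only:

```shell
#!/usr/bin/env bash
# Demo of the entrypoint's restart loop, using `sleep` as the "binary".
logfile=$(mktemp)

supervisor() {
  trap 'kill "$child" 2>/dev/null' HUP             # SIGHUP: kill child; loop restarts it
  trap 'kill "$child" 2>/dev/null; exit 0' TERM    # SIGTERM: clean shutdown
  while true; do
    sleep 30 &                                     # stand-in for ./the_binary_name
    child=$!
    echo "started" >> "$logfile"
    wait "$child"                                  # wait is interruptible, so traps fire
  done
}

supervisor &
sup=$!
sleep 1
kill -HUP "$sup"     # simulates: docker kill -s SIGHUP container_name
sleep 1
kill -TERM "$sup"    # simulates stopping the container
wait "$sup" 2>/dev/null

echo "process starts: $(grep -c started "$logfile")"   # initial start + one restart
```

Sending HUP makes `wait` return, the trap kills the old child, and the loop immediately starts a fresh one, which is exactly what the entrypoint does with the rebuilt binary.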
    

    Write a script to rebuild the binary, and copy it into the volume used by the service in docker-compose.yml:

    # Run a container to compile and build the binary
    docker run -ti -v $SOURCE:/path -v $DEST:/target some_image build_the_binary

    # Copy it to the host volume directory
    cp $DEST/... /volume/shared/with/running/container

    # Signal the container so its entrypoint restarts the process
    docker kill -s SIGHUP container_name
    

    So to rebuild the binary you run this script, which mounts the source and a destination directory as volumes into a build container. You can skip the copy step if $DEST is the same directory shared with the running container. Finally, the script signals the running container so its entrypoint kills the old process (which was running the old binary) and starts the new one.

    If the shared volume makes compiling inside a container too slow, you can also compile on the host and keep only the copy and signaling steps, so the binary still runs in a container.
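
That host-side variant might look like the following sketch; the build command (`make` here), the binary name, and all paths are placeholders for whatever your toolchain actually uses:

```shell
#!/usr/bin/env bash
# Hypothetical host-side rebuild script: compile locally, then copy and signal.
# `make`, `the_binary_name`, and the paths below are all placeholders.
set -euo pipefail

make                                       # compile on the host, no build container
cp build/the_binary_name /volume/shared/with/running/container/
docker kill -s SIGHUP container_name       # entrypoint restarts with the new binary
```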

    This solution has the added benefit that your "runtime" image doesn't need all the dev dependencies. It could be a very lean image with just a bare OS base.
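
To tie the pieces together, here is a hedged sketch of what such a lean service entry might look like in docker-compose.yml; every name in it (app, lean-runtime-image, ./bin, /app, container_name) is a placeholder to adapt to your project:

```yaml
# Hypothetical service definition for the scheme described above.
services:
  app:
    image: lean-runtime-image        # bare OS base, no compiler or dev packages
    container_name: container_name   # the target of `docker kill -s SIGHUP`
    volumes:
      - ./bin:/app                   # host directory that receives the rebuilt binary
    entrypoint: /app/entrypoint.sh
```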