
Best practice for spinning up container-based (development) environments


OCI containers are a convenient way to package a suitable toolchain for a project, so that development environments are consistent and new project members can start quickly by simply checking out the project and pulling the relevant containers.

  • Of course I am not talking about projects that simply need a C++ compiler or Node.js. I am talking about projects that need specific compiler packages that don't work on anything newer than Fedora 22, projects with special tools that need to be installed manually into strange places, working on multiple projects whose tools are not co-installable, and so on. For this kind of thing it is easier to have a container than to follow twenty installation steps and then pray that leftovers from a previous project don't break things for you.

However, starting a container with a compiler to build a project requires quite a few options on the docker (or podman) command line. Besides the image name, you usually need:

  • mount of the project working directory
  • user id (because the container should access the mounted files as the user running it)
  • if the tool needs access to some network resources, it might also need
    • some credentials, via environment or otherwise
    • ssh agent socket (mount and environment variable)
  • if the build process involves building docker containers
    • docker socket (mount); buildah may work without special setup though
  • and if it is a graphical tool (e.g. an IDE)
    • X socket mount and the DISPLAY environment variable
    • --ipc=host to make shared memory work
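Put together, the options above typically end up in an ad-hoc wrapper along these lines. This is a minimal sketch, not a standard: the image name, mount paths, and environment variable names are assumptions, and the command is printed rather than executed (a real wrapper would `exec` it).

```shell
#!/usr/bin/env bash
# dev.sh - hypothetical wrapper assembling the docker/podman options above.
set -euo pipefail

runtime="${CONTAINER_RUNTIME:-docker}"   # or podman
image="${DEV_IMAGE:-registry.example.com/myproject/toolchain:latest}"

args=(run --rm -it)
# mount the project working directory
args+=(-v "$PWD:/work" -w /work)
# run as the invoking user so created files have the right owner
args+=(--user "$(id -u):$(id -g)")
# forward the ssh agent socket, if one is running
if [[ -n "${SSH_AUTH_SOCK:-}" ]]; then
  args+=(-v "$SSH_AUTH_SOCK:/ssh-agent" -e SSH_AUTH_SOCK=/ssh-agent)
fi
# X socket and DISPLAY for graphical tools; --ipc=host for shared memory
if [[ -n "${DISPLAY:-}" ]]; then
  args+=(-e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --ipc=host)
fi
# docker socket, if the build itself builds images
if [[ "${WITH_DOCKER:-0}" == 1 ]]; then
  args+=(-v /var/run/docker.sock:/var/run/docker.sock)
fi

# print the assembled command; replace echo with exec to actually run it
echo "$runtime ${args[*]} $image ${*:-}"
```

Every project ends up with its own variant of such a script, which is exactly the duplication the question is about.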

And then it can get more complicated by other factors. E.g. if the developers are in different departments and don't have access to the same docker registry, their images may be named differently, because docker does not support symbolic names for registries (podman does, though).
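For the naming problem, podman can resolve short image names through alias entries in a registries.conf drop-in file; a sketch (the registry and image names here are made up):

```toml
# /etc/containers/registries.conf.d/myproject.conf  (podman only)
[aliases]
# everyone runs "podman pull myproject/toolchain"; each department
# can point the alias at its own registry
"myproject/toolchain" = "registry.example.com/devtools/myproject-toolchain"
```

docker has no equivalent mechanism, so with docker the image name usually ends up as yet another variable in the wrapper script.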

Is there some standard(ish) way to handle these options or is everybody just using ad-hoc wrapper scripts?


Solution

  • I use the Visual Studio Code Remote - Containers extension to connect the source code to a Docker container that holds all the tools needed to build the code (e.g. npm modules, Ruby gems, ESLint, Node.js, Java). The container contains all the "tools" used to develop/build/test the source code.

    Additionally, you can also put the VSCode extensions into the Docker image to help keep VSCode IDE tools portable as well. https://code.visualstudio.com/docs/remote/containers#_managing-extensions

    You can provide a Dockerfile in the source code for newcomers to build the Docker image themselves or attach VSCode to an existing Docker container.

    If you need to run a server inside the Docker container for testing purposes, you can expose a port on the container via VSCode, and start hitting the server inside the container with a browser or cURL from the host machine.
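    The Dockerfile, the container-side extensions, and the forwarded ports can all be declared in `.devcontainer/devcontainer.json` in the repository; a minimal sketch (the name, extension ID, and port are illustrative):

    ```jsonc
    // .devcontainer/devcontainer.json
    {
      "name": "myproject",
      // build the image from a Dockerfile committed with the sources
      "build": { "dockerfile": "Dockerfile" },
      // extensions installed inside the container rather than on the host
      "extensions": ["dbaeumer.vscode-eslint"],
      // expose a test server in the container to the host, e.g. for cURL
      "forwardPorts": [3000]
    }
    ```

    With this file checked in, newcomers get the same container, tools, and ports as everyone else when VSCode reopens the folder in the container.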

    Be aware of the known limitations of the Visual Studio Code Remote - Containers extension. The one that impacts me the most is the beta support for Alpine Linux. I have often noticed that some of the popular Docker Hub images are based on Alpine.