
docker container production vs. testing environment


I am looking for best-practice suggestions on how to deploy a container from a testing to a production environment when the container needs different configuration files in the two environments. For example, in production it will connect to customer environments, while in testing it will connect to internal VMs. One way to solve this: I could store both configuration files in the container and then pass an environment variable at deploy time to decide which files to use in testing vs. production. Can that be done? Are there other, better ways to solve this problem? Thanks!


Solution

  • There are two straightforward approaches to this:

    1. Provide the actual configuration via environment variables (docker run -e option, Compose environment:, Kubernetes env:). These are not a "test or production" switch but the actual settings, like host names and credentials.

    2. Inject the complete configuration via a mount of some sort (docker run -v option, Compose per-service volumes:, Kubernetes volumes: in combination with a ConfigMap).
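    To make the two options concrete on the plain docker run command line, they look roughly like this (the image name, hostname, and file paths are invented placeholders, not from the original answer):

```shell
# Approach 1: pass individual settings as environment variables
docker run -e PGHOST=db.internal.example.com registry.example.com/app

# Approach 2: bind-mount a complete, environment-specific config file
# over a fixed path the application reads at startup
docker run -v "$PWD/config.prod.ini:/etc/app/config.ini" registry.example.com/app
```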

    The exact mechanisms for doing this depend on your application framework and how much of this is preconfigured for you. If your application can already read a configuration file, then the injected-file approach might match up better with the code you already have.
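    A common pattern is to support both at once: prefer an environment variable and fall back to a mounted file. Here is a minimal Python sketch of that idea (the helper name, config path, and `[database]` section are assumptions for illustration, not part of the original answer):

```python
import configparser
import os

def database_host(config_path="/etc/app/config.ini"):
    """Resolve the database host for the current environment.

    Order of precedence (hypothetical layout):
    1. PGHOST environment variable (docker run -e / Compose environment:)
    2. A mounted config file (docker run -v / a Kubernetes ConfigMap)
    3. A local-development default
    """
    host = os.environ.get("PGHOST")
    if host:
        return host
    parser = configparser.ConfigParser()
    if parser.read(config_path):  # read() returns [] if the file is absent
        return parser.get("database", "host", fallback="localhost")
    return "localhost"
```

    With this in place, the same image works unmodified in every environment; only what the orchestrator injects changes.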

    Avoid "baking in" a specific configuration to an image. Doing so forces you to rebuild and redeploy the application whenever any part of that configuration changes, when all you really want is to change a setting. It also means you must commit a file, get a code review, and redeploy if you ever want to deploy to a different environment.

    As a specific example, imagine that your application depends on a PostgreSQL database. In a test setup you'll want to provision that as a related container, but in production you might use a hosted database system like Amazon's Relational Database Service (RDS). If the database host name is controlled by the standard $PGHOST environment variable then this is easy to set up:

    # docker-compose.prod.yml
    version: '3.8'
    services:
      app:
        image: registry.example.com/app
        environment:
          PGHOST: mydb.123456789012.us-east-1.rds.amazonaws.com
    
    # docker-compose.test.yml
    version: '3.8'
    services:
      app:
        image: registry.example.com/app
        environment:
          PGHOST: db
      db:
        image: postgres:14
    

    Now if you decide you want to set up a production-like pre-production environment, also backed by RDS, it's simple enough to deploy the identical image to that environment but with a different PGHOST value; you don't need to change anything in the image to support the new environment.
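    Such a pre-production file would be a near-copy of the production one, differing only in its PGHOST value (the hostname below is an invented placeholder):

```yaml
# docker-compose.preprod.yml
version: '3.8'
services:
  app:
    image: registry.example.com/app
    environment:
      PGHOST: preprod-db.123456789012.us-east-1.rds.amazonaws.com
```

    You then pick the environment at launch time with the -f option, e.g. docker compose -f docker-compose.preprod.yml up -d.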