Tags: docker, kubernetes, environment-variables, containers

How to use Docker container variables inside Kubernetes pod


I have a Flask web application running as a Docker image that is deployed to a Kubernetes pod running on GKE. There are a few environment variables necessary for the application which are included in the docker-compose.yaml like so:

...
services:
  my-app:
    build: 
      ...
    environment:
      VAR_1: foo
      VAR_2: bar
...

I want to keep these environment variables in the docker-compose.yaml so I can run the application locally if necessary. However, when I go to deploy this using a Kubernetes deployment, these variables are missing from the pod and it throws an error. The only way I have found to resolve this is to add the following to my deployment.yaml:

containers:
      - name: my-app
        ...
        env:
          - name: VAR_1
            value: foo
          - name: VAR_2
            value: bar
...

Is there a way to migrate the values of these environment variables directly from the Docker container image into the Kubernetes pod?

I have tried researching this in the Kubernetes and Docker documentation and through Google searches, and the only solutions I can find say to just include the environment variables in the deployment.yaml, but I'd like to retain them in the docker-compose.yaml for the purposes of running the container locally. I couldn't find anything that explained how Docker container environment variables and Kubernetes environment variables interact.


Solution

  • Let us assume that docker-compose and Kubernetes work the same way: both take a ready-to-use image and schedule a new container or pod based on it.

    By default the image accepts a set of env variables. To pass those variables in, docker-compose manages them one way and Kubernetes another; it is only a matter of syntax.

    So you can use the same image with Compose and with Kubernetes, but the syntax for passing the env variables will differ.
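    From the application's point of view nothing changes. Here is a minimal sketch (assuming the Flask app reads its configuration with `os.environ`, which is not shown in the question) of how the variables are consumed identically no matter which tool injected them:

    ```python
    import os

    # The app reads its configuration from the environment at startup.
    # Whether docker-compose or Kubernetes set the variables, by the
    # time the process runs they look exactly the same.
    def load_config():
        return {
            "VAR_1": os.environ.get("VAR_1", "default-1"),
            "VAR_2": os.environ.get("VAR_2", "default-2"),
        }
    ```

    The only thing that differs between the two tools is how `VAR_1` and `VAR_2` get into the environment in the first place.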

    If you want them to persist regardless of the deployment tool, you can always hardcode those env variables in the image itself, in other words, in the Dockerfile that you used to build the image.

    I don't recommend this approach, of course, and it might not work for you if you are using pre-built official images, but below is an example of a Dockerfile with the env variables included.

    FROM alpine:latest
    
    # this is how you hardcode it (the key=value form is the recommended syntax)
    ENV VAR_1=foo
    ENV VAR_2=bar
    
    COPY helloworld.sh /helloworld.sh
    
    RUN chmod +x /helloworld.sh
    
    CMD ["/helloworld.sh"]
    

    If you want to move toward managing this in a much better way, you can use an .env file with your docker-compose so you can update all the variables in one place, especially when your compose file has several apps that share the same variables.

      app1:
        image: ACRHOST/app1:latest
        env_file:
          - .env
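
    As a sketch, the .env file referenced above is just plain key=value lines (using the variables from the question):

    ```
    VAR_1=foo
    VAR_2=bar
    ```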
    

    And on the Kubernetes side, you can create a ConfigMap, link your pods to that ConfigMap, and then update the values in the ConfigMap only.

    https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/

    kubectl create configmap <map-name> <data-source>
    

    Also note that you can populate your ConfigMap directly from the .env file that you use with Docker; check the link above.
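
    To sketch that end to end (the ConfigMap name my-app-config is made up here), you can build the map from the same .env file and expose every key to the pod with envFrom:

    ```yaml
    # First, create the map from the env file:
    #   kubectl create configmap my-app-config --from-env-file=.env
    # Then reference it in deployment.yaml so each key becomes an env variable:
    containers:
      - name: my-app
        envFrom:
          - configMapRef:
              name: my-app-config
    ```

    With this setup, the .env file stays the single source of truth for both local Compose runs and the GKE deployment.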