Tags: kubernetes, kubernetes-helm, configmap

Why are variables not taken from the ConfigMap when installing a Helm chart?


I created my Helm chart manually:

├── Chart.yaml
├── Dockerfile
├── locustfile.py
├── requirements.in
├── requirements.txt
├── templates
│   ├── ConfigMap.yaml
│   └── deployment.yaml
└── values.yaml

I install it with the following command:

helm upgrade --install loadgenerator . 

The ConfigMap is not applied and therefore the pods do not start. The pod description shows the following error:

Events:
  Type     Reason         Age                    From               Message
  ----     ------         ----                   ----               -------
  Normal   Scheduled      47m                    default-scheduler  Successfully assigned default/loadgenerator-66c46bd489-5rm8v to minikube
  Warning  Failed         45m (x12 over 47m)     kubelet            Error: InvalidImageName
  Warning  InspectFailed  2m17s (x214 over 47m)  kubelet            Failed to apply default image tag "$(image_init_container)": couldn't parse image reference "$(image_init_container)": invalid reference format

ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: app
data:
  image_init_container: "busybox:latest"
  image_app: "loadgenerator"
  namespace: default
  env: prod

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: loadgenerator  
spec:
  selector:
    matchLabels:
      app: loadgenerator
  replicas: 1
  template:
    metadata:
      labels:
        app: loadgenerator        
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
    spec:
      serviceAccountName: default
      terminationGracePeriodSeconds: 5
      restartPolicy: Always
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      initContainers:
      - command:
        - /bin/sh
        - -exc
        - |
          echo "Init container pinging frontend: ${FRONTEND_ADDR}..."
          STATUSCODE=$(wget --server-response http://${FRONTEND_ADDR} 2>&1 | awk '/^  HTTP/{print $2}')
          if test $STATUSCODE -ne 200; then
              echo "Error: Could not reach frontend - Status code: ${STATUSCODE}"
              exit 1
          fi
        name: frontend-check
        image: $(image_init_container)
        envFrom:
          - configMapRef:
              name: app
        env:
        - name: FRONTEND_ADDR
          value: "frontend:80"
      containers:
      - name: main
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
              - all
          privileged: false
          readOnlyRootFilesystem: true
        image: $(image_app)
        envFrom:
          - configMapRef:
              name: app
        env:
        - name: FRONTEND_ADDR
          value: "frontend:80"
        - name: USERS
          value: "10"
        resources:
          requests:
            cpu: 300m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi

My task is to write a Helm chart where the environment variables come from a ConfigMap. Why is the ConfigMap not applied? How do I create it correctly, and where is my mistake?


Solution

  • The problem isn't that your ConfigMap is failing to be injected into the Deployment. As the error shows, the kubelet takes the image value $(image_init_container) literally: Kubernetes only expands $(VAR) references in command, args, and env values, never in the image field. The same would happen once it reached $(image_app).
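
    For contrast, Kubernetes does expand $(VAR) references inside command, args, and env values. With your app ConfigMap loaded via envFrom, a reference like the one below would resolve correctly; only the image field is taken verbatim (a contrived illustration, not part of your chart):

    envFrom:
      - configMapRef:
          name: app
    command: ["sh", "-c", "echo init image is $(image_init_container)"]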

    Helm renders the templates into actual Kubernetes manifests and passes them to the API server. Once the Deployment's Pods start, they will contain the environment variables defined in the ConfigMap, so that part of your chart is fine. The image name, however, must be resolved during Helm's rendering step; the API server will never substitute it afterwards.
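
    You can see exactly what the API server receives by rendering the chart locally before installing it. A quick check like this, run from the chart directory, would have surfaced the literal placeholders:

    # render the chart without installing it, then inspect the image fields
    helm template loadgenerator . | grep 'image:'
    #   image: $(image_init_container)
    #   image: $(image_app)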

    If you want to set the image name dynamically, define the values in values.yaml and reference them in the Deployment template. Here's a simple example:

    # values.yaml
    image:
      name: loadgenerator
      tag: latest
    

    # templates/deployment.yaml
    containers:
      - name: main
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
              - all
          privileged: false
          readOnlyRootFilesystem: true
        image: {{ .Values.image.name }}:{{ .Values.image.tag }}
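
    The init container image can be templated the same way, which removes the need to keep image names in the ConfigMap at all. A minimal sketch (the initImage key is my own naming, not something your chart already defines):

    # values.yaml
    initImage:
      name: busybox
      tag: latest

    # templates/deployment.yaml
    initContainers:
      - name: frontend-check
        image: {{ .Values.initImage.name }}:{{ .Values.initImage.tag }}

    Values can then be overridden at install time without editing the chart, e.g. helm upgrade --install loadgenerator . --set image.tag=v2.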