Tags: docker, kubernetes, kubernetes-helm, kibana

Image name not resolving properly during Helm Upgrade/Install for Elastic Kibana: InvalidImageName error


I am attempting to deploy Kibana to my Amazon EKS cluster via Jenkins and am encountering an InvalidImageName error; I can't figure out why the image name isn't resolving properly.

Inside my Jenkinsfile I believe I'm providing everything needed to the helm upgrade command so that it points to my private registry (Sonatype Nexus Repository). I am using a local copy of the Helm chart that exists in my project, which I got from the following URL: https://helm.elastic.co/helm/kibana/kibana-8.5.1.tgz

What I am noticing is that the image is being rendered as map[registry:abc.xyz.com repository:bitnami/kibana tag:8-debian-12]:8.5.1, and I am unsure why the left-hand side is an object/map. The right-hand side is the default value for the image tag found in the values.yaml file of the Kibana Helm chart, instead of the value I passed as an argument.

Elasticsearch doesn't seem to be giving me an issue, and it's deployed using the same loop, so I'm not sure why Kibana is behaving differently.

When I look at the image within Nexus Repository, it gives me the following docker command:

docker pull bitnami/kibana:8-debian-12

The stage within Jenkins that performs this work has the following in it:

def helmCharts = [
    [image_repository:'bitnami/elasticsearch', image_tag:'8-debian-12', helm_release_name:'elasticsearch', helm_chart_directory:'charts/bitnami/elasticsearch',namespace:'logging'],
    [image_repository:'bitnami/kibana', image_tag:'8-debian-12', helm_release_name:'kibana', helm_chart_directory:'charts/bitnami/kibana', namespace:'logging'],
    // [image_repository:'bitnami/fluentd', image_tag:'', helm_release_name:'fluentd', helm_chart_directory:'charts/bitnami/fluentd'],
]

helmCharts.each { chart ->
    // Define the Helm command
    def helmCommand = """
        helm upgrade $chart.helm_release_name /workspace/$chart.helm_chart_directory \\
        --install \\
        --namespace $chart.namespace \\
        --create-namespace \\
        --cleanup-on-fail \\
        --timeout 2m0s \\
        --set image.registry=${DOCKER_REGISTRY} \\
        --set image.repository=$chart.image_repository \\
        --set image.tag=$chart.image_tag \\
        --set global.imagePullSecrets[0].name=${params.NEXUS_IMAGE_PULL_SECRET} \\
        --set global.defaultStorageClass=gp2 \\
        --set global.security.allowInsecureImages=true \\
        --kubeconfig /workspace/kubeconfig \\
        --debug
    """
    // Run Helm commands using Docker
    sh """
        docker run --rm \\
            -e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \\
            -e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \\
            -e AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION} \\
            -e HTTP_PROXY=http://${PROXY_USER}:${PROXY_PASS}@${PROXY_SERVER} \\
            -e HTTPS_PROXY=http://${PROXY_USER}:${PROXY_PASS}@${PROXY_SERVER} \\
            -e http_proxy=http://${PROXY_USER}:${PROXY_PASS}@${PROXY_SERVER} \\
            -e https_proxy=http://${PROXY_USER}:${PROXY_PASS}@${PROXY_SERVER} \\
            -v ${JENKINS_WORKSPACE}:/workspace \\
            ${HELM_AWS_CLI_IMAGE} sh -c '${helmCommand}'
    """
}

The following is the output when looking at the pod that is giving me issues:

PS C:\Users\******> kubectl get pods -n logging
NAME                              READY   STATUS             RESTARTS   AGE
elasticsearch-master-0            0/1     Pending            0          2m55s
pre-install-kibana-kibana-jkj7h   0/1     InvalidImageName   0          2m51s
PS C:\Users\******> kubectl describe pod pre-install-kibana-kibana-jkj7h -n logging
Name:             pre-install-kibana-kibana-jkj7h
Namespace:        logging
Priority:         0
Service Account:  pre-install-kibana-kibana
Node:             ip-**-***-***-***.***-***-west-1.compute.internal/**.***.**.***
Start Time:       Mon, 24 Feb 2025 13:33:59 -0600
Labels:           batch.kubernetes.io/controller-uid=15cea76c-4fa1-4a12-b44b-0f81130a1b64
                  batch.kubernetes.io/job-name=pre-install-kibana-kibana
                  controller-uid=15cea76c-4fa1-4a12-b44b-0f81130a1b64
                  job-name=pre-install-kibana-kibana
Annotations:      <none>
Status:           Pending
IP:               **.***.**.***
IPs:
  IP:           **.***.**.***
Controlled By:  Job/pre-install-kibana-kibana
Containers:
  create-kibana-token:
    Container ID:
    Image:         map[registry:abc.xyz.com repository:bitnami/kibana tag:8-debian-12]:8.5.1
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/share/kibana/node/bin/node
    Args:
      /usr/share/kibana/helm-scripts/manage-es-token.js
      create
    State:          Waiting
      Reason:       InvalidImageName
    Ready:          False
    Restart Count:  0
    Environment:
      ELASTICSEARCH_USERNAME:                    <set to the key 'username' in secret 'elasticsearch-master-credentials'>  Optional: false
      ELASTICSEARCH_PASSWORD:                    <set to the key 'password' in secret 'elasticsearch-master-credentials'>  Optional: false
      ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES:  /usr/share/kibana/config/certs/ca.crt
    Mounts:
      /usr/share/kibana/config/certs from elasticsearch-certs (ro)
      /usr/share/kibana/helm-scripts from kibana-helm-scripts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lngm8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  elasticsearch-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elasticsearch-master-certs
    Optional:    false
  kibana-helm-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kibana-kibana-helm-scripts
    Optional:  false
  kube-api-access-lngm8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason         Age                   From               Message
  ----     ------         ----                  ----               -------
  Normal   Scheduled      3m16s                 default-scheduler  Successfully assigned logging/pre-install-kibana-kibana-jkj7h to ip-**-***-**-***.***-***-west-1.compute.internal
  Warning  Failed         66s (x12 over 3m16s)  kubelet            Error: InvalidImageName
  Warning  InspectFailed  52s (x13 over 3m16s)  kubelet            Failed to apply default image tag "map[registry:abc.xyz.com repository:bitnami/kibana tag:8-debian-12]:8.5.1": couldn't parse image name "map[registry:abc.xyz.com repository:bitnami/kibana tag:8-debian-12]:8.5.1": invalid reference format

Any help would be greatly appreciated. Thank you.

EDIT: The following is what is inside the values.yaml file for Kibana with regard to the image:

image: "docker.elastic.co/kibana/kibana"
imageTag: "8.5.1"
imagePullPolicy: "IfNotPresent"

EDIT: The following is taken from the deployment manifest with regard to the image:

      containers:
      - name: kibana
        securityContext:
{{ toYaml .Values.securityContext | indent 10 }}
        image: "{{ .Values.image }}:{{ .Values.imageTag }}"
        imagePullPolicy: "{{ .Values.imagePullPolicy }}"
        env:
          {{- if .Values.elasticsearchURL }}
          - name: ELASTICSEARCH_URL
            value: "{{ .Values.elasticsearchURL }}"
          {{- else if .Values.elasticsearchHosts }}
          - name: ELASTICSEARCH_HOSTS
            value: "{{ .Values.elasticsearchHosts }}"
          {{- end }}
          - name: ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES
            value: "{{ template "kibana.home_dir" . }}/config/certs/{{ .Values.elasticsearchCertificateAuthoritiesFile }}"
          - name: SERVER_HOST
            value: "{{ .Values.serverHost }}"
          - name: ELASTICSEARCH_SERVICEACCOUNTTOKEN
            valueFrom:
              secretKeyRef:
                name: {{ template "kibana.fullname" . }}-es-token
                key: token
                optional: false

Solution

  • When you set the image reference in the resulting YAML manifest

    image: "{{ .Values.image }}:{{ .Values.imageTag }}"
    

    you expect image in the Helm values to be a string. This is true in the default Helm values, but when you run the install command

    helm upgrade ... \
      --set image.registry=${DOCKER_REGISTRY} \
      --set image.repository=$chart.image_repository \
      --set image.tag=$chart.image_tag \
      ...
    

    that particular --set syntax turns image into an object with nested fields registry, repository, and tag. What you're seeing in the pod spec is Go's default textual serialization of a string-keyed map, which is never a valid image reference (hence the "invalid reference format" error from the kubelet).
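
    A minimal reproduction outside the pipeline makes this easy to see. The following is a sketch: the chart name repro and its file layout are hypothetical, and it assumes only that helm is installed.

    # Build a throwaway chart whose template mirrors the Kibana chart's image line.
    mkdir -p repro/templates
    cat > repro/Chart.yaml <<'EOF'
    apiVersion: v2
    name: repro
    version: 0.1.0
    EOF
    cat > repro/values.yaml <<'EOF'
    image: "docker.elastic.co/kibana/kibana"
    imageTag: "8.5.1"
    EOF
    cat > repro/templates/pod.yaml <<'EOF'
    image: "{{ .Values.image }}:{{ .Values.imageTag }}"
    EOF

    # The nested --set keys silently replace the string default with a map...
    helm template repro ./repro \
      --set image.registry=abc.xyz.com \
      --set image.repository=bitnami/kibana \
      --set image.tag=8-debian-12
    # ...so the template renders the map's default Go string form:
    #   image: "map[registry:abc.xyz.com repository:bitnami/kibana tag:8-debian-12]:8.5.1"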

    Probably the easiest fix here is to change the pipeline code to match the structure that's in the chart's Helm values:

    helm upgrade ... \
      --set image="${DOCKER_REGISTRY}/$chart.image_repository" \
      --set imageTag=$chart.image_tag \
      ...
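
    As a quick sanity check before running the whole pipeline, you can render the chart locally with the same overrides. This is a sketch reusing the registry host and chart path from the question; both may differ in your setup:

    helm template kibana ./charts/bitnami/kibana \
      --set image="abc.xyz.com/bitnami/kibana" \
      --set imageTag=8-debian-12 | grep 'image:'
    # Expect: image: "abc.xyz.com/bitnami/kibana:8-debian-12"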
    

    It would also work to change the Helm template to match the values that are being passed in. (Do one or the other, not both!)

    {{- $i := .Values.image }}
    image: "{{ $i.registry }}/{{ $i.repository }}:{{ $i.tag }}"