I have a GitLab runner with this configuration:
runners:
  privileged: false
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "managed-ng-1"
        pod_labels_overwrite_allowed = ".*"
        [runners.kubernetes.pod_labels]
          "kubernetes.io/arch" = "amd64"
          "job_id" = "${CI_JOB_ID}"
          "job_name" = "${CI_JOB_NAME}"
          "pipeline_id" = "${CI_PIPELINE_ID}"
          "project" = "${CI_PROJECT_PATH}"
I have a .gitlab-ci.yml file with this variables section:
variables:
  KUBERNETES_POD_LABELS_1: "karpenter.k8s.aws/instance-local-nvme=256G"
  KUBERNETES_POD_LABELS_2: "kubernetes.io/arch=arm64"
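The variables are defined at the top level, so every job inherits them and nothing special is needed in the job itself. For reference, the job looks roughly like this (the script line is a hypothetical placeholder; job1 is the job name that shows up in the pod labels below):

job1:
  script:
    # Placeholder command; the actual job contents don't matter for the label overwrite
    - echo "pod labels should be overwritten for this job"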
When the job runs, the logs show this:
Preparing the "kubernetes" executor
"PodLabels" "karpenter.k8s.aws/instance-local-nvme" overwritten with "256G"
"PodLabels" "kubernetes.io/arch" overwritten with "arm64"
However, if I run kubectl describe pod against the job pod, these labels are not there:
Labels: job_id=297
        job_name=job1
        kubernetes.io/arch=amd64
        pipeline_id=116
        pod=runner-awq3dkxf-project-5-concurrent-0
        project=root_simple-cicd-test
I explicitly added a default value for "kubernetes.io/arch" in the runner config in case the overwrite mechanism only works when a label with that key already exists.
I don't know why this isn't working. Are there any other logs I should be looking at that might explain what is going on?
Thanks.
It turns out there is a bug in the Kubernetes executor: the overwrite is logged, but the labels are never actually applied to the pod. See https://gitlab.com/gitlab-org/gitlab-runner/-/issues/29168
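Until that issue is resolved, one possible workaround, sketched here under the assumption that the label values don't actually need to vary per job, is to set them statically in the runner config, since the static entries under [runners.kubernetes.pod_labels] clearly do make it onto the pod (kubernetes.io/arch=amd64 above):

runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "managed-ng-1"
        [runners.kubernetes.pod_labels]
          # Static labels are applied reliably, unlike per-job overwrites
          "kubernetes.io/arch" = "arm64"
          "karpenter.k8s.aws/instance-local-nvme" = "256G"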