When I kustomize the cockroachdb helm chart with kubectl kustomize, the wrong Kubernetes API version is used for some resources.
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: apps
generators:
- cockroachdbChart.yaml
Helm chart inflator (cockroachdbChart.yaml):
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: crdb
name: cockroachdb
repo: https://charts.cockroachdb.com/
version: 10.0.3
releaseName: crdb
namespace: apps
includeCRDs: true
When I now run kubectl kustomize --enable-helm in the directory with those files, some resources are rendered with the v1beta1 API version, even though the Kubernetes server only supports v1:
» kubectl kustomize --enable-helm crdb-test | grep -A 5 -B 1 v1beta
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
--
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
--
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app.kubernetes.io/instance: crdb
    app.kubernetes.io/managed-by: Helm
These are the kubectl and helm versions I have installed:
» kubectl version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.25.4
» helm version --short
v3.10.3+gd506314
Is this a kustomize error?
Can I set the API version that kustomize uses in the kustomization file?
Kustomize doesn't know anything about what API versions are supported by your target environment, nor does it change the API versions in your source manifests.
If you're getting output with inappropriate API versions, the problem is not with Kustomize but with the source manifests.
We see the same behavior if we remove Kustomize from the equation:
$ helm template cockroachdb/cockroachdb | grep -B1 CronJob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
--
apiVersion: batch/v1beta1
kind: CronJob
metadata:
The problem here is the logic in the Helm chart, which looks like this:
{{- if and .Values.tls.enabled (and .Values.tls.certs.selfSigner.enabled (not .Values.tls.certs.selfSigner.caProvided)) }}
{{- if .Values.tls.certs.selfSigner.rotateCerts }}
{{- if .Capabilities.APIVersions.Has "batch/v1/CronJob" }}
apiVersion: batch/v1
{{- else }}
apiVersion: batch/v1beta1
{{- end }}
That relies on the value of .Capabilities.APIVersions.Has "batch/v1/CronJob", which requires Helm to query the remote Kubernetes environment to check whether the server supports that API version. That doesn't happen when using helm template (or Kustomize, which is really just wrapping helm template when exploding helm charts).
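As a workaround on the Helm side, helm template accepts an --api-versions flag that seeds .Capabilities.APIVersions for offline rendering. A sketch (the group/version/Kind strings below match what this chart checks; the release name and repo alias are assumptions):

```shell
# Tell helm template which API versions to assume the cluster supports,
# so the chart's .Capabilities.APIVersions.Has checks take the v1 branch.
# The flag is repeatable; values are group/version or group/version/Kind.
helm template crdb cockroachdb/cockroachdb \
  --api-versions batch/v1/CronJob \
  --api-versions policy/v1/PodDisruptionBudget
```

This only helps when invoking Helm directly; kustomize's generator does not expose these flags in the version shown above, which is why the patch below is needed.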
The correct fix would be for the CockroachDB folks to update the helm charts to introduce a variable that controls this logic explicitly.
You can patch this in your kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
- cockroachdbChart.yaml
patches:
- target:
    kind: CronJob
  patch: |
    - op: replace
      path: /apiVersion
      value: batch/v1
Which results in:
$ kustomize build --enable-helm | grep -B1 CronJob
apiVersion: batch/v1
kind: CronJob
--
apiVersion: batch/v1
kind: CronJob
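Note that more recent Kustomize releases (newer than the v4.5.4 bundled with kubectl here; I believe v5.x) add an apiVersions field to the Helm generator config, which is forwarded to helm template and would make the patch unnecessary. A sketch, assuming that field is available in your kustomize build:

```yaml
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: crdb
name: cockroachdb
repo: https://charts.cockroachdb.com/
version: 10.0.3
releaseName: crdb
namespace: apps
includeCRDs: true
# Forwarded to `helm template --api-versions`; not supported by the
# older kustomize version used in the question above.
apiVersions:
- batch/v1/CronJob
- policy/v1/PodDisruptionBudget
```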