Tags: kubernetes-helm, sentry, kubernetes-pvc, argocd

Can't change Helm zookeeper's storageClass in Sentry charts values.yaml


We want to use Sentry for error logging (on-prem for our use case), and since we use k8s for everything, we chose the Sentry Kubernetes charts.

We are using a cloud provider where leaving the storageClass for a PVC blank/empty does not create the PVC and instead leaves it stuck in Pending status, so we need to set the storageClass manually, which is described more or less if you dig into the values.yaml file of the Sentry Kubernetes Helm charts.

The magic needed is storageClass: csi-disk, which lets our cloud provider know it can attach PVCs of that type (instead of doing nothing, as described above).
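For illustration, this is the shape of a claim our provider will actually provision; a minimal standalone PVC sketch with a hypothetical name, assuming the csi-disk StorageClass exists in the cluster. (Note the raw Kubernetes API field is storageClassName, while Helm charts usually expose it as storageClass in their values.)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data                   # hypothetical name, for illustration only
  annotations:
    everest.io/disk-volume-type: SSD   # same annotation we pass via the chart values
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-disk           # explicit class so the CSI provisioner creates a disk
  resources:
    requests:
      storage: 8Gi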

What we've done below also matches the values.yaml provided by Bitnami (https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml), which we are supposed to consult according to the chart's own values.yaml: https://github.com/sentry-kubernetes/charts/blob/develop/sentry/values.yaml#L714

And all the other Bitnami subcharts work (PostgreSQL etc.). I have left one example below and commented out the rest.

But no matter what I do, I cannot get storageClass parsed into the desired manifest, and I can't do a live manifest edit since it's a StatefulSet (volumeClaimTemplates are immutable), so I somehow need to get storageClass parsed correctly.

I've already spent quite a lot of time trying everything, looking for typos, etc.

We use Helm and ArgoCD, and this is the ArgoCD app:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sentry-dev
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: sentry
    server: https://kubernetes.default.svc
  project: default
  source:
    repoURL: https://sentry-kubernetes.github.io/charts
    chart: sentry
    targetRevision: 13.0.1
    helm:
      values: |
        ingress:
          enabled: true
          annotations:
            kubernetes.io/ingress.class: nginx
            nginx.ingress.kubernetes.io/use-regex: "true"
            nginx.ingress.kubernetes.io/ssl-redirect: "true"
            nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
            cert-manager.io/cluster-issuer: "letsencrypt-prod"
          hostname: ...
          tls:
          # ...
        clickhouse:
          # ..
        filestore:
          # ..
        redis:
          master:
          #...
          replica:
          #...
        rabbitmq:
          persistence:
            enabled: true
            annotations:
              everest.io/disk-volume-type: SSD
            labels:
              failure-domain.beta.kubernetes.io/region: eu-de
              failure-domain.beta.kubernetes.io/zone: 
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 8Gi
            storageClass: csi-disk
        kafka:
        # ...
        postgresql:
        # ...
        zookeeper:
          enabled: true
          persistence:
            enabled: true
            annotations:
              everest.io/disk-volume-type: SSD
            labels:
              failure-domain.beta.kubernetes.io/region: eu-de
              failure-domain.beta.kubernetes.io/zone: 
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 8Gi
            storageClass: csi-disk
            storageClassName: csi-disk # tried both storageClass and storageClassName, together and separately!

The desired manifest is always stuck at the following (changing metadata or any other part of the spec also fails, so somehow the chart does not accept any values.yaml changes):

  volumeClaimTemplates:
    - metadata:
        annotations: null
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 8Gi
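
For comparison, a correctly rendered template would carry the class through into the claim spec, something like:

  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: csi-disk
        resources:
          requests:
            storage: 8Gi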

I also have a GitHub issue open: https://github.com/sentry-kubernetes/charts/issues/606


Solution

  • And I finally got my answer from the GitHub issue, so I am reposting it here:

    Kafka has its own internal ZooKeeper dependency, so you can do something like this:

    kafka:
      persistence:
        storageClass: csi-disk
      zookeeper:
        persistence:
          storageClass: csi-disk
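
    In other words, the ZooKeeper StatefulSet in question is presumably rendered by the Kafka subchart, which only reads values under kafka.zookeeper.*, so the top-level zookeeper block never reaches it. Applied to the ArgoCD Application above, the relevant part of the helm.values block becomes (a trimmed sketch):

    spec:
      source:
        helm:
          values: |
            kafka:
              persistence:
                storageClass: csi-disk
              zookeeper:
                persistence:
                  storageClass: csi-disk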