
Unable to reuse existing Persistent Volume (GlusterFS)


Description: Unable to bind a new PVC to an existing PV that already contains data from a previous run (and was dynamically created using the Gluster storage class).

  • Installed a helm release which created a PVC and dynamically generated a PV from the GlusterStorage class.
  • However, for various reasons we needed to bring down the release (helm del) and re-install it (helm install), and we want to use the existing PV instead of creating a new one.

I tried a few things. First, I followed the instructions here: https://github.com/kubernetes/kubernetes/issues/48609. However, that did not work for the GlusterFS storage solution; after I performed the needed steps, it complained:

  Type     Reason            Age                From                              Message
  ----     ------            ----               ----                              -------
  Warning  FailedScheduling  <unknown>          default-scheduler                 error while running "VolumeBinding" filter plugin for pod "opensync-wifi-controller-opensync-mqtt-broker-fbbd69676-bmqqm": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>          default-scheduler                 error while running "VolumeBinding" filter plugin for pod "opensync-wifi-controller-opensync-mqtt-broker-fbbd69676-bmqqm": pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         <unknown>          default-scheduler                 Successfully assigned connectus/opensync-wifi-controller-opensync-mqtt-broker-fbbd69676-bmqqm to rahulk8node1-virtualbox
  Warning  FailedMount       31s (x7 over 62s)  kubelet, rahulk8node1-virtualbox  MountVolume.NewMounter initialization failed for volume "pvc-dc52b290-ae86-4cb3-aad0-f2c806a23114" : endpoints "glusterfs-dynamic-dc52b290-ae86-4cb3-aad0-f2c806a23114" not found
  Warning  FailedMount       30s (x7 over 62s)  kubelet, rahulk8node1-virtualbox  MountVolume.NewMounter initialization failed for volume "pvc-735baedf-323b-47bc-9383-952e6bc5ce3e" : endpoints "glusterfs-dynamic-735baedf-323b-47bc-9383-952e6bc5ce3e" not found
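
(For reference, the steps from that issue amount to retaining the PV, clearing its stale claimRef, and pre-binding a new PVC to it by name. A sketch using one of the PV names from the events above; the claim name and size are placeholders:)

  # Keep the PV around when its old PVC is deleted
  kubectl patch pv pvc-dc52b290-ae86-4cb3-aad0-f2c806a23114 \
    -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

  # Drop the stale claimRef so the PV goes from Released back to Available
  kubectl patch pv pvc-dc52b290-ae86-4cb3-aad0-f2c806a23114 \
    -p '{"spec":{"claimRef":null}}'

  # New PVC pre-bound to the existing PV via spec.volumeName
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: mqtt-broker-data            # placeholder name
    namespace: connectus
  spec:
    storageClassName: glusterfs-storage
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi                  # placeholder size
    volumeName: pvc-dc52b290-ae86-4cb3-aad0-f2c806a23114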

Apparently, besides the PV, we would also need to persist the glusterfs-dynamic endpoints and the glusterfs-dynamic service. However, these are created in the pod's namespace, and since the namespace is removed as part of helm del, the endpoints and svc get deleted along with it.
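
(In principle the endpoints and service could be recreated by hand in the fresh namespace before re-installing the release. A sketch of what they look like for a dynamically provisioned GlusterFS volume; the names must match what the FailedMount events above refer to, and the Gluster node IPs are placeholders for your own:)

  apiVersion: v1
  kind: Endpoints
  metadata:
    # must match the endpoints name the volume refers to
    name: glusterfs-dynamic-dc52b290-ae86-4cb3-aad0-f2c806a23114
    namespace: connectus
  subsets:
    - addresses:
        - ip: 192.168.10.11   # placeholder: Gluster node IP
        - ip: 192.168.10.12   # placeholder: Gluster node IP
      ports:
        - port: 1
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: glusterfs-dynamic-dc52b290-ae86-4cb3-aad0-f2c806a23114
    namespace: connectus
  spec:
    ports:
      - port: 1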

I looked around other pages related to the missing GlusterFS endpoint, e.g. https://github.com/openshift/origin/issues/6331, but that does not apply to the current version of the storage class. When I added endpoint: "heketi-storage-endpoints" to the storage class parameters, I got the following error when creating the PVC:

Failed to provision volume with StorageClass "glusterfs-storage": invalid option "endpoint" for volume plugin kubernetes.io/glusterfs

This option was removed in 2016 - see https://github.com/gluster/gluster-kubernetes/issues/87.

Is there any way to reuse an existing PV from a new PVC?


Solution

  • I would like to suggest a different approach.

    You can use this annotation on the PVC; it tells Helm to skip deleting the resource on helm delete:

    helm.sh/resource-policy: "keep"
    

    Here is an example:

    {{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: {{ template "bitcoind.fullname" . }}
      annotations:
        "helm.sh/resource-policy": keep
      labels:
        app: {{ template "bitcoind.name" . }}
        chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
        release: "{{ .Release.Name }}"
        heritage: "{{ .Release.Service }}"
    spec:
      accessModes:
        - {{ .Values.persistence.accessMode | quote }}
      resources:
        requests:
          storage: {{ .Values.persistence.size | quote }}
    {{- if .Values.persistence.storageClass }}
    {{- if (eq "-" .Values.persistence.storageClass) }}
      storageClassName: ""
    {{- else }}
      storageClassName: "{{ .Values.persistence.storageClass }}"
    {{- end }}
    {{- end }}
    {{- end }} 
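
    Note the (not .Values.persistence.existingClaim) guard on the first line: when an existing claim is supplied, the chart skips creating a new PVC altogether and mounts the named claim instead (see the volume snippet further below).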
    

    You can also drive this through chart parameters, as seen here, where they implemented a flag (true or false) that you can set while installing your helm chart:

    persistence.annotations."helm.sh/resource-policy"
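
    In the chart's templates that usually renders something like this (a sketch; the exact values layout is an assumption about how such a chart is structured):

    {{- with .Values.persistence.annotations }}
      annotations:
    {{ toYaml . | indent 4 }}
    {{- end }}

    The annotation key can then be set at install time, with its dots escaped:

    helm install --set 'persistence.annotations.helm\.sh/resource-policy=keep' stable/myapp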
    

    You can also include a configurable parameter to set the name of the PVC you want to reuse, as seen here.

    In this example you can set persistence.existingClaim=mysql-pvc during your chart install.
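
    Under the hood, a chart like that typically wires the value into the pod spec along these lines (a sketch; "mysql.fullname" is a placeholder template name):

    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: {{ .Values.persistence.existingClaim | default (include "mysql.fullname" .) }}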

    So, mixing everything together, your helm install should look something like this:

    helm install --namespace myapp --set persistence.existingClaim=mysql-pvc stable/myapp