kubernetes, google-kubernetes-engine, kubernetes-helm

nfs-server-provisioner specify volume name


When using nfs-server-provisioner is it possible to set a specific persistent volume for the NFS provisioner?

At present, I'm setting the Storage Class to use via helm:

helm install stable/nfs-server-provisioner \
--namespace <chart-name>-helm \
--name <chart-name>-nfs \
--set persistence.enabled=true \
--set persistence.storageClass=slow \
--set persistence.size=25Gi \
--set storageClass.name=<chart-name>-nfs \
--set storageClass.reclaimPolicy=Retain

and the Storage Class is built via:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  type: pd-standard
  replication-type: none

This then generates the PV dynamically when requested by a PVC.

I'm using the PV to store files for a stateful CMS; using NFS allows multiple pods to connect to the same file store.
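
For reference, the pods claim storage from that class with a PVC along these lines (the claim name and size here are only illustrative, not taken from my actual manifests):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cms-files
spec:
  storageClassName: <chart-name>-nfs
  accessModes:
    - ReadWriteMany # NFS lets several pods mount the same file store
  resources:
    requests:
      storage: 10Gi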

What I'd like to do now is move all those images to a new set of pods on a new cluster. Rather than backing them up and going through the process of dynamically generating a PV and restoring the files to it, is it possible to retain the current PV and then connect the new PVC to it?


Solution

  • When using nfs-server-provisioner is it possible to set a specific persistent volume for the NFS provisioner?

    If the question is whether it's possible to retain the data from the existing old PV that backed the old NFS server and then use it with a new NFS server, the answer is yes.

    I've managed to find a way to do it. Please remember that this is only a workaround.

    Steps:

    • Create a snapshot of the existing old nfs-server storage.
    • Create a new disk using the previously created snapshot as the source.
    • Create a PV and a PVC for the newly created nfs-server.
    • Pull the nfs-server-provisioner helm chart and edit it.
    • Spawn the edited nfs-server-provisioner helm chart.
    • Create new PVs and PVCs with the nfs-server-provisioner storageClass.
    • Attach the newly created PVCs to the workload.

    Please remember that this solution shows how to create PVs and PVCs for a new workload manually.

    I've included the whole process below.


    Create a snapshot of the existing old nfs-server storage.

    Assuming that you created your old nfs-server with a gce-pd, you can access this disk via the GCP Cloud Console to make a snapshot of it.

    I've included here a safer approach, which consists of creating a copy of the gce-pd that holds the old nfs-server's data. This copy will be used by the new nfs-server.

    It is also possible to change the persistentVolumeReclaimPolicy on the existing old PV so that it is not deleted when the PVC of the old nfs-server is deleted. That way, you could reuse the existing disk in the new nfs-server.
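
    For reference, that reclaim-policy change can be applied to the old PV with a patch along these lines (the PV name is a placeholder):

    • $ kubectl patch pv <old-nfs-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'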

    Please refer to the official documentation on how to create a snapshot of a persistent disk in GCP.
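
    As a sketch, with the gcloud CLI this could look like the following (the disk, snapshot and zone names are placeholders):

    • $ gcloud compute disks snapshot old-nfs-data-disk --snapshot-names=old-nfs-data-snapshot --zone=europe-west3-c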


    Create a new disk using the previously created snapshot as the source

    You will need to create a new gce-pd disk for your new nfs-server. The snapshot created earlier will be the source for this new disk.

    Please refer to the official documentation on how to create a new disk from an existing snapshot.
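
    Again as a sketch with gcloud (names and zone are placeholders; the new disk name should match the pdName used in the PV manifest below):

    • $ gcloud compute disks create old-data-disk --source-snapshot=old-nfs-data-snapshot --type=pd-standard --zone=europe-west3-c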


    Create a PV and a PVC for the newly created nfs-server

    To ensure that the GCP disk will be bound to the newly created nfs-server, you will need to create a PV and a PVC. You can use the example below, but please change it according to your use case:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: data-disk
    spec:
      storageClassName: standard
      capacity:
        storage: 25G
      accessModes:
        - ReadWriteOnce
      claimRef:
        namespace: default
        name: data-disk-pvc # reference to the PVC below
      gcePersistentDisk:
        pdName: old-data-disk # name of the disk created from snapshot in GCP
        fsType: ext4 
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-disk-pvc # reference to the PV above
    spec:
      storageClassName: standard
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 25G
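
    Assuming the manifests above are saved to a file (the filename below is arbitrary), you can apply them and check that the PV and PVC bind to each other:

    • $ kubectl apply -f nfs-server-data-disk.yaml
    • $ kubectl get pv data-disk
    • $ kubectl get pvc data-disk-pvc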
    

    The way this nfs-server works is that it creates a disk in the GCP infrastructure to store all the data saved on the nfs-server. Further creation of PVs and PVCs with the nfs storageClass will result in the creation of a folder in the /export directory inside the nfs-server pod.
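
    If you want to have a look at that directory, you can exec into the nfs-server pod; the pod name below is only an assumption based on the release name and the chart's StatefulSet naming, so adjust it to your setup:

    • $ kubectl exec -it <release-name>-nfs-server-provisioner-0 -- ls /export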


    Pull the nfs-server-provisioner helm chart and edit it

    You will need to pull the Helm chart of the nfs-server-provisioner, as it requires reconfiguration. You can do it by invoking the command below:

    • $ helm pull --untar stable/nfs-server-provisioner

    The following changes are needed in the templates/statefulset.yaml file:

    • Delete the parts that handle persistence via .Values.persistence.enabled (shown below). These parts are responsible for creating the storage which you already have:
          {{- if not .Values.persistence.enabled }}
          volumes:
            - name: data
              emptyDir: {}
          {{- end }}
    
      {{- if .Values.persistence.enabled }}
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes: [ {{ .Values.persistence.accessMode | quote }} ]
            {{- if .Values.persistence.storageClass }}
            {{- if (eq "-" .Values.persistence.storageClass) }}
            storageClassName: ""
            {{- else }}
            storageClassName: {{ .Values.persistence.storageClass | quote }}
            {{- end }}
            {{- end }}
            resources:
              requests:
                storage: {{ .Values.persistence.size | quote }}
      {{- end }}

    In their place, add a static volumes definition to the StatefulSet pod spec that points to the PVC created earlier (data-disk-pvc):

          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: data-disk-pvc # name of the pvc created from the disk
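
    Optionally, you can render the edited chart locally to double-check that the static claim is now referenced by the StatefulSet (just a sanity check, run from the chart directory; the release name matches the one used in the next step):

    • $ helm template kruk-nfs . | grep -B 2 -A 1 claimName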
    

    Spawn the edited nfs-server-provisioner helm chart

    You will need to install this Helm chart from the local directory instead of from the remote repository. The command to run it is as follows:

    • $ helm install kruk-nfs . --set storageClass.name=kruk-nfs --set storageClass.reclaimPolicy=Retain

    This syntax is specific to Helm 3.

    The above parameters are necessary to specify the name of the storageClass as well as its reclaimPolicy.
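
    After the installation you can verify that the release, its pod and the new storageClass exist, for example:

    • $ helm list
    • $ kubectl get pods
    • $ kubectl get storageclass kruk-nfs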


    Create new PVs and PVCs with the nfs-server-provisioner storageClass

    Below is an example of creating a PV and a PVC linked to an existing folder in the nfs-server.

    Assuming that the /export directory looks like this:

    bash-5.0# ls                 
    ganesha.log
    lost+found  
    nfs-provisioner.identity  
    pvc-2c16cccb-da67-41da-9986-a15f3f9e68cf # folder we will create a PV and PVC for
    v4old  
    v4recov  
    vfs.conf
    

    A tip: when you create a PVC with the storageClass of this nfs-server, it will create a folder named after that PVC.

    You will need to create a PV for your share:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-example
    spec:
      storageClassName: kruk-nfs
      capacity:
        storage: 100Mi
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      nfs:
        path: /export/pvc-2c16cccb-da67-41da-9986-a15f3f9e68cf # directory to mount pv
        server: 10.73.4.71  # clusterIP of nfs-server-pod service
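
    The server field has to point to the ClusterIP of the nfs-server's Service; you can look it up with something like the command below (the Service name depends on your release and chart, so treat it as an example):

    • $ kubectl get svc <release-name>-nfs-server-provisioner -o jsonpath='{.spec.clusterIP}'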
    

    And a PVC for your PV:

    apiVersion: "v1"
    kind: "PersistentVolumeClaim"
    metadata:
      name: pv-example-claim
    spec:
      storageClassName: kruk-nfs
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
      volumeName: pv-example # name of the PV created for a folder
    

    Attach the newly created PVCs to the workload

    The manifests above will create a PVC named pv-example-claim that makes the contents of the pvc-2c16cccb-da67-41da-9986-a15f3f9e68cf directory available for use. You can mount this PVC in a pod by following this example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: task-pv-pod
    spec:
      volumes:
        - name: storage-mounting
          persistentVolumeClaim:
            claimName: pv-example-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/storage"
              name: storage-mounting
    

    After that, you should be able to check that you have the data in the folder specified in the manifest above:

    $ kubectl exec -it task-pv-pod -- cat /storage/hello                                                                      
    hello there