Tags: kubernetes, backup, nfs, rancher

K8s NFS static PV full: how to update the PVC/PV to use a new backup NFS static PV without reinstalling everything


Question:

I have an Elasticsearch cluster using a static NFS PV. How can I move that data to another server that has more space (I already did this by just copying it) and have the application use the new backup data without reinstalling everything? I use Rancher on CentOS 7 (without vSphere).

What I Tried:

  • Updating the PV's NFS path and storage limit to point at the new backup server, but k8s doesn't allow it.
  • Updating the existing PVC to use a new PV on the new backup server, but k8s still doesn't allow it.

Error message:

# persistentvolumeclaims "data-core-container-log-elasticsearch-data-0" was not valid:
# * spec: Forbidden: is immutable after creation except resources.requests for bound claims


Solution

  • There might be several solutions; this is what worked for me, taking into account everything you mentioned in the scenario above.

    Take a StatefulSet with an NFS volume claim pointing at 10.20.4.101, and assume the NFS drive has filled up. I relocated the data by copying it all to another VM, 10.20.4.102.

    Now, while keeping the old configuration alive, I created a new PV pointing at 10.20.4.102 in the same namespace, with a different label than the original one, something like this:

    New Settings
    
    metadata:
      name: my-pv-1
      namespace: pv-test
      labels:
        volume-type: pv-1
    
    Old Settings
    
    metadata:
      name: my-pv
      namespace: pv-test
      labels:
        volume-type: pv
    

    This creates a new PV in your namespace; you can see it with kubectl get pv, showing status Available while the other one shows Bound.
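
    For reference, a complete manifest for the new PV could look like the sketch below; the capacity, access mode, and export path are assumptions, so adjust them to your environment:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-pv-1
      namespace: pv-test
      labels:
        volume-type: pv-1
    spec:
      capacity:
        storage: 100Gi                     # assumed size; match your data
      accessModes:
        - ReadWriteMany                    # assumed; must match what the claim asks for
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: 10.20.4.102                # the new backup NFS server
        path: /exports/elastic             # assumed export path on the new server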

    Now update your StatefulSet YAML: change the volume-type field to the new value, the same as the label on the new PV, and also change the name in volumeClaimTemplates to something new. Do not apply these settings yet.

    New
    
    volumeClaimTemplates:
      - metadata:
          name: pv-data-1
          namespace: pv-test
        spec:
          selector:
            matchLabels:
              volume-type: pv-1
    
    Old
    
    volumeClaimTemplates:
      - metadata:
          name: pv-data
          namespace: pv-test
        spec:
          selector:
            matchLabels:
              volume-type: pv
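
    Note that the snippets above only show the fields that change; a complete volumeClaimTemplates entry also needs an access mode and a storage request that the new PV can satisfy. A minimal sketch, with the size and mode as assumptions:

    volumeClaimTemplates:
      - metadata:
          name: pv-data-1
        spec:
          accessModes:
            - ReadWriteMany                # assumed; must match the PV
          resources:
            requests:
              storage: 100Gi               # assumed; must not exceed the PV capacity
          selector:
            matchLabels:
              volume-type: pv-1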
    

    As you surely know, you cannot apply this directly; it would throw an error something like Forbidden: updates to statefulset spec for fields other than `replicas`, `template` and `updateStrategy` are forbidden.

    Now you can either delete and recreate the whole StatefulSet, with a slight downtime, or you can use this small trick with the --cascade=false flag:

    kubectl delete statefulset mystatefulset -n pv-test --cascade=false
    

    This deletes just the StatefulSet, not the pods within it; if you keep a watch on all the resources in the namespace, you will see the StatefulSet go away while the pods and other resources remain, so the applications keep running and stay accessible.
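
    On newer kubectl versions (v1.20 and later) the boolean form of --cascade is deprecated; the equivalent command there is:

    kubectl delete statefulset mystatefulset -n pv-test --cascade=orphan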

    Now apply the updated StatefulSet; this creates the new StatefulSet with a different PVC template, but your pods will still be referring to the old PVC and PV.
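
    Assuming the updated manifest is saved as mystatefulset.yaml (the file name is just an example), the apply step is simply:

    # file name is an example; use wherever you keep the StatefulSet manifest
    kubectl apply -f mystatefulset.yaml -n pv-test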

    Now just delete the pod using kubectl delete pod mypod-0 -n pv-test

    This deletes the pod, but in the background the StatefulSet creates a new pod with the new PVC. Now if you run kubectl get pv and kubectl get pvc, you will see an additional PVC, and the PV that was Available will turn to Bound, claimed by that PVC.
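
    One way to verify the rebinding after the pod comes back (PVs are cluster-scoped, so no namespace flag is needed for them):

    kubectl get pv
    kubectl get pvc -n pv-test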

    Manually delete all the pods one by one, and the StatefulSet takes care of recreating them. After everything is done, manually delete the old PVC first and then the old PV.
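
    For the final cleanup, assuming the old claim created from the original template is named pv-data-mystatefulset-0 (StatefulSet claims are named <template-name>-<statefulset-name>-<ordinal>; check kubectl get pvc for the real name), the commands would look like:

    # example names from this walkthrough; check kubectl get pvc / kubectl get pv for the real ones
    kubectl delete pvc pv-data-mystatefulset-0 -n pv-test
    kubectl delete pv my-pv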

    You might have tried all of this already and know it all; I just wrote out all the steps here to keep things clear rather than explaining vaguely.

    Hope this is helpful.