Tags: kubernetes, kubernetes-helm, persistent-volumes

Longhorn Volume stuck in Status deleting


So I am running a k3s cluster on three RHEL 8 servers, and I wanted to uninstall Longhorn from the cluster using helm uninstall longhorn -n longhorn-system

Now all Longhorn pods, PVCs, etc. were deleted, but one volume remained and is stuck in the state deleting. Here is some additional info about the volume:

Name:         pvc-f1df1bf8-96f4-4b28-a14d-2b20809610df
Namespace:    longhorn-system
Labels:       longhornvolume=pvc-f1df1bf8-96f4-4b28-a14d-2b20809610df
              recurring-job-group.longhorn.io/default=enabled
              setting.longhorn.io/remove-snapshots-during-filesystem-trim=ignored
              setting.longhorn.io/replica-auto-balance=ignored
              setting.longhorn.io/snapshot-data-integrity=ignored
Annotations:  <none>
API Version:  longhorn.io/v1beta2
Kind:         Volume
Metadata:
  Creation Timestamp:             2023-08-21T07:31:56Z
  Deletion Grace Period Seconds:  0
  Deletion Timestamp:             2023-08-24T09:32:05Z
  Finalizers:
    longhorn.io
  Generation:        214
  Resource Version:  7787140
  UID:               6ffb214d-8ed7-4b7b-910e-a2936b764223
Spec:
  Standby:           false
  Access Mode:       rwo
  Backing Image:
  Base Image:
  Data Locality:     disabled
  Data Source:
  Disable Frontend:  false
  Disk Selector:
  Encrypted:          false
  Engine Image:       longhornio/longhorn-engine:v1.4.1
  From Backup:
  Frontend:           blockdev
  Last Attached By:
  Migratable:         false
  Migration Node ID:
  Node ID:
  Node Selector:
  Number Of Replicas:  3
  Recurring Jobs:
  Replica Auto Balance:           ignored
  Restore Volume Recurring Job:   ignored
  Revision Counter Disabled:      false
  Size:                           4294967296
  Snapshot Data Integrity:        ignored
  Stale Replica Timeout:          30
  Unmap Mark Snap Chain Removed:  ignored
Status:
  Actual Size:  0
  Clone Status:
    Snapshot:
    Source Volume:
    State:
  Conditions:
    Last Probe Time:
    Last Transition Time:  2023-08-21T07:31:57Z
    Message:
    Reason:
    Status:                False
    Type:                  toomanysnapshots
    Last Probe Time:
    Last Transition Time:  2023-08-21T07:31:57Z
    Message:
    Reason:
    Status:                True
    Type:                  scheduled
    Last Probe Time:
    Last Transition Time:  2023-08-21T07:31:57Z
    Message:
    Reason:
    Status:                False
    Type:                  restore
  Current Image:           longhornio/longhorn-engine:v1.4.1
  Current Node ID:
  Expansion Required:      false
  Frontend Disabled:       false
  Is Standby:              false
  Kubernetes Status:
    Last PVC Ref At:  2023-08-24T09:32:04Z
    Last Pod Ref At:  2023-08-24T09:24:48Z
    Namespace:        backend
    Pv Name:
    Pv Status:
    Pvc Name:         pvc-longhorn-db
    Workloads Status:
      Pod Name:          wb-database-deployment-8685cbdcfc-2dfs2
      Pod Status:        Failed
      Workload Name:     wb-database-deployment-8685cbdcfc
      Workload Type:     ReplicaSet
  Last Backup:
  Last Backup At:
  Last Degraded At:
  Owner ID:              node3
  Pending Node ID:
  Remount Requested At:  2023-08-24T09:23:55Z
  Restore Initiated:     false
  Restore Required:      false
  Robustness:            unknown
  Share Endpoint:
  Share State:
  State:                 deleting
Events:                  <none>

I tried to remove the finalizers, but that didn't help. Does anyone have an idea why this volume can't be deleted?
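
To be concrete, the finalizer removal I tried was along these lines (a sketch; the volume name is taken from the describe output above):

    # Clear the longhorn.io finalizer on the Volume custom resource
    kubectl -n longhorn-system patch volumes.longhorn.io \
      pvc-f1df1bf8-96f4-4b28-a14d-2b20809610df \
      --type merge -p '{"metadata":{"finalizers":[]}}'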


Solution

  • If you deleted the PVC and also ran the finalizer-removal command, but the volume is still stuck, it means some resources associated with this PV are still running, possibly in a different namespace. First, run these commands again to make sure the finalizers are actually removed:

    kubectl patch pv <pv_name> -p '{"metadata":{"finalizers":null}}'
    
    and then
    
    kubectl delete pv <pv_name> --grace-period=0 --force 
    

    If the PV is still not deleted, check which resources reference it with this command:

    # List every workload (bare pods, controllers, and cronjobs) that mounts the PVC.
    # Note that pods keep volumes under .spec.volumes and cronjobs under
    # .spec.jobTemplate.spec.template.spec.volumes, so all three paths are checked.
    PVC_NAME="<pvc-name>"; kubectl get pods,deployments,statefulsets,daemonsets,replicasets,jobs,cronjobs --all-namespaces -o json \
      | jq -r --arg PVC "$PVC_NAME" '.items[]
        | select(any(
            (.spec.volumes[]?,
             .spec.template.spec.volumes[]?,
             .spec.jobTemplate.spec.template.spec.volumes[]?);
            .persistentVolumeClaim.claimName == $PVC))
        | .metadata.namespace + "/" + .metadata.name + " (" + .kind + ")"'
    

    This will print each resource that still references the PVC (as namespace/name with its kind). Delete those resources manually; once they are gone, the PV will be deleted. I hope this helps.
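
    In this case, for example, the Kubernetes Status section of the describe output points at a workload in the backend namespace. A sketch of that cleanup, assuming the owning Deployment is named wb-database-deployment (inferred from the ReplicaSet name, so it may differ):

    # Hypothetical name: delete the Deployment that still references the PVC
    kubectl -n backend delete deployment wb-database-deployment

    # Then confirm the PV is actually gone
    kubectl get pv <pv_name>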