
Kubernetes kind: sudo kind delete cluster fails with "failed to lock config file: open /root/.kube/config.lock: read-only file system" but no such file exists


I'm testing k8s using kind.

I created a cluster:

vagrant@vagrant:~$ sudo kind get clusters
nodes-test

Now I'd like to delete this cluster with sudo kind delete cluster, but I'm getting:

Deleting cluster "kind" ...
failed to update kubeconfig: failed to lock config file: open /root/.kube/config.lock: read-only file system
ERROR: failed to delete cluster "kind": failed to lock config file: open /root/.kube/config.lock: read-only file system

But when I go to that path, the lock file isn't there:

vagrant@vagrant:~$ sudo su
root@vagrant:/home/vagrant# cd /root
root@vagrant:~# cd .kube/
root@vagrant:~/.kube# ls -la
total 20
drwxr-xr-x 3 root root 4096 Sep  8 12:07 .
drwx------ 6 root root 4096 Sep  8 12:26 ..
drwxr-x--- 4 root root 4096 Sep  8 12:00 cache
-rw------- 1 root root 5622 Sep  8 11:59 config
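
The missing lock file is expected: kind creates /root/.kube/config.lock on the fly while updating the kubeconfig, and here the create itself is what fails, so the file never appears on disk. A quick way to check whether the backing filesystem really is read-only (assuming findmnt from util-linux is available in the guest):

# show the mount options of the filesystem that contains /root/.kube
sudo findmnt -no OPTIONS -T /root/.kube
# an "ro" flag in the output confirms the filesystem is mounted read-only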

The config file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ss==
    server: https://127.0.0.1:44155
  name: kind-nodes-test
contexts:
- context:
    cluster: kind-nodes-test
    user: kind-nodes-test
  name: kind-nodes-test
current-context: kind-nodes-test
kind: Config
preferences: {}
users:
- name: kind-nodes-test
  user:
    client-certificate-data: sss
    client-key-data: sdasd=
The output of ss -tulnp shows that port 44155 from the kubeconfig is still bound by docker-proxy:

Netid State  Recv-Q Send-Q  Local Address:Port   Peer Address:Port  Process
udp   UNCONN 0      0       127.0.0.53%lo:domain      0.0.0.0:*    users:(("systemd-resolve",pid=10281,fd=13))
udp   UNCONN 0      0      10.0.2.15%eth0:bootpc      0.0.0.0:*    users:(("systemd-network",pid=10265,fd=18))
tcp   LISTEN 0      4096    127.0.0.53%lo:domain      0.0.0.0:*    users:(("systemd-resolve",pid=10281,fd=14))
tcp   LISTEN 0      128           0.0.0.0:ssh         0.0.0.0:*    users:(("sshd",pid=711,fd=3))
tcp   LISTEN 0      4096        127.0.0.1:44155       0.0.0.0:*    users:(("docker-proxy",pid=1109,fd=4))
tcp   LISTEN 0      128              [::]:ssh            [::]:*    users:(("sshd",pid=711,fd=4))          

Update

Also, when invoking sudo kind delete cluster --name nodes-test, I get:

vagrant@vagrant:~$ sudo kind delete cluster --name nodes-test
Deleting cluster "nodes-test" ...
failed to update kubeconfig: failed to lock config file: open /root/.kube/config.lock: read-only file system
ERROR: failed to delete cluster "nodes-test": failed to delete nodes: command "docker rm -f -v nodes-test-worker nodes-test-control-plane" failed with error: exit status 1
Command Output: Error response from daemon: container 4cf8b399150fffb9899ff963adacd255fe3b0b7aebdf6feda11ea1646b541340: driver "overlay2" failed to remove root filesystem: unlinkat /var/lib/docker/overlay2/1ba0d047b313dabc396f61cf1f2b3086d5cad05372de37410326d81e02f6130b: read-only file system
Error response from daemon: container 8b2fa50dc006191add9b64e89b834a0c952344433e61d1b9795b5c2470bf3e05: driver "overlay2" failed to remove root filesystem: unlinkat /var/lib/docker/overlay2/3a35de9a7ad18fbf3358c0be3d01a51ba7c7558ed49242d659a0e365c58798ae: read-only file system
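
Those Docker errors point at the same underlying problem as the kind error: the filesystem holding /var/lib/docker (and /root/.kube) has gone read-only, which on a Vagrant/VirtualBox guest typically means the kernel remounted it after an I/O error on the virtual disk. A minimal check-and-recover sketch, assuming a standard Ubuntu guest:

# check whether the kernel remounted the filesystem read-only
sudo dmesg | grep -iE 'remount|read-only|i/o error'
# inspect the current mount flags of the root filesystem
findmnt -no OPTIONS /
# try remounting read-write; if this fails, reboot the VM so fsck can repair the disk
sudo mount -o remount,rw /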

Solution

  • When you run the command to get your clusters, it lists their names:

    $ sudo kind get clusters
    nodes-test
    

    So your cluster is named nodes-test, but sudo kind delete cluster without a flag only tries to delete the cluster with the default name, kind. That is why the output says Deleting cluster "kind" .... You need to pass the cluster name to the delete command; the documentation for kind delete cluster spells this out:

    If the flag --name is not specified, kind will use the default cluster context name kind and delete that cluster.

    Note: By design, requesting to delete a cluster that does not exist will not return an error. This is intentional and is a means to have an idempotent way of cleaning up resources.

    You need to specify your cluster name. Try this: sudo kind delete cluster --name nodes-test
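
Once the read-only filesystem from the update is fixed, a quick end-to-end check (the exact "no clusters" message may vary by kind version):

    sudo kind delete cluster --name nodes-test
    sudo kind get clusters

The first command should print Deleting cluster "nodes-test" ... and exit cleanly; the second should report no remaining clusters.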