Tags: kubernetes, kubernetes-pod, kubernetes-service, k3s, kubernetes-apiserver

How to start K3s server after running k3s-killall.sh script


I had a K3s cluster with the following pods running:

kube-system   pod/calico-node-xxxx                          
kube-system   pod/calico-kube-controllers-xxxxxx   
kube-system   pod/metrics-server-xxxxx
kube-system   pod/local-path-provisioner-xxxxx
kube-system   pod/coredns-xxxxx
xyz-system    pod/some-app-xxx
xyz-system    pod/some-app-db-xxx

I wanted to stop all of the K3s pods and reset the containerd state, so I ran the /usr/local/bin/k3s-killall.sh script. All pods stopped (at least I could no longer see anything in watch kubectl get all -A except the message The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?).
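Roughly, the sequence that got me into this state was:

  sudo /usr/local/bin/k3s-killall.sh   # stops all k3s pods/containers and resets containerd state
  watch kubectl get all -A             # now only shows: The connection to the server 127.0.0.1:6443 was refused ...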

Can someone tell me how to start the k3s server back up? Now, whenever I run kubectl get all -A, I get the message The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

PS:

  • When I ran the k3s server command, for a fraction of a second I could see the same pods (with the same pod IDs) that I mentioned above while the command was running. After a few seconds, the command exited and the same message (The connection to the...) started displaying again.

Does this mean that k3s-killall.sh has not deleted my pods, since it is showing the same pods with the same IDs (like pod/some-app-xxx)?


Solution

    1. I think you need to restart K3s via systemd if you want your cluster back after the kill. Try the command:
      sudo systemctl restart k3s
      This is supported by the installation script for both systemd and openrc; see the Rancher documentation. A sketch of the full sequence is shown after this list.

    2. The pod-xxx IDs will remain the same because k3s-killall.sh does not uninstall k3s (you can verify this: after running k3s-killall.sh, k3s -v still returns output); it only restarts the pods with the same images. The RESTARTS column will show an increased count for all pods (see the second sketch below).
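A minimal sketch of step 1, assuming k3s was installed with the standard install script (which registers the k3s service with systemd):

  # Restart the k3s service registered by the install script
  sudo systemctl restart k3s

  # On openrc-based systems the equivalent would be:
  # sudo rc-service k3s restart

  # After a few seconds the API server should answer again
  kubectl get all -A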
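And a quick check for point 2, showing that the killall script does not uninstall k3s and that the same pods come back rather than new ones:

  # The k3s binary is still installed, so this prints a version even right after the killall script
  k3s -v

  # After the restart, the same pod names reappear and the RESTARTS column increments
  kubectl get pods -A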