Tags: kubernetes, kubectl

kubectl get nodes only intermittently working. Intermittent connection refused on port 6443?


I'm trying to get Kubernetes working on 3 of my Linux machines, setting 1 up as a control plane and the other 2 as worker nodes. I ran sudo kubeadm init on the first and the corresponding sudo kubeadm join... on the other 2 machines. I also ran the following on the control-plane machine (192.169.0.10):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

What's happening is that kubectl get nodes lists all 3 machines for some time, but after a while it starts saying that the connection was refused. I'm unable to find any resource on debugging this. Would really appreciate the help.

normal@machine-0:~$ kubectl get nodes
NAME              STATUS     ROLES           AGE   VERSION
machine-0       Ready      control-plane   12m   v1.29.0
machine-1       Ready      <none>          11m   v1.28.2
machine-2       NotReady   <none>          11m   v1.28.2
normal@machine-0:~$ kubectl get nodes
The connection to the server 192.169.0.10:6443 was refused - did you specify the right host or port?
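
When the refusals start, it's worth checking whether the API server itself went down rather than the network. A few quick checks on the control plane (the crictl command assumes a containerd-style runtime as set up by kubeadm; the IP is the one from above):

sudo ss -tlnp | grep 6443                    # is anything still listening on the API server port?
sudo crictl ps -a | grep kube-apiserver      # is the kube-apiserver container restarting?
curl -k https://192.169.0.10:6443/healthz    # probe the health endpoint directly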

I've ensured that swap is off by running sudo swapoff -a and also commenting out the swap line in /etc/fstab. I've also disabled the firewalls on all 3 machines (and on the router).
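
To double-check both of those (ufw is an assumption here, since these appear to be Ubuntu machines):

sudo swapon --show   # no output means swap is fully off
sudo ufw status      # should report "inactive"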

normal@machine-0:~$ kubectl logs -n kube-system kube-proxy-x22sp
I0123 02:12:50.525956       1 server_others.go:72] "Using iptables proxy"
I0123 02:12:50.533523       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.169.0.10"]
I0123 02:12:50.571635       1 conntrack.go:58] "Setting nf_conntrack_max" nfConntrackMax=3145728
I0123 02:12:50.585115       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0123 02:12:50.585129       1 server_others.go:168] "Using iptables Proxier"
I0123 02:12:50.586135       1 server_others.go:503] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR defined"
I0123 02:12:50.586143       1 server_others.go:529] "Defaulting to no-op detect-local"
I0123 02:12:50.586147       1 server_others.go:503] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR defined"
I0123 02:12:50.586150       1 server_others.go:529] "Defaulting to no-op detect-local"
I0123 02:12:50.586170       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0123 02:12:50.586323       1 server.go:865] "Version info" version="v1.29.1"
I0123 02:12:50.586336       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0123 02:12:50.586701       1 config.go:97] "Starting endpoint slice config controller"
I0123 02:12:50.586717       1 config.go:188] "Starting service config controller"
I0123 02:12:50.586765       1 shared_informer.go:311] Waiting for caches to sync for service config
I0123 02:12:50.586747       1 config.go:315] "Starting node config controller"
I0123 02:12:50.586805       1 shared_informer.go:311] Waiting for caches to sync for node config
I0123 02:12:50.586737       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0123 02:12:50.687512       1 shared_informer.go:318] Caches are synced for node config
I0123 02:12:50.687561       1 shared_informer.go:318] Caches are synced for service config
I0123 02:12:50.687556       1 shared_informer.go:318] Caches are synced for endpoint slice config
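
The kube-proxy log itself ends cleanly, but the kubelet log below shows the pod is actually crash-looping; the RESTARTS column makes that visible directly:

kubectl get pods -n kube-system -o wide   # check RESTARTS for kube-proxy-x22sp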

Looking at the kubelet logs:

normal@machine-0:~$ journalctl -u kubelet -n 100 --no-pager
Jan 23 02:16:16 amd-mi210-0 kubelet[521631]: E0123 02:16:16.409114  521631 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-proxy pod=kube-proxy-x22sp_kube-system(ba1740f8-f157-4c6b-802e-5db5b9509e78)\"" pod="kube-system/kube-proxy-x22sp" podUID="ba1740f8-f157-4c6b-802e-5db5b9509e78"
Jan 23 02:16:16 amd-mi210-0 kubelet[521631]: I0123 02:16:16.472675  521631 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Jan 23 02:16:16 amd-mi210-0 kubelet[521631]: I0123 02:16:16.472727  521631 container_gc.go:88] "Attempting to delete unused containers"
Jan 23 02:16:16 amd-mi210-0 kubelet[521631]: E0123 02:16:16.505981  521631 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35a643bf9149a139328b06efcddc0703c6e253a82e513bb59c25521b7aec1d8f\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")" podSandboxID="35a643bf9149a139328b06efcddc0703c6e253a82e513bb59c25521b7aec1d8f"
Jan 23 02:16:16 amd-mi210-0 kubelet[521631]: E0123 02:16:16.506015  521631 kuberuntime_gc.go:180] "Failed to stop sandbox before removing" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35a643bf9149a139328b06efcddc0703c6e253a82e513bb59c25521b7aec1d8f\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")" sandboxID="35a643bf9149a139328b06efcddc0703c6e253a82e513bb59c25521b7aec1d8f"
Jan 23 02:16:16 amd-mi210-0 kubelet[521631]: E0123 02:16:16.536464  521631 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19d0c69bd8854c454ba1b87aeb0bd3720adb23f2bdf8b131ec238d9f54d17b10\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")" podSandboxID="19d0c69bd8854c454ba1b87aeb0bd3720adb23f2bdf8b131ec238d9f54d17b10"
Jan 23 02:16:16 amd-mi210-0 kubelet[521631]: E0123 02:16:16.536498  521631 kuberuntime_gc.go:180] "Failed to stop sandbox before removing" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19d0c69bd8854c454ba1b87aeb0bd3720adb23f2bdf8b131ec238d9f54d17b10\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")" sandboxID="19d0c69bd8854c454ba1b87aeb0bd3720adb23f2bdf8b131ec238d9f54d17b10"
Jan 23 02:16:16 amd-mi210-0 kubelet[521631]: E0123 02:16:16.566121  521631 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b57736a3eef831e575f486451d28e7a01e1116f5dddf0ac5d7f8888003ebd7f2\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")" podSandboxID="b57736a3eef831e575f486451d28e7a01e1116f5dddf0ac5d7f8888003ebd7f2"
Jan 23 02:16:16 amd-mi210-0 kubelet[521631]: E0123 02:16:16.566148  521631 kuberuntime_gc.go:180] "Failed to stop sandbox before removing" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b57736a3eef831e575f486451d28e7a01e1116f5dddf0ac5d7f8888003ebd7f2\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")" sandboxID="b57736a3eef831e575f486451d28e7a01e1116f5dddf0ac5d7f8888003ebd7f2"
Jan 23 02:16:16 amd-mi210-0 kubelet[521631]: E0123 02:16:16.593910  521631 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b54a5561b808f5aa1cde1afb8756ab15d5e6bd409556d384c2ae731f480566fc\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")" podSandboxID="b54a5561b808f5aa1cde1afb8756ab15d5e6bd409556d384c2ae731f480566fc"
Jan 23 02:16:16 amd-mi210-0 kubelet[521631]: E0123 02:16:16.593936  521631 kuberuntime_gc.go:180] "Failed to stop sandbox before removing" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b54a5561b808f5aa1cde1afb8756ab15d5e6bd409556d384c2ae731f480566fc\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")" sandboxID="b54a5561b808f5aa1cde1afb8756ab15d5e6bd409556d384c2ae731f480566fc"
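
The repeated x509 "certificate signed by unknown authority" errors seem to mean the Calico CNI plugin is talking to the API server with a CA that no longer matches the cluster's. One way to confirm this (a sketch; /etc/cni/net.d/calico-kubeconfig is the path a default Calico install writes its credentials to, adjust if yours differs):

# Fingerprint of the CA the current cluster was initialized with
sudo openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -fingerprint -sha256

# Fingerprint of the CA embedded in Calico's CNI kubeconfig
sudo grep certificate-authority-data /etc/cni/net.d/calico-kubeconfig | awk '{print $2}' | base64 -d | openssl x509 -noout -fingerprint -sha256

If the two fingerprints differ, the CNI is still using state from an earlier install.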

Solution

  • Your kubelet log points at the actual problem: the Calico CNI plugin can no longer verify the API server's certificate (x509: certificate signed by unknown authority), which typically means stale certificates or CNI state left over from a previous install. The cleanest way out is a full reset and fresh install.

    To do a fresh install you can follow this: https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/

    Remember to first purge Kubernetes from your control-plane machine:

    sudo kubeadm reset
    
    # Remove all packages related to Kubernetes
    sudo apt remove -y kubeadm kubectl kubelet kubernetes-cni
    sudo apt purge -y 'kube*'
    sudo apt autoremove -y
    sudo apt clean
    
    # Remove all directories associated with Kubernetes, etcd, CNI, and Docker
    sudo rm -rf ~/.kube
    sudo rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/etcd2 /var/lib/kubelet /var/run/kubernetes
    
    # Clear the iptables rules (note the sudo after each && as well)
    sudo iptables -F && sudo iptables -X
    sudo iptables -t nat -F && sudo iptables -t nat -X
    sudo iptables -t raw -F && sudo iptables -t raw -X
    sudo iptables -t mangle -F && sudo iptables -t mangle -X

    After installation you can look at kubectl get all --all-namespaces to see whether all pods are up and running. It's important that coredns is Running!
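    
    After the cleanup, a minimal sketch of the reinstall (the pod CIDR is Calico's default and the manifest version is just one known release; treat both as assumptions to adjust for your setup):
    
    # Run the same reset/cleanup on the 2 worker nodes as well, so they can rejoin cleanly
    
    # On the control plane: re-initialize, then redo the kubeconfig setup from the question
    sudo kubeadm init --pod-network-cidr=192.168.0.0/16
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    # Install the Calico CNI
    kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
    
    # Rejoin the workers with the kubeadm join command printed by kubeadm init,
    # or regenerate it later with: sudo kubeadm token create --print-join-command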