Tags: linux, kubernetes, server, k3s

K3s never loads containers


I got this in my log; it keeps repeating forever.

# journalctl -u k3s full output

systemd[1]: Started Lightweight Kubernetes.
s[1859051]: I0607 1859051 tlsconfig.go:240] "Starting DynamicServingCertificateController"
s[1859051]: I0607 1859051 autoregister_controller.go:141] Starting autoregister controller
s[1859051]: I0607 1859051 controller.go:83] Starting OpenAPI AggregationController
s[1859051]: I0607 1859051 apf_controller.go:317] Starting API Priority and Fairness config controller
s[1859051]: I0607 1859051 dynamic_serving_content.go:131] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
s[1859051]: I0607 1859051 cache.go:32] Waiting for caches to sync for autoregister controller
s[1859051]: I0607 1859051 apiservice_controller.go:97] Starting APIServiceRegistrationController
s[1859051]: I0607 1859051 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
s[1859051]: I0607 1859051 available_controller.go:491] Starting AvailableConditionController
s[1859051]: I0607 1859051 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
s[1859051]: I0607 1859051 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
s[1859051]: I0607 1859051 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
s[1859051]: I0607 1859051 customresource_discovery_controller.go:209] Starting DiscoveryController
...
...
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
s[1859051]: W0607 1859051 handler_proxy.go:104] no RequestInfo found in the context
s[1859051]: E0607 1859051 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
s[1859051]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
s[1859051]: I0607 1859051 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
s[1859051]: certificate CN=<my-hostname> signed by CN=k3s-server-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: certificate CN=system:node:<my-hostname>,O=system:nodes signed by CN=k3s-client-ca@1653867140: notBefore=<recent_date> notAfter=<recent_date>
s[1859051]: Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: no such device
s[1859051]: Waiting for control-plane node <my-hostname> startup: nodes "<my-hostname>" not found
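The recurring line worth focusing on is `failed to mount overlay: no such device`: the running kernel cannot provide the overlay filesystem. A common cause is a kernel package upgrade without a reboot, which removes the module tree for the old, still-running kernel. A minimal check (a sketch using only plain shell, no k3s-specific tooling assumed):

```shell
#!/bin/sh
# Check whether module files for the *running* kernel are still installed.
# After a kernel upgrade without a reboot, /lib/modules/$(uname -r) is
# typically gone, so modules like "overlay" can no longer be loaded.
running="$(uname -r)"
if [ -d "/lib/modules/$running" ]; then
    echo "modules for running kernel ($running) are present"
else
    echo "no modules for running kernel ($running) -- reboot into the new kernel"
fi
```

You can also check `grep overlay /proc/filesystems` to see whether the running kernel currently supports overlayfs at all.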

No matter how long I wait, I always get the same output when listing Kubernetes resources with # kubectl get all -A

NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   pod/helm-install-traefik-crd-8dffs            0/1     Pending   0          12d
kube-system   pod/helm-install-traefik-hph8c                0/1     Pending   0          12d
kube-system   pod/metrics-server-7cd5fcb6b7-m74qr           0/1     Pending   0          12d
kube-system   pod/local-path-provisioner-6c79684f77-mb5nq   0/1     Pending   0          12d
kube-system   pod/coredns-d76bd69b-p76ds                    0/1     Pending   0          12d

NAMESPACE     NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes       ClusterIP   10.43.0.1      <none>        443/TCP                  12d
kube-system   service/kube-dns         ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   12d
kube-system   service/metrics-server   ClusterIP   10.43.106.41   <none>        443/TCP                  12d

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns                  0/1     1            0           12d
kube-system   deployment.apps/local-path-provisioner   0/1     1            0           12d
kube-system   deployment.apps/metrics-server           0/1     1            0           12d

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/metrics-server-7cd5fcb6b7           1         1         0       12d
kube-system   replicaset.apps/coredns-d76bd69b                    1         1         0       12d
kube-system   replicaset.apps/local-path-provisioner-6c79684f77   1         1         0       12d

NAMESPACE     NAME                                 COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik-crd   0/1           12d        12d
kube-system   job.batch/helm-install-traefik       0/1           12d        12d

I also tried installing containerd and lxc on the host machine:

# pacman -S lxc containerd

Solution

  • I had installed some kernel updates without restarting the machine afterwards. I assumed it would keep working just fine without a reboot.

    But

    Restarting the machine solved it.
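For anyone who cannot reboot right away: the error message itself hints at a workaround. K3s accepts a `--snapshotter` flag, and its config file mirrors CLI flags as keys, so switching to the `native` (or `fuse-overlayfs`) snapshotter should let the bundled containerd start without overlay support. A sketch, assuming the default config path `/etc/rancher/k3s/config.yaml`:

```yaml
# /etc/rancher/k3s/config.yaml -- key mirrors the --snapshotter CLI flag
snapshotter: native   # or: fuse-overlayfs (requires the fuse-overlayfs package)
```

Then restart the service with `systemctl restart k3s`. Note this only works around the missing overlay module; rebooting into the new kernel remains the real fix. Also, installing the distro's containerd package (as tried above) does not affect k3s, since k3s ships its own embedded containerd.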