Our Kubernetes 1.6 cluster had certificates generated when the cluster was built on April 13th, 2017.
On December 13th, 2017, our cluster was upgraded to version 1.8, and new certificates were generated [apparently, an incomplete set of certificates].
On April 13th, 2018, we started seeing this message within our Kubernetes dashboard for api-server:
[authentication.go:64] Unable to authenticate the request due to an error: [x509: certificate has expired or is not yet valid, x509: certificate has expired or is not yet valid]
Tried pointing client-certificate & client-key within /etc/kubernetes/kubelet.conf at the certificates generated on Dec 13th [apiserver-kubelet-client.crt and apiserver-kubelet-client.key], but continue to see the above error.
Tried pointing client-certificate & client-key within /etc/kubernetes/kubelet.conf at different certificates generated on Dec 13th [apiserver.crt and apiserver.key] (I honestly don't understand the difference between these two sets of certs/keys), but continue to see the above error.
Tried pointing client-certificate & client-key within /etc/kubernetes/kubelet.conf at non-existent files, and none of the kube* services would start, with /var/log/syslog complaining about this:
Apr 17 17:50:08 kuber01 kubelet[2422]: W0417 17:50:08.181326 2422 server.go:381] invalid kubeconfig: invalid configuration: [unable to read client-cert /tmp/this/cert/does/not/exist.crt for system:node:node01 due to open /tmp/this/cert/does/not/exist.crt: no such file or directory, unable to read client-key /tmp/this/key/does/not/exist.key for system:node:node01 due to open /tmp/this/key/does/not/exist.key: no such file or directory]
Any advice on how to overcome this error, or even troubleshoot it at a more granular level? I was considering regenerating certificates for api-server (kubeadm alpha phase certs apiserver), based on instructions within https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-certs ... but am not sure if I'd be doing more damage.
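To see which certificates have actually expired, the validity window of each can be printed with openssl (a quick check; /etc/kubernetes/pki is the default kubeadm path and an assumption here):

# Print the validity window of every kubeadm-generated certificate
for crt in /etc/kubernetes/pki/*.crt; do
  echo "== $crt"
  openssl x509 -in "$crt" -noout -startdate -enddate
done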
Relatively new to Kubernetes, and the gentleman who set this up is not available for consultation ... any help is appreciated. Thanks.
Each node within the Kubernetes cluster contains a config file for running kubelet ... /etc/kubernetes/kubelet.conf ... and this file is auto-generated by kubeadm. During this auto-generation, kubeadm uses /etc/kubernetes/pki/ca.key to create a node-specific file, /etc/kubernetes/kubelet.conf, within which are two very important pieces ... client-certificate-data and client-key-data. My original thought process led me to believe that I needed to find the corresponding certificate file & key file, renew those files, convert both to base64, and use those values within kubelet.conf files across the cluster ... this thinking was not correct.
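For context, those embedded values are just base64-encoded PEM, so the certificate baked into kubelet.conf can be inspected directly (a quick sketch, assuming the standard layout of a kubeadm-generated file):

# Decode and inspect the client certificate embedded in kubelet.conf
grep 'client-certificate-data' /etc/kubernetes/kubelet.conf \
  | awk '{print $2}' | base64 -d \
  | openssl x509 -noout -subject -startdate -enddate

This is a handy way to confirm whether the cert inside a given config file is the one that expired.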
Instead, the fix was to use kubeadm to regenerate kubelet.conf on all nodes, as well as admin.conf, controller-manager.conf, and scheduler.conf on the cluster's master node (see the sketch below). You'll need /etc/kubernetes/pki/ca.key on each node in order for your config files to include valid data for client-certificate-data and client-key-data.
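Here's a sketch of the regeneration commands, based on the kubeadm alpha phase docs linked in the question; exact subcommand names and flags can vary between kubeadm versions, and <master-ip> / <node-name> are placeholders:

# On the master: move the old config files aside first (kubeadm may refuse
# to overwrite existing ones), then regenerate all four.
mv /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf /tmp/
kubeadm alpha phase kubeconfig all --apiserver-advertise-address <master-ip>

# On each worker node (after copying the CA cert/key into /etc/kubernetes/pki/):
mv /etc/kubernetes/kubelet.conf /tmp/
kubeadm alpha phase kubeconfig kubelet --apiserver-advertise-address <master-ip> --node-name <node-name>

# Restart kubelet so it picks up the regenerated config.
systemctl restart kubelet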
Pro tip: make use of the --apiserver-advertise-address parameter to ensure your new config files contain the correct IP address of the node hosting the kube-apiserver service.
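As a quick sanity check (my addition, not strictly part of the fix), you can confirm the advertise address landed in the regenerated files:

# Each regenerated kubeconfig should point at the master's advertise address
grep 'server:' /etc/kubernetes/*.conf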