kubernetes, certificate, metrics, rbac, k3s

metrics-server unable to authenticate the request due to certificate error


I deployed metrics-server on my cluster. The pods are running as expected.

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

returns:

error: You must be logged in to the server (Unauthorized)

The logs inside the metrics-server pod look like this:

I0727 13:33:23.905320       1 serving.go:273] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
[restful] 2019/07/27 13:33:26 log.go:33: [restful/swagger] listing is available at https://:8443/swaggerapi
[restful] 2019/07/27 13:33:26 log.go:33: [restful/swagger] https://:8443/swaggerui/ is mapped to folder /swagger-ui/
I0727 13:33:26.284542       1 serve.go:96] Serving securely on [::]:8443
W0727 13:33:47.904111       1 x509.go:172] x509: subject with cn=kubernetes-proxy is not in the allowed list: [system:auth-proxy]
E0727 13:33:47.904472       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not allowed, x509: certificate signed by unknown authority]

This error message looks like it points to a misconfigured RBAC rule; however, there is no auth-proxy ClusterRole in my cluster...

subject with cn=kubernetes-proxy is not in the allowed list: [system:auth-proxy]
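
For reference, this is how I checked that no such ClusterRole (or binding) exists; it is plain kubectl, nothing beyond ordinary cluster access is assumed:

kubectl get clusterroles | grep -i auth-proxy
kubectl get clusterrolebindings | grep -i auth-proxy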

Could it be a simple RBAC misconfiguration somewhere?

Setting --kubelet-insecure-tls doesn't help.

I am using k3s version 0.7.0 on bare-metal servers running Ubuntu at Scaleway.


Solution

  • OK, so here's my research from when I encountered the same issue:

    K3S has the following API-server flag (default): --requestheader-allowed-names=system:auth-proxy

    I'm guessing this is a cluster role, but I'm not 100% sure yet, since it doesn't exist in the K3S cluster by default. Looking at the logs, the API server is basically complaining that the CN in the TLS cert used to identify the kubectl top request is not allowed (i.e. not in system:auth-proxy). Why it uses cn=kubernetes-proxy instead of the account mentioned in ~/.kube/config is unknown to me.
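
    To see what the API server currently accepts, you can read it back from the extension-apiserver-authentication ConfigMap (the data key below is the standard one; I'm assuming K3S populates it the same way):

    kubectl -n kube-system get configmap extension-apiserver-authentication \
        -o jsonpath='{.data.requestheader-allowed-names}'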

    Anyhow, the quick fix is as follows: Edit your /etc/systemd/system/k3s.service ExecStart-bit to look like the following:

    ExecStart=/usr/local/bin/k3s \
        server \
        --kube-apiserver-arg="requestheader-allowed-names=system:auth-proxy,kubernetes-proxy"
    

    Then run systemctl daemon-reload and restart K3S using systemctl restart k3s.
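
    In full (assuming K3S is installed as a systemd service named k3s and you have root/sudo):

    sudo systemctl daemon-reload
    sudo systemctl restart k3s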

    You should now see this setting pop up under requestheader-allowed-names when you run: kubectl get configmap -n kube-system "extension-apiserver-authentication" -o yaml.

    Now all you have to do is kill/restart your metrics-server pod, wait a few minutes for it to scrape metrics (by default every 60s), and you should be able to run kubectl top [pod|node].
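
    For example (the k8s-app=metrics-server label is what the stock metrics-server manifests use; adjust it if your deployment is labelled differently):

    kubectl -n kube-system delete pod -l k8s-app=metrics-server
    # give it a minute or so to collect the first metrics, then:
    kubectl top nodes
    kubectl top pods --all-namespaces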

    Since this is good enough for me I'll leave it here, but I am damn curious as to why/how it's using cn=kubernetes-proxy, and why the cert presenting that CN isn't signed by the requestheader-client-ca-file.
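
    If anyone wants to dig further, one starting point (untested; assumes openssl is available) is to pull the request-header client CA out of that same ConfigMap and compare its subject/issuer with whatever signed the kubernetes-proxy cert:

    kubectl -n kube-system get configmap extension-apiserver-authentication \
        -o jsonpath='{.data.requestheader-client-ca-file}' \
        | openssl x509 -noout -subject -issuer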