Tags: kubernetes, kubelet, k8s-rolebinding, k8s-cluster-role

What's the story of kubelet's authorization mechanics and unnecessary ClusterRoleBindings?


I'm trying to understand a simple, basic kubeadm init control plane setup.

The kubeconfig file in /etc/kubernetes/kubelet.conf is used by the kubelet process at startup time:

ubuntu@c1:~$ ps -ef | grep kubelet | sed s/\\s--/\\n--/g
root       35361       1  1 Mar17 ?        00:51:48 /usr/bin/kubelet
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
--kubeconfig=/etc/kubernetes/kubelet.conf
--config=/var/lib/kubelet/config.yaml
--container-runtime=remote
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
--pod-infra-container-image=registry.k8s.io/pause:3.8
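
A side note on those two kubeconfig flags: as far as I can tell from the docs, --bootstrap-kubeconfig is only consumed on the kubelet's very first start, to request a client certificate via TLS bootstrapping; from then on, kubelet-client-current.pem is maintained as a symlink to the most recently rotated certificate. On a kubeadm node that should look roughly like this (timestamp and link target are illustrative):

ubuntu@c1:~$ sudo ls -l /var/lib/kubelet/pki/kubelet-client-current.pem
lrwxrwxrwx 1 root root 59 Mar 14 14:23 /var/lib/kubelet/pki/kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2023-03-14-14-23-01.pem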

It tells the kubelet to use a "user" named "system:node:c1", where "c1" is my node's name:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ✂ ✂ ✂
    server: https://k8scp:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:c1
  name: system:node:c1@kubernetes
current-context: system:node:c1@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:c1
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem

As far as I understand, this kubelet's identity was established by its certificate's CN during the kubeadm init certs phase and is used by the kubelet to authenticate against the API server.
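
That identity can be checked directly on the certificate itself. Assuming openssl is installed on the node, printing the subject of the client certificate should show both the organization (the group) and the CN (the user name), roughly like this:

ubuntu@c1:~$ sudo openssl x509 -noout -subject \
    -in /var/lib/kubelet/pki/kubelet-client-current.pem
subject=O = system:nodes, CN = system:node:c1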

Now, poking around turns up a ClusterRoleBinding named "system:node" which has a "roleRef" of kind "ClusterRole" and name "system:node". But(!) there is no "subjects" entry; it's missing:

ubuntu@c1:~$ kubectl get clusterrolebinding system:node -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2023-03-14T14:22:57Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:node
  resourceVersion: "144"
  uid: 256e1f6b-e491-45d0-beda-1e250b260f46
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
ubuntu@c1:~$ kubectl describe clusterrolebinding system:node
Name:         system:node
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
Role:
  Kind:  ClusterRole
  Name:  system:node
Subjects:
  Kind  Name  Namespace
  ----  ----  ---------
ubuntu@c1:~$

The resource specification of a ClusterRoleBinding says that a subjects array is expected. I guess an empty (or omitted) array is still valid. But then this binding cannot associate its role with the kubelet's authentication context mentioned above.
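
One way to probe what that identity may actually do is impersonation. Assuming admin credentials, kubectl auth can-i can ask the API server on behalf of the node's user and group; if this binding really grants nothing, a broad request like the following should (to my understanding) come back negative:

ubuntu@c1:~$ kubectl auth can-i list pods \
    --as=system:node:c1 --as-group=system:nodes
no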

Kubelet processes live outside the orchestration boundary; they are managed by systemd (at least on Ubuntu nodes). So how do kubelets get authorized, and what roles/rights do they have?


Solution

  • finally, a little over four months later, i stumbled upon the answer (by myself? by some pitying, compassionate kami-sama? we will never know). but for all the lost souls out there, desperately in search of enlightenment, here's the answer:

    as the docs say, there are multiple authorization modes, and the Node mode specifically authorizes API requests made by kubelets:

    In order to be authorized by the Node authorizer, kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:[nodeName]. This group and user name format match the identity created for each kubelet as part of kubelet TLS bootstrapping.

    The value of nodeName must match precisely the name of the node as registered by the kubelet. By default, this is the host name.

    my cluster was set up with kubeadm according to the docs. the critical part here is the API server configuration; mine has both the Node and RBAC authorization modes enabled:

    ubuntu@c1:~$ ps -eF | sed 's/--/\n--/g' | grep 'kube-apiserver' -z
    ✂
    root        1240    1012  5 279405 370184 0 Aug03 ?        02:28:24 kube-apiserver
    --✂
    --authorization-mode=Node,RBAC
    --enable-admission-plugins=NodeRestriction
    --✂
    ✂
    

    concerning the "unnecessary clusterrole and clusterrolebinding" from the title of this question, the same doc's section on RBAC Node Permissions sheds some more nourishing light on that topic (for the interested, here is their full explanation, which is mostly about backward compatibility):

    In 1.6, the system:node cluster role was automatically bound to the system:nodes group when using the RBAC Authorization mode.

    In 1.7, the automatic binding of the system:nodes group to the system:node role is deprecated because the node authorizer accomplishes the same purpose with the benefit of additional restrictions on secret and configmap access. If the Node and RBAC authorization modes are both enabled, the automatic binding of the system:nodes group to the system:node role is not created in 1.7.

    In 1.8, the binding will not be created at all.

    When using RBAC, the system:node cluster role will continue to be created, for compatibility with deployment methods that bind other users or groups to that role.
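
    for completeness: the "deployment methods that bind other users or groups to that role" from that last quoted paragraph would use an ordinary ClusterRoleBinding with an explicit subject. a minimal sketch (the binding's name is made up; nothing like this is created by kubeadm):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: my-system-node-binding   # hypothetical name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:node
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:nodes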