I can't get a role binding right in order to read node status from an app that runs in a pod on GKE.
I am able to create a pod from there, but not get node status. Here is the role I am creating:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
This is the error I get when I call getNodeStatus:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "nodes \"gke-cluster-1-default-pool-36c26e1e-2lkn\" is forbidden: User \"system:serviceaccount:default:sa-poc\" cannot get nodes/status at the cluster scope: Unknown user \"system:serviceaccount:default:sa-poc\"",
  "reason": "Forbidden",
  "details": {
    "name": "gke-cluster-1-default-pool-36c26e1e-2lkn",
    "kind": "nodes"
  },
  "code": 403
}
I tried some minor variations but did not succeed.
The Kubernetes version on GKE is 1.8.4-gke.
Subresource permissions are represented as <resource>/<subresource>, so in the role you would specify resources: ["nodes", "nodes/status"].
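A sketch of the updated role with the status subresource added:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["nodes", "nodes/status"] # include the status subresource
  verbs: ["get", "watch", "list"]

Once the role and binding are applied, you can check the permission without redeploying the app by impersonating the service account:

kubectl auth can-i get nodes/status --as=system:serviceaccount:default:sa-poc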