Tags: kubernetes, google-kubernetes-engine, kubeconfig

New Kubernetes service account appears to have cluster admin permissions


I'm seeing strange behavior from newly created Kubernetes service accounts: their tokens appear to grant unrestricted access to our cluster.

If I create a new namespace, create a new service account inside that namespace, and then use the service account's token in a new kubeconfig, I am able to perform any action in the cluster.

# SERVER is the only variable you'll need to change to replicate on your own cluster
SERVER=https://k8s-api.example.com
NAMESPACE=test-namespace
SERVICE_ACCOUNT=test-sa

# Create a new namespace and service account
kubectl create namespace "${NAMESPACE}"
kubectl create serviceaccount -n "${NAMESPACE}" "${SERVICE_ACCOUNT}"

SECRET_NAME=$(kubectl get serviceaccount "${SERVICE_ACCOUNT}" -n "${NAMESPACE}" -o jsonpath='{.secrets[*].name}')
CA=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.ca\.crt}')
TOKEN=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.token}' | base64 --decode)

# Create the config file using the certificate authority and token from the newly created
# service account
echo "
apiVersion: v1
kind: Config
clusters:
- name: test-cluster
  cluster:
    certificate-authority-data: ${CA}
    server: ${SERVER}
contexts:
- name: test-context
  context:
    cluster: test-cluster
    namespace: ${NAMESPACE}
    user: ${SERVICE_ACCOUNT}
current-context: test-context
users:
- name: ${SERVICE_ACCOUNT}
  user:
    token: ${TOKEN}
" > config

Running that ^ as a shell script yields a config in the current directory. The problem is, using that file, I'm able to read and edit all resources in the cluster. I'd like the newly created service account to have no permissions unless I explicitly grant them via RBAC.

# All pods are shown, including kube-system pods
KUBECONFIG=./config kubectl get pods --all-namespaces

# And I can edit any of them
KUBECONFIG=./config kubectl edit pods -n kube-system some-pod

I haven't added any role bindings to the newly created service account, so I would expect it to receive access denied responses for all kubectl queries using the newly generated config.
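
One way to verify this, independent of the generated kubeconfig, is to ask the API server directly what the service account is allowed to do. Below is a minimal sketch using kubectl's built-in authorization checks; it assumes the namespace and service account names from the script above, and that your current context is allowed to impersonate service accounts.

# Ask the API server what test-sa may do, by impersonating it from an admin context
kubectl auth can-i --list --as=system:serviceaccount:test-namespace:test-sa

# Spot-check a specific verb/resource the service account should NOT be allowed to use
kubectl auth can-i get pods --all-namespaces --as=system:serviceaccount:test-namespace:test-sa

# The same list can be produced with the generated kubeconfig itself
KUBECONFIG=./config kubectl auth can-i --list

If RBAC were behaving as expected, the --list output would show little more than the self-access-review permissions every authenticated user gets.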

Below is the decoded payload of the test-sa service account's JWT that's embedded in config:

{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "test-namespace",
  "kubernetes.io/serviceaccount/secret.name": "test-sa-token-fpfb4",
  "kubernetes.io/serviceaccount/service-account.name": "test-sa",
  "kubernetes.io/serviceaccount/service-account.uid": "7d2ecd36-b709-4299-9ec9-b3a0d754c770",
  "sub": "system:serviceaccount:test-namespace:test-sa"

}

Things to consider...

  • RBAC seems to be enabled in the cluster as I see rbac.authorization.k8s.io/v1 and rbac.authorization.k8s.io/v1beta1 in the output of kubectl api-versions | grep rbac as suggested in this post. It is notable that kubectl cluster-info dump | grep authorization-mode, as suggested in another answer to the same question, doesn't show output. Could this suggest RBAC isn't actually enabled?
  • My own user has cluster-admin privileges, but I would not expect those to carry over to service accounts it creates.
  • We're running our cluster on GKE.
  • As far as I'm aware, we don't have any unorthodox RBAC roles or bindings in the cluster that would cause this, but I could be missing something or be generally unaware of a K8s RBAC configuration that would cause it (see the checks sketched after this list).
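
Two checks that can help rule out the scenarios above. This is only a sketch: CLUSTER_NAME and ZONE are placeholders for your own cluster, and it assumes gcloud access to the GKE project plus an admin kubeconfig.

# On GKE the control plane is managed, so kubectl cluster-info dump won't show the
# API server's authorization flags; instead, check whether legacy (ABAC) authorization
# is enabled on the cluster
gcloud container clusters describe CLUSTER_NAME --zone ZONE --format='value(legacyAbac.enabled)'

# List every ClusterRoleBinding with the role it grants and the subjects it binds,
# to spot bindings that target broad groups such as system:serviceaccounts
kubectl get clusterrolebindings -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name'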

Am I correct in my assumption that newly created service accounts should have extremely limited cluster access, and the above scenario shouldn't be possible without permissive role bindings being attached to the new service account? Any thoughts on what's going on here, or ways I can restrict the access of test-sa?


Solution

  • It turns out a ClusterRoleBinding was granting the cluster-admin ClusterRole to the system:serviceaccounts group. This resulted in every service account in our cluster having cluster-admin privileges.

    It seems that early on in the cluster's life the following ClusterRoleBinding was created:

    kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
    

    WARNING: Never apply this binding to your cluster ☝️

    We have since removed this overly permissive binding and rightsized all service account permissions.
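
    For anyone cleaning up a similar misconfiguration, the fix looked roughly like the sketch below. The RoleBinding name and the choice of the built-in "view" ClusterRole are only illustrations; the right scope will differ per workload.

    # Remove the overly broad binding created earlier
    kubectl delete clusterrolebinding serviceaccounts-cluster-admin

    # Grant permissions explicitly instead, scoped to a single namespace: this binds the
    # built-in "view" ClusterRole to test-sa via a namespaced RoleBinding, giving it
    # read-only access to common resources in test-namespace only
    kubectl create rolebinding test-sa-view --clusterrole=view --serviceaccount=test-namespace:test-sa --namespace=test-namespace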

    Thank you to the folks who provided useful answers and comments to this question; they were helpful in tracking down the issue. This was a very dangerous RBAC configuration, and we are pleased to have it resolved.