Tags: kubernetes, kubectl, bearer-token

kubectl --token=$TOKEN doesn't run with the permissions of the token


When I use the kubectl command with the --token flag and specify a token, it still uses the administrator credentials from the kubeconfig file.

This is what I did:

NAMESPACE="default"
SERVICE_ACCOUNT_NAME="sa1"
kubectl create sa $SERVICE_ACCOUNT_NAME
kubectl create clusterrolebinding list-pod-clusterrolebinding \
     --clusterrole=list-pod-clusterrole \
     --serviceaccount="$NAMESPACE":"$SERVICE_ACCOUNT_NAME"
kubectl create clusterrole list-pod-clusterrole \
     --verb=list \
     --resource=pods

TOKEN=`kubectl get secrets $(kubectl get sa $SERVICE_ACCOUNT_NAME -o json | jq -r '.secrets[].name') -o json | jq -r '.data.token' | base64 -d`
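(Side note: on Kubernetes 1.24+ the token Secret is no longer created automatically for a service account, so the jq lookup above may come back empty; on such clusters a short-lived token can be requested directly instead:)

TOKEN=$(kubectl create token "$SERVICE_ACCOUNT_NAME" -n "$NAMESPACE")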

# Expected this to fail, but it doesn't, because it uses the admin credentials
kubectl get secrets --token $TOKEN

The token only has permission to list pods, so I expected kubectl get secrets --token $TOKEN to fail, but it doesn't, because it still uses the administrator's context.
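As a sanity check, the RBAC itself can be verified with impersonation, which tests the service account's permissions without involving the token at all:

kubectl auth can-i list pods --as="system:serviceaccount:$NAMESPACE:$SERVICE_ACCOUNT_NAME"     # expected: yes
kubectl auth can-i list secrets --as="system:serviceaccount:$NAMESPACE:$SERVICE_ACCOUNT_NAME"  # expected: no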

I don't want to create a new context; I know kubectl has the ability to use a bearer token directly and I want to understand how to do it.

I also tried kubectl get secrets --insecure-skip-tls-verify --server https://<master_ip>:6443 --token $TOKEN and it also did not return a Forbidden result.
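To rule out a problem with the token itself, it can also be sent directly with curl, bypassing kubectl and the kubeconfig entirely (same <master_ip> placeholder as above). With only the list-pods role, this request should come back 403 Forbidden:

curl -k -H "Authorization: Bearer $TOKEN" \
     https://<master_ip>:6443/api/v1/namespaces/default/secrets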

If you want to test it, you can use Katacoda:
https://www.katacoda.com/courses/kubernetes/playground

EDIT:

I tried to create a context with this:

NAMESPACE="default"
SERVICE_ACCOUNT_NAME="sa1"
CONTEXT_NAME="sa1-context"
USER_NAME="sa1-username"
CLUSTER_NAME="kubernetes"

kubectl create sa "$SERVICE_ACCOUNT_NAME" -n "$NAMESPACE"
SECRET_NAME=`kubectl get serviceaccounts $SERVICE_ACCOUNT_NAME -n $NAMESPACE -o json | jq -r '.secrets[].name'`
TOKEN=`kubectl get secrets $SECRET_NAME -n $NAMESPACE -o json | jq -r '.data | .token' | base64 -d`

# Create user with the JWT token of the service account
echo "[*] Setting credentials for user: $USER_NAME"
kubectl config set-credentials $USER_NAME --token=$TOKEN

# Make sure the cluster name is correct!
echo "[*] Setting context: $CONTEXT_NAME"
kubectl config set-context $CONTEXT_NAME \
     --cluster=$CLUSTER_NAME \
     --namespace=$NAMESPACE \
     --user=$USER_NAME

But when I tried kubectl get secrets --context $CONTEXT_NAME it still succeeded, even though it was supposed to fail because the service account doesn't have permission for that.
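One way to sanity-check what that context actually resolves to is to print only its merged configuration; the sa1-username user should show a token (redacted by default) and no client certificate:

kubectl config view --minify --context=$CONTEXT_NAME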

Edit 2:
An option that runs it correctly, by passing the server and credentials explicitly to kubectl:

kubectl get pods --token `cat /home/natan/token` -s https://<ip>:8443 --certificate-authority /root/.minikube/ca.crt --all-namespaces

Or without TLS verification:

kubectl get pods --token `cat /home/natan/token` -s https://<ip>:8443 --insecure-skip-tls-verify --all-namespaces

Solution

  • This is tricky: if you are using a client certificate to authenticate to the Kubernetes API server, overriding the token with kubectl is not going to work, because certificate authentication happens early in the process, during the TLS handshake. Even if you provide a token to kubectl, it will be ignored. This is why you are able to get secrets: the client certificate has permission to get secrets, and the token is ignored.

    So if you want kubectl to use the token, the kubeconfig file should not contain a client certificate; then you can override the credentials with the --token flag in kubectl. See the discussion in the question on how to create a kubeconfig file for a service account token.
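    For example, a minimal sketch of a token-only kubeconfig written to a separate file (the server address, CA path, and names below are placeholders and need to match your cluster):

    export KUBECONFIG=/tmp/sa1-kubeconfig
    kubectl config set-cluster kubernetes \
         --server=https://<master_ip>:6443 \
         --certificate-authority=/path/to/ca.crt \
         --embed-certs=true
    kubectl config set-credentials sa1-username --token="$TOKEN"
    kubectl config set-context sa1-context \
         --cluster=kubernetes \
         --namespace=default \
         --user=sa1-username
    kubectl config use-context sa1-context

    # No client certificate is present in this file, so the request is
    # authenticated with the bearer token and should now be Forbidden:
    kubectl get secrets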

    You can also see the bearer token being sent by kubectl by increasing the verbosity:

    kubectl get pods --v=10 2>&1 | grep -i bearer