
Permission error using kubectl after GKE Kubernetes cluster migration from one organization to another


I had a GKE cluster my-cluster created in a project that belonged to organization org1.

When the cluster was created I logged in with [email protected] using gcloud auth login and configured the local kubeconfig using gcloud container clusters get-credentials my-cluster --region europe-west4 --project project.

Recently we had to migrate this project (with the GKE cluster) to another organization, org2. We did it successfully, following the documentation.

The IAM owner in org2 is [email protected]. To reconfigure the kubeconfig I repeated the previous steps, this time logging in with [email protected]:

gcloud auth login

gcloud container clusters get-credentials my-cluster --region europe-west4 --project project

When I execute kubectl get pods I get an error referencing the old org1 user:

Error from server (Forbidden): pods is forbidden: User "[email protected]" cannot list resource "pods" in API group "" in the namespace "default": requires one of ["container.pods.list"] permission(s).

What's the problem here?


Solution

  • This may not be the whole answer but hopefully it's part of it.

    gcloud container clusters get-credentials is a convenience function that mutates the local ${KUBECONFIG} (often ~/.kube/config) and populates it with cluster, context and user properties.

    I suspect (!?) your KUBECONFIG has become inconsistent.

    You should be able to edit it directly to better understand what's happening.

    There are 3 primary blocks: clusters, contexts and users. You're looking for entries (one each of cluster, context and user) for your old GKE cluster and for your new GKE cluster.
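    To illustrate, a stripped-down sketch of that layout (all names and the server endpoint here are hypothetical, and real entries carry certificate and auth data too):

    ```yaml
    apiVersion: v1
    kind: Config
    clusters:
    - name: gke_my-project_europe-west4_my-cluster
      cluster:
        server: https://203.0.113.10        # hypothetical endpoint
    contexts:
    - name: gke_my-project_europe-west4_my-cluster
      context:
        cluster: gke_my-project_europe-west4_my-cluster
        user: gke_my-project_europe-west4_my-cluster
    users:
    - name: gke_my-project_europe-west4_my-cluster
      user:
        exec:
          command: gke-gcloud-auth-plugin   # gcloud's auth helper
    current-context: gke_my-project_europe-west4_my-cluster
    ```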

    Don't delete anything: either back the file up first, or rename the entries.

    Each section will have a name property that reflects the GKE cluster's project, location and name: gke_${PROJECT}_${LOCATION}_${CLUSTER}.
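    With hypothetical values (substitute your own project, location and cluster), the name is just those three pieces joined with underscores:

    ```shell
    # Hypothetical project/location/cluster; substitute your own values
    PROJECT=my-project
    LOCATION=europe-west4
    CLUSTER=my-cluster

    echo "gke_${PROJECT}_${LOCATION}_${CLUSTER}"
    # → gke_my-project_europe-west4_my-cluster
    ```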

    It may be simply that the current-context is incorrect.
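    If so, you can read that pointer straight out of the file. A sketch against a hypothetical sample file (a real one usually lives at ~/.kube/config):

    ```shell
    # Write a minimal, hypothetical kubeconfig to a scratch path
    printf 'apiVersion: v1\nkind: Config\ncurrent-context: gke_my-project_europe-west4_my-cluster\n' > /tmp/sample-kubeconfig

    # Show which context kubectl would use
    grep '^current-context:' /tmp/sample-kubeconfig | awk '{print $2}'
    # → gke_my-project_europe-west4_my-cluster
    ```

    On a real machine, kubectl config current-context prints the same value, and kubectl config use-context NAME switches it; if the pointer is the only problem, that one command fixes it.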

    NOTE Even though gcloud creates user entries for each cluster, these are usually identical (per user) and so you can simplify this section.
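    For instance (names hypothetical), two contexts can point at one shared user entry:

    ```yaml
    contexts:
    - name: gke_old-project_europe-west4_my-cluster
      context:
        cluster: gke_old-project_europe-west4_my-cluster
        user: gcloud-user                   # shared entry
    - name: gke_new-project_europe-west4_my-cluster
      context:
        cluster: gke_new-project_europe-west4_my-cluster
        user: gcloud-user                   # same shared entry
    users:
    - name: gcloud-user
      user:
        exec:
          command: gke-gcloud-auth-plugin
    ```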

    NOTE If you always use gcloud, it does a decent job of tidying up (removing entries) too.