azure-devops, terraform, amazon-eks, credentials, rbac

Accessing an EKS cluster created through an Azure DevOps pipeline


I am trying to access, for testing purposes, an EKS cluster that was created earlier with Terraform through an Azure DevOps pipeline. The pipeline runs on an agent in AWS that is not publicly reachable over SSH. When I try to access the cluster I get "kubectl error: You must be logged in to the server (Unauthorized)".

I understand that when an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator. Initially, only that IAM entity can make calls to the Kubernetes API server using kubectl.

I am a federated user who assumes an admin role in the AWS account. Is there a way to add my role credentials to the cluster to allow access?

aws sts get-caller-identity returns my credentials as below:

    UserId:  ******
    Account: *****
    Arn:     arn:aws:sts:************:assumed-role/admins/{accountname}
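For reference, the access attempt itself is just the standard kubeconfig setup (cluster name and region are placeholders):

    aws eks update-kubeconfig --name my-cluster --region eu-west-1
    kubectl get nodes
    # error: You must be logged in to the server (Unauthorized)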

Alternatively, if I re-create the cluster the same way with Terraform through a pipeline, which credentials should I add to the configuration so that I can access the cluster with my current role? I am not able to create a new IAM user.

Any help would be greatly appreciated.


Solution

  • As you rightly note, only the IAM principal used to provision the cluster has access initially. If you want to access the cluster with your own credentials, you need to add them to the aws-auth ConfigMap during the provisioning process. You can see an example of how to do this here. If you don't want to use the eks-auth module, you can instead follow this approach: https://dev.to/fukubaka0825/manage-eks-aws-auth-configmap-with-terraform-4ndp. Essentially, you add a mapRoles entry that maps the IAM role assumed by your federated user to a Kubernetes group, e.g. system:masters, which is bound to the cluster-admin ClusterRole. For additional information about the aws-auth ConfigMap see https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html. A sketch of such a mapRoles entry follows below.
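Below is a minimal Terraform sketch of what that mapRoles entry could look like. The resource names (aws_eks_cluster.this, aws_iam_role.node) and the account ID are placeholders, not taken from your setup. Two things to keep in mind: the ARN you put in aws-auth must be the underlying IAM role ARN (arn:aws:iam::<account-id>:role/admins), not the STS assumed-role ARN shown by get-caller-identity; and if your EKS module or node group already creates the aws-auth ConfigMap, you should manage or patch that existing resource instead of creating a new one.

    # Sketch only: resource names, account ID and role name are placeholders.
    data "aws_eks_cluster_auth" "this" {
      name = aws_eks_cluster.this.name
    }

    provider "kubernetes" {
      host                   = aws_eks_cluster.this.endpoint
      cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
      token                  = data.aws_eks_cluster_auth.this.token
    }

    resource "kubernetes_config_map" "aws_auth" {
      metadata {
        name      = "aws-auth"
        namespace = "kube-system"
      }

      data = {
        mapRoles = yamlencode([
          {
            # IAM role assumed by the federated user (IAM role ARN, not the STS assumed-role ARN)
            rolearn  = "arn:aws:iam::123456789012:role/admins"
            username = "admins"
            groups   = ["system:masters"]
          },
          {
            # Preserve the existing worker-node mapping so nodes can keep joining the cluster
            rolearn  = aws_iam_role.node.arn
            username = "system:node:{{EC2PrivateDNSName}}"
            groups   = ["system:bootstrappers", "system:nodes"]
          },
        ])
      }
    }

After applying, regenerate your kubeconfig with aws eks update-kubeconfig --name <cluster> --region <region> and verify access with kubectl get nodes while the admin role is assumed.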