I'm migrating to AWS SSO for CLI access, which has worked for everything except kubectl so far. While troubleshooting it I followed a few guides, which means I've ended up with some cargo-cult behaviour, and I'm obviously missing something in my mental model.
aws sts get-caller-identity
{
"UserId": "<redacted>",
"Account": "<redacted>",
"Arn": "arn:aws:sts::<redacted>:assumed-role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87/<my username>"
}
kubectl get pods
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts:::assumed-role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87/ is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam:::role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87
It's amusing that it seems to be trying to assume the same role that it's already using, but I'm not sure how to fix it.
~/.aws/config (subset - I have other profiles, but they aren't relevant here)
[default]
region = us-east-2
output = json
[profile default]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadonly
region = us-east-2
sso_region = us-east-2
output = json
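For reference, this is how I log in and sanity-check what the CLI resolves for the profile (AWS CLI v2; nothing here is specific to my setup beyond the profile name):

aws sso login --profile default
aws configure list --profile default   # shows where each setting (region, credentials) is sourced from
aws configure list-profiles            # lists every profile the CLI can see
aws sts get-caller-identity --profile default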
~/.kube/config (with clusters removed)
apiVersion: v1
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:<redacted>:cluster/foo
    user: ro
  name: ro
current-context: ro
kind: Config
preferences: {}
users:
- name: ro
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - foo
      - --role
      - arn:aws:iam::<redacted>:role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87
      command: aws
      env: null
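To rule kubectl out, I can run the exec plugin's command by hand with exactly the arguments from the user entry above; it fails with the same AccessDenied, which tells me the problem is in how aws eks get-token is being invoked rather than in kubectl or the cluster:

aws --region us-east-2 eks get-token \
  --cluster-name foo \
  --role arn:aws:iam::<redacted>:role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87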
aws-auth mapRoles snippet
- rolearn: arn:aws:iam::<redacted>:role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87
  username: "devread:{{SessionName}}"
  groups:
  - view
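For completeness, that mapping lives in the aws-auth ConfigMap in kube-system (viewable with the command below). Note the rolearn is deliberately written without the aws-reserved/sso.amazonaws.com/... path, since aws-auth can't match a role ARN that includes a path.

kubectl -n kube-system get configmap aws-auth -o yaml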
What obvious thing am I missing? I've reviewed the other Stack Overflow posts with similar issues, but none had the arn:aws:sts:::assumed-role -> arn:aws:iam:::role path.
~/.aws/config had a subtle error: [profile default] isn't meaningful, so the two blocks should have been merged into a single [default] section. Only the non-default profiles should have profile in the section name.
[default]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadonly
region = us-east-2
sso_region = us-east-2
output = json
[profile rw]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadWrite
region = us-east-2
sso_region = us-east-2
output = json
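A quick check that both sections resolve the way I expect (AWS CLI v2; the ReadWrite role's suffix will differ from the placeholder):

aws sso login --profile rw
aws sts get-caller-identity --profile rw
# the Arn should come back as .../assumed-role/AWSReservedSSO_DeveloperReadWrite_<suffix>/<my username>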
I also changed ~/.kube/config to get the token based on the profile instead of naming the role explicitly. That fixed the AssumeRole failure, since the token is now issued with the credentials of the role I'm already in rather than attempting a second sts:AssumeRole.
apiVersion: v1
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:<redacted>:cluster/foo
    user: ro
  name: ro
current-context: ro
kind: Config
preferences: {}
users:
- name: ro
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - foo
      - --profile
      - default
      command: aws
      env: null
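Running the exec command by hand now returns a normal ExecCredential JSON, with no sts:AssumeRole involved, since the token is minted from the SSO session's existing role:

aws --region us-east-2 eks get-token --cluster-name foo --profile default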
I can now run kubectl config use-context ro, or switch to the other contexts I've defined for my other profiles (omitted for brevity).
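Purely for illustration (these names are mine, not from the contexts I omitted), an extra read-write context is the same shape with just the user name and profile swapped:

# illustrative only: additional entries under users: and contexts:
users:
- name: rw
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - foo
      - --profile
      - rw
      command: aws
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:<redacted>:cluster/foo
    user: rw
  name: rw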
On a related note, I had some trouble getting an older Terraform version to work, since its S3 backend didn't handle SSO credentials. aws-vault solved this for me.
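Roughly what that looks like, assuming a recent aws-vault (6.x, which understands SSO profiles) and the default profile from above; aws-vault exports short-lived credentials into the environment, so the old S3 backend never has to understand SSO itself:

aws-vault exec default -- terraform init
aws-vault exec default -- terraform plan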