Access to an EKS cluster is granted and configured through the aws-auth
config map, which is created by default in every EKS cluster. By default, the IAM entity (role or user) that created the cluster is granted administrator privileges. To grant access to additional IAM entities, you need to modify the aws-auth
config map. A basic config map looks something like this:
Name: aws-auth
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::111122223333:role/my-node-role
  username: system:node:{{EC2PrivateDNSName}}
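For example, to grant an additional IAM user access, you could run kubectl edit configmap aws-auth -n kube-system and add a mapUsers section alongside mapRoles. A sketch of what that entry might look like (the account ID and user name here are placeholders, not from a real cluster):

```yaml
mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/example-user
    username: example-user
    # Kubernetes groups to assign; system:masters grants cluster-admin
    groups:
      - system:masters
```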
Recently at Venmo we wanted to start using SSO roles to manage our EKS clusters. At first we attempted to add the full ARN of the SSO role to aws-auth
like so:
Name: aws-auth
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
mapRoles:
----
- groups:
  - system:masters
  rolearn: arn:aws:iam::111111111111:role/aws-reserved/sso.amazonaws.com/us-east-1/AWSReservedSSO_AdministratorAccess_1310ava013076dfb
  username: system:masters:sso-admin
Unfortunately, this didn't work. The authenticator doesn't handle role ARNs that include a path, so instead of using the full ARN with the aws-reserved/sso.amazonaws.com path, you need to strip the path so the ARN matches the traditional IAM role format. In the example above, changing the ARN to arn:aws:iam::111111111111:role/AWSReservedSSO_AdministratorAccess_1310ava013076dfb
will work.
Name: aws-auth
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
mapRoles:
----
- groups:
  - system:masters
  # Updated rolearn with the fix
  rolearn: arn:aws:iam::111111111111:role/AWSReservedSSO_AdministratorAccess_1310ava013076dfb
  username: system:masters:sso-admin
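The transformation is just dropping the path segments between role/ and the role name. A quick shell sketch using the example ARN from this post:

```shell
# Full SSO role ARN, including the aws-reserved/sso.amazonaws.com path
FULL_ARN="arn:aws:iam::111111111111:role/aws-reserved/sso.amazonaws.com/us-east-1/AWSReservedSSO_AdministratorAccess_1310ava013076dfb"

# Keep everything up to ":role/", then append only the final path
# component (the role name itself), discarding the path in between
FIXED_ARN="${FULL_ARN%%:role/*}:role/${FULL_ARN##*/}"

echo "$FIXED_ARN"
# arn:aws:iam::111111111111:role/AWSReservedSSO_AdministratorAccess_1310ava013076dfb
```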
Additional links
