Kubernetes Authentication in AWS EKS Using IAM Authenticator | by Chirag Modi | Jul, 2022

How Kubernetes integrates with AWS IAM authenticator

EKS Authentication. Image from AWS

Authentication to Kubernetes works differently depending on the cloud provider's implementation. Here I will specifically discuss the authentication implemented by AWS EKS. This article should clarify the following questions.

  • How does authentication work in EKS?
  • What is the AWS IAM Authenticator for Kubernetes?
  • What does “aws eks get-token” do in KubeConfig to access an EKS cluster?
  • What is the “aws-auth” ConfigMap in EKS?
  • How can I add AWS users/roles to access an EKS cluster?
  • How do AWS users/roles map to Kubernetes users and groups in EKS?
  • How do I generate KubeConfig for an EKS cluster?
  • How do users get authorized to perform specific Kubernetes actions?

This article starts with an introduction to the AWS EKS service and then gives a detailed walkthrough of Kubernetes authentication as implemented by AWS EKS using the IAM Authenticator. It covers only the high-level details of Kubernetes authorization using RBAC.

Prerequisites

  • Basic understanding of Kubernetes
  • Understanding of AWS IAM users and roles


As you know, any Kubernetes cluster has a Control plane (Master nodes) and a Data plane (Worker nodes). The control plane is responsible for managing the worker nodes, and the worker nodes are where application workloads are deployed using Kubernetes objects like pods, deployments, services, etc.

Configuring, deploying, and managing Kubernetes is a challenging task, especially with more and more cloud-native technologies evolving. Most companies want to bring their products to market quickly. They don't want to spend time managing clusters; they just want their developers to access the cluster to deploy applications, and to avoid cluster management entirely, or at least be limited to managing just the worker nodes where applications are deployed.

Different cloud providers offer different managed solutions where they manage the control plane and some parts of the data plane.

Elastic Kubernetes Service (EKS)

EKS is a managed solution for Kubernetes by AWS.

I am not going into detail about the features of EKS. In brief, the control plane consists of highly available master nodes managed by AWS, so users cannot access the master nodes; they just access the Kubernetes API to deploy their workloads. Worker nodes can either be managed by AWS or self-managed, in which case users can access them if required.

There are multiple ways of authenticating with Kubernetes clusters, such as certificates, basic authentication using usernames and passwords, and authentication tokens. Here I am focusing on the token-based authentication implemented by the IAM Authenticator.

AWS included native support for IAM users and roles when it launched EKS, so IAM users and roles can authenticate with an EKS cluster. This authentication is performed by a tool called AWS IAM Authenticator.

AWS IAM Authenticator can be installed manually on any Kubernetes cluster, but it comes installed by default with EKS.

Authentication workflow

EKS Authentication Workflow: Image by Author
  • The client executes “kubectl get pods”, which sends a request to the Kubernetes API server. In addition to the standard request parameters, it sends a token in the authorization header: a base64-encoded string representing a signed request to AWS STS.
  • The Kubernetes API server forwards the token to the configured IAM Authenticator server running in the Kubernetes control plane.
  • The IAM Authenticator extracts the token from the request body, base64-decodes it, and validates the URI host, path, request parameters, action, etc. Once validation succeeds, it sends the signed request to AWS STS for the GetCallerIdentity action.
  • AWS STS validates the signature of the request received from the IAM Authenticator, then executes GetCallerIdentity and sends back a response that includes IAM identity details such as user/role information.
  • The IAM Authenticator maps the AWS identity to K8s identities (users and groups) based on rules configured in a ConfigMap object called “aws-auth” in the kube-system namespace. It sends the K8s identities back to the Kubernetes API server.
  • The Kubernetes API server authorizes the K8s users and groups received from the IAM Authenticator against the RBAC rules from Role and RoleBinding objects deployed in the Kubernetes cluster. If the user is allowed to perform the action, the response is sent back to the kubectl client.

I hope the authentication workflow has cleared up many things, but you may still ask how the token gets generated when we send requests using kubectl.

As you know, kubectl uses KubeConfig to connect to the cluster, so once you create a cluster in EKS, you need to generate KubeConfig using the following command.

aws eks update-kubeconfig --region {region} --name {cluster-name}

It will generate/update your KubeConfig file at ~/.kube/config. Take a look at the users section in that file: it uses the “exec” credential plugin to generate a token via the “aws eks get-token” command.
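For reference, the users entry generated for an EKS cluster looks roughly like the sketch below; the cluster name, region, and account ID are placeholder values.

```yaml
# Excerpt of ~/.kube/config produced by "aws eks update-kubeconfig".
# "my-cluster", "us-east-1", and the account ID are placeholders.
users:
- name: arn:aws:eks:us-east-1:111122223333:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - my-cluster
        - --region
        - us-east-1
```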

If you are using an AWS CLI version older than 1.16.156, you will see a different command under “exec”: in that case it uses the “aws-iam-authenticator” client to generate an access token, so you need to download that binary first to access your cluster.

NOTE: you can optionally specify a different AWS_PROFILE to use for authentication.

When you create an EKS cluster, it creates a default ConfigMap called “aws-auth” in the “kube-system” namespace. The AWS user (here, “admin”) who created the cluster automatically becomes the administrator and becomes part of the “system:masters” group in Kubernetes.

In addition to mapping users, you can also map IAM roles, so any user assuming such an IAM role will be mapped to the K8s user and groups configured in the “aws-auth” ConfigMap.
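A role mapping in “aws-auth” looks roughly like the sketch below. The account ID and role name are placeholders; this particular entry is the standard mapping EKS sets up so worker nodes can join the cluster.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder node-instance role; any identity assuming this role
    # is mapped to the node user and the node bootstrap groups.
    - rolearn: arn:aws:iam::111122223333:role/eks-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```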

It’s not done yet. Mapping IAM identities to K8s identities does not give the K8s identities any access to K8s objects. You need to create RBAC rules using Role and RoleBinding objects in K8s, which authorize the K8s identities to perform actions in the K8s cluster.

Kubernetes authorization is a huge topic that is out of scope for this article, so I will not go into detail, but I will cover the essentials as part of an example use case in the next section.

OK, that covers the theory, but let's see it in action. Here you go.

Let’s say I want to add a new user to my AWS EKS cluster and allow this user only to “list, modify, and delete pods and deployments” in the “dev” namespace of the cluster.

As I created the EKS cluster, I automatically get administrator rights on the K8s cluster. When I run any kubectl command, the “exec” command in KubeConfig uses the “default” AWS profile unless a different profile is specified in KubeConfig or set using the AWS_DEFAULT_PROFILE environment variable.

Let me first create a “developer” user with programmatic access in the AWS IAM console, without any AWS permissions. Then create a new AWS profile for the “developer” user using the credentials from the IAM console.

aws configure --profile developer

Now let’s dump the “aws-auth” ConfigMap to YAML, or you can edit the ConfigMap directly.

kubectl -n kube-system get configmap aws-auth -o yaml
kubectl -n kube-system edit configmap aws-auth

We are going to add a mapping for the IAM user named “developer” to a Kubernetes user called “k8s-developer” in the “aws-auth” ConfigMap and apply the ConfigMap to the K8s cluster.
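The added mapping might look like the sketch below; the account ID is a placeholder.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    # Map the IAM user "developer" to the K8s user "k8s-developer"
    - userarn: arn:aws:iam::111122223333:user/developer
      username: k8s-developer
```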

$ kubectl apply -f aws-auth-configmap2.yaml
configmap/aws-auth configured

Note: remember there is no “User” object in Kubernetes, so we can give the K8s user any name we want; it does not have to be the same as the IAM user name.

We are done with the mapping; now we need to give the required permissions to the K8s user “k8s-developer” using RBAC rules, so let’s create Role and RoleBinding objects and deploy them to the K8s cluster.
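The manifests might look like the sketch below. The object names are my own, and the verbs cover the list/modify/delete use case described earlier (plus create, which the verification step exercises).

```yaml
# Hypothetical role-rolebinding.yaml: grant "k8s-developer" pod and
# deployment permissions in the "dev" namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-pods-deployments        # hypothetical name
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-pods-deployments-binding  # hypothetical name
  namespace: dev
subjects:
  - kind: User
    name: k8s-developer               # must match the username in aws-auth
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-pods-deployments
  apiGroup: rbac.authorization.k8s.io
```

Because a Role is namespaced, these rules apply only inside “dev”; anything cluster-scoped, such as listing nodes, remains forbidden.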

$ kubectl apply -f role-rolebinding.yaml
role.rbac.authorization.k8s.io/… created
rolebinding.rbac.authorization.k8s.io/… created

Let’s verify that the “developer” user is able to create and list pods but is not able to list K8s nodes.

# switch to developer profile
$ export AWS_DEFAULT_PROFILE=developer
# create pod
$ kubectl -n dev run nginx --image=nginx --restart=Never --port=80
pod/nginx created
# list pod
$ kubectl -n dev get pods
nginx 1/1 Running 0 5s
# list k8s nodes
$ kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "k8s-developer" cannot list resource "nodes" in API group "" at the cluster scope

If you switch back to the “default” AWS profile and try to list nodes, it should work fine, as that profile uses the IAM user named “admin,” which is mapped to the “admin” user and the “system:masters” group in K8s, providing administrator access to the cluster.

I hope you got answers to all the questions posed at the start of the article, and that things are pretty clear now.

If you liked the article and learned something new, leave your thoughts.

Thanks for reading!
