Author: Peter Balogh

Provider agnostic authentication and authorization in Kubernetes

The Banzai Cloud Pipeline platform allows enterprises to develop, deploy and scale container-based applications on six cloud providers, using multiple Kubernetes distributions. One significant difference between cloud provider managed Kubernetes (we support ACSK, EKS, AKS, GKE, DO and OKE) and our own Banzai Cloud Pipeline Kubernetes Engine is the ability for us to access the Kubernetes API server and to configure it.

Our enterprise customers demand the same standards whether they are using Banzai Cloud’s PKE distribution in a hybrid environment or a cloud provider managed Kubernetes - the ability to authenticate and authorize (e.g. from LDAP, Active Directory or any other provider such as GitHub, GitLab, Google, etc.) in a unified and provider agnostic way.

This architecture gives them the same strong security measures - multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, etc. - whether it is a managed environment or PKE, all through Pipeline.


  • The Banzai Cloud Pipeline platform can spin up clusters on 6 cloud providers
  • Enterprises would like to use their own LDAP or AD to authenticate and authorize users in a cloud agnostic way
  • Cloud provider managed Kubernetes does not allow customizations of the K8s API server
  • We use Dex and dynamically plug-in multiple backends
  • Banzai Cloud open-sourced JWT-to-RBAC to automatically generate RBAC resources based on JWT tokens

JWT-to-RBAC Flow

In order to understand the differences and the available options, let's first go through the methods available for authentication and authorization in Kubernetes.


In a Kubernetes cluster there are quite a few options for authentication:

  • X509 client certificates
  • Static token file
  • Bootstrap tokens
  • Static password file
  • Service account tokens
  • OpenID Connect tokens
  • Webhook token authentication
  • Authenticating proxy

Regardless of whether you manage your own Kubernetes cluster or use the Banzai Cloud Pipeline Kubernetes Engine, you have unrestricted control over the API server, so any of the above authentication methods will work.

X509 client certificates: Client certificate authentication is enabled by passing the --client-ca-file=cacertfile option to the API server. This is the most commonly used option with kubectl for user authentication.
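With client certificates, the API server reads the user name from the certificate subject's Common Name and group memberships from its Organization fields. As a quick, illustrative sketch (file names are hypothetical), you can generate a throwaway client certificate and inspect the identity it would assert:

```shell
# Create a self-signed client certificate for user "jane" in group "developers"
# (CN = user name, O = group) - for illustration only; in a real cluster the
# cert must be signed by the CA passed via --client-ca-file.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/jane.key -out /tmp/jane.crt -days 1 \
  -subj "/CN=jane/O=developers"

# Show the subject the API server would extract the identity from:
openssl x509 -in /tmp/jane.crt -noout -subject
```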

Static token file: The API server reads bearer tokens from a file when given --token-auth-file=tokenfile flag in the command line.
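The token file is a simple CSV with at least three columns - token, user name, user UID - optionally followed by quoted group names, along these lines:

```csv
31ada4fd-adec-460c-809a-9e56ceb75269,jane,1,"admins,developers"
```

Note that changes to this file require an API server restart to take effect.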

Bootstrap tokens: These allow for the streamlined bootstrapping of new clusters; the PKE deployment process backed by Pipeline uses them as well.

Static password file: Basic authentication is enabled by passing the --basic-auth-file=authfile option to API server.

Service account tokens: A service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests (we will come back to these in more detail later).

Authenticating proxy: The API server can be configured to identify users from request header values, such as X-Remote-User.

This article uses LDAP based authentication as an example

OpenID Connect tokens

The simplest way to enable OAuth token based authentication in a Kubernetes cluster is to run the API server with a few special flags.
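For example, with Dex as the issuer, the relevant kube-apiserver flags look roughly like this (the issuer URL, client ID and CA path are placeholders):

```shell
kube-apiserver \
  --oidc-issuer-url=https://dex.example.org:32000/dex \
  --oidc-client-id=example-app \
  --oidc-ca-file=/etc/kubernetes/ssl/dex-ca.pem \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups \
  ...
```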


You can read more about OpenID Connect Tokens.

Webhook token authentication

The other preferred way to use OAuth authentication is webhook token authentication. As with OpenID Connect tokens, this also requires running the API server with special flags.
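Concretely, the API server is pointed at a webhook config file (the path here is a placeholder):

```shell
kube-apiserver \
  --authentication-token-webhook-config-file=/path/to/webhook-config.yaml \
  ...
```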


The config file provided for the API server is similar in structure to a Kubeconfig file used by client tools like kubectl, and contains all the details that allow the API server to process the users’ tokens.

# Kubernetes API version
apiVersion: v1
# kind of the API object
kind: Config
# clusters refers to the remote service.
clusters:
  - name: name-of-authn-service
    cluster:
      # CA for verifying the remote service.
      certificate-authority: /path/to/ca.pem
      # URL of remote service to query. Must use 'https'.
      server: https://authn-service/authenticate
# users refers to the API server's webhook configuration.
users:
  - name: name-of-api-server
    user:
      client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
      client-key: /path/to/key.pem          # key matching the cert
# kubeconfig files require a context. Provide one for the API server.
current-context: webhook
contexts:
- context:
    cluster: name-of-authn-service
    user: name-of-api-server
  name: webhook

If you are interested in details, check the official documentation.


Authentication by itself does not let you do anything; it just verifies that you are who you claim to be. After a successful authentication, the Kubernetes cluster also needs to validate that you are entitled to execute the action you are attempting. This is called authorization, or authz for short. There are four authorization modules in Kubernetes:

  • Node - Authorizes API requests made by kubelets
  • ABAC - Attribute-based access control (ABAC was the main authorization module before RBAC)
  • RBAC - Role-based access control
  • Webhook - HTTP callback

Webhook mode

If you would like to use OAuth provided JWT tokens for authorization, then the webhook module is a good choice. As usual, this can be configured in the API server by running it with certain flags.
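In practice that means enabling the Webhook mode and pointing the API server at a config file (the path is a placeholder):

```shell
kube-apiserver \
  --authorization-mode=Node,RBAC,Webhook \
  --authorization-webhook-config-file=/path/to/authz-webhook-config.yaml \
  ...
```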

# Kubernetes API version
apiVersion: v1
# kind of the API object
kind: Config
# clusters refers to the remote service.
clusters:
  - name: name-of-authz-service
    cluster:
      # CA for verifying the remote service.
      certificate-authority: /path/to/ca.pem
      # URL of remote service to query. Must use 'https'. May not include parameters.
      server: https://authz-service/authorize
# users refers to the API server's webhook configuration.
users:
  - name: name-of-api-server
    user:
      client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
      client-key: /path/to/key.pem          # key matching the cert
# kubeconfig files require a context. Provide one for the API server.
current-context: webhook
contexts:
- context:
    cluster: name-of-authz-service
    user: name-of-api-server
  name: webhook

Role-based access control

You can enable the RBAC authorization mode in a Kubernetes cluster, though it's usually enabled by default. With this module, K8s provides us with objects that form the basis of authorization decisions. These objects are stored in etcd just like other Kubernetes resources.


  • Role
  • RoleBinding
  • ClusterRole
  • ClusterRoleBinding

Role and ClusterRole

A role contains rules that represent a set of permissions. A role can be defined within a namespace with a Role, or cluster-wide with a ClusterRole.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-clusterrole
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list"]

RoleBinding and ClusterRoleBinding

A role binding grants the permissions defined in a role to a user or set of users. It holds a list of subjects (users, groups, or service accounts) and a reference to the role being granted. Permissions can be granted within a namespace with a RoleBinding, or cluster-wide with a ClusterRoleBinding.

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-clusterrole-binding
subjects:
- kind: Group
  name: example-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: example-clusterrole
  apiGroup: rbac.authorization.k8s.io

Subjects can be:

  • Users
  • Groups
  • ServiceAccounts

Users are human users, represented as strings. The group information is provided by the Authenticator modules. Groups, like users, are represented as strings. Groups have no format requirements, other than that the prefix system: is reserved. ServiceAccounts have usernames with the system:serviceaccount: prefix and belong to groups with the system:serviceaccounts: prefix.
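For illustration, here is how each subject kind appears in a binding's subjects list (all names here are hypothetical):

```yaml
subjects:
- kind: User
  name: jane@example.org             # a human user, as reported by the authenticator
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: developers                   # group names also come from the authenticator
  apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
  name: jwt-to-rbac                  # i.e. system:serviceaccount:default:jwt-to-rbac
  namespace: default
```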

Read more about authorization modules.

What about cloud provider managed Kubernetes?

Now that we have a fair understanding of Kubernetes security, let's switch back to the original problem: how can we authenticate and authorize across all cloud providers and any Kubernetes distribution?

The catch is that all of the options above require access to the API server and the ability to configure it. Damn it … right?

The Banzai Cloud Pipeline platform

We have always been committed to supporting Kubernetes and our container based application platform on all major providers, however, we are also committed to making portability between cloud vendors easy, seamless and automated.

Accordingly, this post will highlight a few important aspects of a multi-cloud approach we learned from our users, and the open source code we developed and made part of the Pipeline platform.

We’ve been trying to find a solution which works across all providers and still gives our enterprise customers the confidence to use their own LDAP or AD.

Note that we support other authentication providers such as GitHub, Google, GitLab, etc.

Within the Kubernetes cluster we use service account tokens for authentication, so the corresponding ServiceAccount has to be created before we can authenticate with its token in K8s.

Automatically create ServiceAccounts based on LDAP in a managed K8s

For authentication we use Dex with its LDAP connector. A user in LDAP has group memberships, and Dex issues a JWT token containing those memberships. Our open source JWT-to-RBAC project can create ServiceAccounts, ClusterRoles and ClusterRoleBindings based on JWT tokens. When we create a new ServiceAccount, K8s automatically generates a service account token - as discussed earlier - and JWT-to-RBAC retrieves it.

JWT-to-RBAC Flow


There are some prerequisites to kick this off for your own testing.

  • Configured Dex server which issues JWT tokens. If you want to issue tokens with Dex you have to configure it with the LDAP connector. You can use the Banzai Cloud Dex chart.
  • Configured LDAP server - you can use the openldap docker image
  • Authentication application which uses Dex as an OpenID connector.

Dex acts as a shim between a client app and the upstream identity provider. The client only needs to understand OpenID Connect to query Dex.
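For reference, a Dex LDAP connector configuration looks roughly like the following sketch (host names, DNs and the bind password are placeholders; see the Dex documentation for the full option list):

```yaml
connectors:
- type: ldap
  id: ldap
  name: "LDAP"
  config:
    host: ldap.example.org:636
    # Credentials Dex uses to search for users and groups.
    bindDN: cn=admin,dc=example,dc=org
    bindPW: admin-password
    userSearch:
      baseDN: ou=People,dc=example,dc=org
      filter: "(objectClass=person)"
      username: cn
      idAttr: DN
      emailAttr: mail
      nameAttr: cn
    groupSearch:
      baseDN: ou=Groups,dc=example,dc=org
      filter: "(objectClass=groupOfNames)"
      userAttr: DN
      groupAttr: member
      nameAttr: cn
```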

The whole process is broken down into two main parts:

  • Dex auth flow
  • jwt-to-rbac ServiceAccount creation flow

Dex authentication flow:

  1. User visits Authentication App.
  2. Authentication App redirects user to Dex with an OAuth2 request.
  3. Dex determines the user’s identity by looking up the configured upstream identity provider (in this case, LDAP).
  4. Dex redirects user to Authentication App with a signed code.
  5. Authentication App exchanges code with Dex for an access token.

jwt-to-rbac Flow:

  1. Authentication App has an ID token (JWT)
  2. POST ID token to jwt-to-rbac App
  3. jwt-to-rbac validates ID token with Dex
  4. jwt-to-rbac extracts username, groups and so on from the token
  5. jwt-to-rbac calls the API server to create ServiceAccount, ClusterRoles and ClusterRoleBindings
  6. jwt-to-rbac gets ServiceAccount token and sends it to Authentication App
  7. Authentication App sends back the service account token to User
  8. User authenticates on Kubernetes using the service account token

The access token issued by Dex has the following content:

{
  "iss": "http://dex/dex",
  "sub": "CiNjbj1qYW5lLG91PVBlb3BsZSxkYz1leGFtcGxlLGRjPW9yZxIEbGRhcA",
  "aud": "example-app",
  "exp": 1549661603,
  "iat": 1549575203,
  "at_hash": "_L5EkeNocRsG7iuUG-pPpQ",
  "email": "",
  "email_verified": true,
  "groups": [
    "admins",
    "developers"
  ],
  "name": "jane",
  "federated_claims": {
    "connector_id": "ldap",
    "user_id": "cn=jane,ou=People,dc=example,dc=org"
  }
}

After jwt-to-rbac extracts the information from the token, it creates a ServiceAccount and a ClusterRoleBinding using one of the default K8s ClusterRoles as roleRef, or generates one defined in the configuration if it doesn’t exist yet.
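Based on the token above, the generated resources would look something like the following sketch (resource names are illustrative, not verbatim jwt-to-rbac output):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jane
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jane-admins-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin          # the "admins" group maps to the default admin ClusterRole
subjects:
- kind: ServiceAccount
  name: jane
  namespace: default
```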

Default K8s ClusterRoles used by jwt-to-rbac

JWT-to-RBAC does not create a new ClusterRole in every case; for example, if a user is a member of the admin group, it doesn't create that ClusterRole, since K8s already has one by default.

  • cluster-admin - Allows super-user access to perform any action on any resource.
  • admin - Allows admin access, intended to be granted within a namespace using a RoleBinding.
  • edit - Allows read/write access to most objects in a namespace.
  • view - Allows read-only access to see most objects in a namespace.

jwt-to-rbac creates custom ClusterRoles defined in config

In most cases there are several different LDAP groups, so custom groups can be configured with custom rules.

groupName = "developers"
verbs = [
  "get",
  "list"
]
resources = [
  "deployments",
  "replicasets",
  "pods"
]
apiGroups = [
  "",
  "extensions",
  "apps"
]

So, to conclude on the open source JWT-to-RBAC project - follow these steps if you would like to try it, or check it out already in action by subscribing to our free developer beta.

1. Deploy jwt-to-rbac to Kubernetes

After cloning the GitHub repository you can compile the code and create a Docker image with a single command:

make docker

If you use docker-for-desktop or minikube, you’ll be able to easily deploy the solution to it locally with the newly built image.

kubectl create -f deploy/rbac.yaml
kubectl create -f deploy/configmap.yaml
kubectl create -f deploy/deployment.yaml
kubectl create -f deploy/service.yaml
# port-forward locally
kubectl port-forward svc/jwt-to-rbac 5555

Now you can communicate with the jwt-to-rbac app.

2. POST the access token issued by Dex to jwt-to-rbac API

curl --request POST \
  --url http://localhost:5555/rbac \
  --header 'Content-Type: application/json' \
  --data '{"token": "example.jwt.token"}'

# response:
{
    "Email": "",
    "Groups": [
        "admins",
        "developers"
    ],
    "FederatedClaimas": {
        "connector_id": "ldap",
        "user_id": "cn=jane,ou=People,dc=example,dc=org"
    }
}

The ServiceAccount, ClusterRoles (if the access token contains custom groups, as mentioned earlier) and ClusterRoleBindings are then created.

Listing the created K8s resources:

curl --request GET \
  --url http://localhost:5555/rbac \
  --header 'Content-Type: application/json'

# response:
{
    "sa_list": [ ... ],
    "crole_list": [ ... ],
    "crolebind_list": [ ... ]
}

3. GET the default K8s token of ServiceAccount

curl --request GET \
  --url http://localhost:5555/tokens/janedoe-example-com \
  --header 'Content-Type: application/json'

# response:
{
    ...
    "name": "janedoe-example-com-token-m4gbj",
    "data": {
        "ca.crt": "example-ca-cer-base64",
        "namespace": "ZGVmYXVsdA==",
        "token": "example-k8s-sa-token-base64"
    }
    ...
}


4. Generate a ServiceAccount token with TTL

curl --request POST \
  --url http://localhost:5555/tokens/janedoe-example-com \
  --header 'Content-Type: application/json' \
  --data '{"duration": "12h30m"}'

# response:
{
    ...
    "name": "janedoe-example-com-token-df3re",
    "data": {
        "ca.crt": "example-ca-cer-base64",
        "namespace": "ZGVmYXVsdA==",
        "token": "example-k8s-sa-token-with-ttl-base64"
    }
    ...
}

Now you have a base64 encoded service account token.

5. Accessing K8s with the ServiceAccount token

You can use the service account token from the command line:

kubectl --token $TOKEN_TEST --server $APISERVER get po

Or create kubectl context with it:

TOKEN=$(echo "example-k8s-sa-token-base64" | base64 -D) # on Linux, use 'base64 -d'
kubectl config set-credentials "janedoe-example-com" --token=$TOKEN
# with kubectl config get-clusters you can get cluster name
kubectl config set-context "janedoe-example-com-context" --cluster="clustername" --user="janedoe-example-com" --namespace=default
kubectl config use-context janedoe-example-com-context
kubectl get pod

As a final note - since we use Dex, an identity service that uses OpenID Connect to drive authentication for other apps, any other supported Dex connector can be used for authentication to Kubernetes.

About Pipeline

Banzai Cloud’s Pipeline provides a platform which allows enterprises to develop, deploy and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures—multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, etc.—are a tier zero feature of the Pipeline platform, which we strive to automate and enable for all enterprises.

If you’re interested in our technology and open source projects, follow us on GitHub, LinkedIn or Twitter:


