Author Sebastian Toader

Authenticating with Service Account Tokens on RBAC enabled GKE clusters


As of version 1.6, Kubernetes provides role-based access control (RBAC) so that administrators can set up fine-grained access to a variety of Kubernetes resources. A full explanation of why it makes sense to use RBAC is beyond the scope of this post but, in a nutshell, RBAC provides the level of control that most enterprises need to meet their security requirements within Kubernetes clusters.

Processes and human operators that assume the identity of a Kubernetes Service Account will authenticate with said account and gain its associated access rights. An administrator can grant access rights to a service account by using RBAC to control the Kubernetes resources it operates on, and the actions it can carry out inside said resources.

In-cluster processes

In-cluster processes are processes that run inside Pods. When interacting with the Kubernetes API Server, these processes use the service account specified in their pod definition for authentication. If no service account is specified, then they use the default account.
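For illustration, a pod selects the service account its processes authenticate with via the serviceAccountName field of its spec; the names and image below are placeholders, not taken from Pipeline:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                 # illustrative name
spec:
  serviceAccountName: my-service-account  # processes in this pod authenticate as this account
  containers:
    - name: app
      image: example/app:latest     # illustrative image
```

If serviceAccountName is omitted, the pod runs with the default service account of its namespace.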

External processes

These are processes, e.g. automation tools, that run outside of the Kubernetes cluster. In order to interact with the Kubernetes API server, they need to assume the identity of a service account through which they are granted the necessary access to Kubernetes resources. External processes can use a Kubernetes service account for authentication via service account tokens. Apart from this difference in authentication, external processes that use a service account are authorized in the same way as in-cluster processes.

Human operators

Human operators may execute operations on an RBAC-enabled GKE cluster using the Google Cloud Console or the Google Cloud CLI. It is, however, not unheard of for human operators to prefer a simple kubectl command that authenticates as a service account, rather than using their Google Cloud IAM identity. Just like with external processes, this kind of authentication can be achieved with service account tokens.

How does Pipeline do it?

Banzai Cloud’s Pipeline provisions RBAC-enabled GKE clusters for users, generates a service account with cluster admin privileges, and uses the account to configure and deploy those components that, together, make up the features provided by Pipeline — such as out-of-the-box monitoring, centralized logging, spot/preemptible scheduling, security scans and backups, just to name a few. It also generates a kubeconfig that human operators can use with kubectl for authentication via the same service account.

Pipeline requires a Google Cloud credential of the service account type (note: this is not a Kubernetes service account) in order to engage Google's API for GKE cluster CRUD operations. These credentials are stored securely in Vault.

Once the GKE cluster is created, Pipeline starts deploying various components, which interact with the Kubernetes API Server via the kubernetes/client-go library behind the scenes. This library requires Kubernetes credentials in order to interact with the Kubernetes API Server. But, at this point, all we have is a Google Cloud identity of the service account variety, which looks something like this:

 2  "type": "service_account",
 3  "project_id": "<your-google-cloudproject-id>",
 4  "private_key_id": "....",
 5  "private_key": "....",
 6  "client_email": "<some-name>@<your-google-cloudproject-id>",
 7  "client_id": "...",
 8  "auth_uri": "",
 9  "token_uri": "",
10  "auth_provider_x509_cert_url": "",
11  "client_x509_cert_url": "<some-name>%40<your-google-cloudproject-id>"

The question, now, is how Pipeline can pass this credential to the Kubernetes client-go library for authentication, when client-go expects a Kubernetes credential. The answer is to use the Kubernetes client GCP authenticator plugin.

When Pipeline uses the GCP authenticator, it requests a short-lived authentication token for the purpose of authenticating to the Kubernetes API Server of a GKE cluster.

ctx := context.Background()

googleCredentials, err := google.CredentialsFromJSON(ctx, googleServiceAccountJSON, "https://www.googleapis.com/auth/cloud-platform")
if err != nil {
    return nil, err
}

credentialsClient, err :=
    credentials.NewIamCredentialsClient(ctx, option.WithCredentials(googleCredentials))
if err != nil {
    return nil, err
}
defer credentialsClient.Close()

// requires Service Account Token Creator and Service Account User IAM roles
req := credentialspb.GenerateAccessTokenRequest{
    Name:     fmt.Sprintf("projects/-/serviceAccounts/%s", serviceAccountEmailOrId),
    Lifetime: &duration.Duration{
        Seconds: 600, // token expires after 10 mins
    },
    Scope: []string{
        "https://www.googleapis.com/auth/cloud-platform",
        "https://www.googleapis.com/auth/userinfo.email",
    },
}

tokenResp, err := credentialsClient.GenerateAccessToken(ctx, &req)
if err != nil {
    return nil, err
}

return tokenResp, nil

The resultant GenerateAccessTokenResponse has an AccessToken field, which contains an OAuth 2.0 access token that we can use for authentication with the Kubernetes API server. Note that the requested token in the above snippet will expire after 10 minutes.

// requires Service Account Token Creator and Service Account User IAM roles
req := credentialspb.GenerateAccessTokenRequest{
    Name:     fmt.Sprintf("projects/-/serviceAccounts/%s", serviceAccountEmailOrId),
    Lifetime: &duration.Duration{
        Seconds: 600, // token expires after 10 mins
    },
    Scope: []string{
        "https://www.googleapis.com/auth/cloud-platform",
        "https://www.googleapis.com/auth/userinfo.email",
    },
}

To make the above-mentioned Google API calls, the Google service account requires the Service Account Token Creator and Service Account User roles.

Also, the IAM Service Account Credentials API service must be enabled for the Google project.
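As an illustrative sketch, these prerequisites can be set up with gcloud. The project ID and service account emails below are placeholders, and the --member must be whichever identity actually calls the credentials API in your setup:

```shell
# enable the IAM Service Account Credentials API for the project
gcloud services enable iamcredentials.googleapis.com --project <your-project-id>

# grant the Service Account Token Creator role on the target service account
gcloud iam service-accounts add-iam-policy-binding \
    <sa-name>@<your-project-id>.iam.gserviceaccount.com \
    --member "serviceAccount:<caller-sa>@<your-project-id>.iam.gserviceaccount.com" \
    --role roles/iam.serviceAccountTokenCreator

# grant the Service Account User role
gcloud iam service-accounts add-iam-policy-binding \
    <sa-name>@<your-project-id>.iam.gserviceaccount.com \
    --member "serviceAccount:<caller-sa>@<your-project-id>.iam.gserviceaccount.com" \
    --role roles/iam.serviceAccountUser
```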

We can now construct the Kubernetes client configuration, using our OAuth 2.0 token:

tokenExpiry := time.Unix(tokenResp.GetExpireTime().GetSeconds(), int64(tokenResp.GetExpireTime().GetNanos()))

// kubernetes config using GCP authenticator
config := &rest.Config{
    Host: host, // API Server address
    TLSClientConfig: rest.TLSClientConfig{
        CAData: capem, // API Server CA cert
    },
    AuthProvider: &api.AuthProviderConfig{
        Name: "gcp",
        Config: map[string]string{
            "access-token": tokenResp.GetAccessToken(),
            "expiry":       tokenExpiry.Format(time.RFC3339Nano),
        },
    },
}

clientset, err := kubernetes.NewForConfig(config)
if err != nil {
    return err
}

Note the AuthProviderConfig literal here, through which we specify the type of authenticator to use (gcp) and its access-token and expiry configuration fields.

With this Kubernetes client config, Pipeline can interact with the API Server and create a Kubernetes service account:

serviceAccount := &v1.ServiceAccount{
    ObjectMeta: metav1.ObjectMeta{
        Name: "cluster-admin-sa",
    },
}

_, err := clientset.CoreV1().ServiceAccounts("default").Create(serviceAccount)
if err != nil && !errors.IsAlreadyExists(err) {
    return err
}

After that, we create a cluster role that grants cluster admin privileges:

clusterAdmin := "cluster-admin"
adminRole := &v1beta1.ClusterRole{
    ObjectMeta: metav1.ObjectMeta{
        Name: clusterAdmin,
    },
    Rules: []v1beta1.PolicyRule{
        {
            APIGroups: []string{"*"},
            Resources: []string{"*"},
            Verbs:     []string{"*"},
        },
        {
            NonResourceURLs: []string{"*"},
            Verbs:           []string{"*"},
        },
    },
}

clusterAdminRole, err := clientset.RbacV1beta1().ClusterRoles().Get(clusterAdmin, metav1.GetOptions{})
if err != nil {
    clusterAdminRole, err = clientset.RbacV1beta1().ClusterRoles().Create(adminRole)
    if err != nil {
        return err
    }
}

Bind the cluster role to a service account:

clusterRoleBinding := &v1beta1.ClusterRoleBinding{
    ObjectMeta: metav1.ObjectMeta{
        Name: "cluster-admin-sa-clusterRoleBinding",
    },
    Subjects: []v1beta1.Subject{
        {
            Kind:      "ServiceAccount",
            Name:      serviceAccount.Name,
            Namespace: "default",
            APIGroup:  v1.GroupName,
        },
    },
    RoleRef: v1beta1.RoleRef{
        Kind:     "ClusterRole",
        Name:     clusterAdminRole.Name,
        APIGroup: v1beta1.GroupName,
    },
}

if _, err = clientset.RbacV1beta1().ClusterRoleBindings().Create(clusterRoleBinding); err != nil && !k8sErrors.IsAlreadyExists(err) {
    return err
}

Now that a Kubernetes service account has been created with the cluster admin role, Pipeline will generate a Kubernetes config JSON that users can download to use with kubectl to authenticate as a cluster admin. For this, Pipeline has to pull the access token of the Kubernetes service account.

Kubernetes creates long-lived access tokens for service accounts, and stores them as Kubernetes secrets. The names of these secrets can be found in the ServiceAccount resources.

if serviceAccount, err = clientset.CoreV1().ServiceAccounts("default").Get("cluster-admin-sa", metav1.GetOptions{}); err != nil {
    return "", err
}

if len(serviceAccount.Secrets) > 0 {
    secret := serviceAccount.Secrets[0]
    secretObj, err := clientset.CoreV1().Secrets("default").Get(secret.Name, metav1.GetOptions{})
    if err != nil {
        return "", err
    }
    if token, ok := secretObj.Data["token"]; ok {
        return string(token), nil
    }
}

The generated Kubernetes client config:

apiVersion: v1
clusters:
- cluster:
    ...
contexts:
- context:
    ...
users:
- name: <user-name>
  user:
    token: <the value taken from the 'token' field of the secret of the service account>
kind: Config

Additionally, less privileged Kubernetes service accounts can be created in a similar way in order to grant limited access to GKE clusters.
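As a sketch of such a less privileged setup, a namespaced Role and RoleBinding could grant a service account read-only access to pods in a single namespace. All names below are illustrative, not taken from Pipeline:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: pod-reader            # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: pod-reader-binding    # illustrative name
  namespace: default
subjects:
- kind: ServiceAccount
  name: limited-sa            # illustrative service account
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A token pulled from this service account's secret would then only authorize reading pods in the default namespace, rather than cluster admin operations.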

The following diagram shows the flow described above:

[Diagram: Pipeline GKE RBAC flow]

About Pipeline

Banzai Cloud’s Pipeline provides a platform which allows enterprises to develop, deploy and scale container-based applications in multi-cloud environments. Pipeline leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures — multiple authentication backends, fine-grained authorization, dynamic secret management, autoscaling, backups and restores, vulnerability scans, static code analysis, CI/CD, etc. — are a tier zero feature of the Pipeline platform, which we strive to automate and enable for all enterprises.

If you’re interested in our technology and open source projects, follow us on GitHub, LinkedIn or Twitter:



