
Enterprise-grade security for Kubernetes - inject secrets directly into pods from Vault

A strong focus on security has always been a key part of the Banzai Cloud Pipeline platform. We incorporated Vault into our architecture early in the design process, and developed a number of supporting components so it could be used easily on Kubernetes. We love what Vault enables us to do but, as with many things security-related, strengthening one part of a system exposed a weakness elsewhere. For us, that weakness was K8s secrets, which are the way applications usually consume secrets and credentials on Kubernetes. Any secret that is securely stored in Vault and then unsealed for consumption will eventually end up as a K8s secret, with much less protection than we’d like. K8s secrets use base64 encoding which, while perhaps better than nothing, does not satisfy our standards and probably doesn’t satisfy the standards of most enterprise clients. As a result, we’ve developed a solution that bypasses the K8s secret mechanism and injects these secrets directly into pods. In last week’s blog post about Kubernetes mutating webhooks we hinted that we had developed and open sourced a solution to exactly this problem.

Vault -> Kubernetes secrets -> Pod

If you are familiar with K8s secrets, you know that they are stored in etcd. When we say that we intend to bypass the K8s secret mechanism, we mean not touching etcd at all. The problem with etcd is that, when data is encrypted at rest, it is encrypted with a global key (see the relevant documentation). That can be a problem in a multi-tenant cluster, where independent and unrelated users could potentially gain access to the secrets of others. Also, if you already have a security team operating a certified Vault installation, they’re probably not going to be happy about placing an unencrypted secret in an intermediary location.

The Banzai Cloud Pipeline platform already uses a number of Kubernetes webhooks to provide a variety of advanced features (security scans, spot instance scheduling, etc.) and we thought that injecting secrets directly into Kubernetes containers from Vault would be a good way of overcoming the base64 limitation.

Kubernetes API requests

Let’s dive into how this works.

Kubernetes mutating webhook for injecting secrets

Our mutating admission webhook injects an executable into containers (in a non-intrusive way) inside Deployments/StatefulSets which can then request secrets from Vault through special environment variable definitions. This project was inspired by a number of other projects (e.g.: channable/vaultenv, hashicorp/envconsul), but is a daemonless solution.

First, the Kubernetes webhook checks if a container has environment variables with values that correspond to a specific schema. Then it reads the values for those variables directly from Vault at start-up:

env:
- name: AWS_SECRET_ACCESS_KEY
  value: "vault:secret/data/accounts/aws#AWS_SECRET_ACCESS_KEY"
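The reference format has two parts: everything between the `vault:` prefix and the `#` is a secret path, and the fragment after `#` selects a key within that secret. A minimal Go sketch of parsing it (the helper name is ours for illustration, not the actual vault-env source):

```go
package main

import (
	"fmt"
	"strings"
)

// parseVaultRef splits a "vault:<path>#<key>" reference into the secret
// path and the key to pick out of the secret's data.
// Hypothetical helper illustrating the schema only.
func parseVaultRef(ref string) (path, key string, ok bool) {
	if !strings.HasPrefix(ref, "vault:") {
		return "", "", false // not a Vault reference, leave the value untouched
	}
	rest := strings.TrimPrefix(ref, "vault:")
	parts := strings.SplitN(rest, "#", 2)
	if len(parts) != 2 {
		return "", "", false
	}
	return parts[0], parts[1], true
}

func main() {
	path, key, ok := parseVaultRef("vault:secret/data/accounts/aws#AWS_SECRET_ACCESS_KEY")
	fmt.Println(path, key, ok)
	// → secret/data/accounts/aws AWS_SECRET_ACCESS_KEY true
}
```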

After that, an init-container is injected into the Pod, which copies a small binary called vault-env onto an in-memory volume. That volume is mounted into all containers that have the appropriate environment variable definitions.

The init-container also changes the command of the container to run vault-env, instead of running the application directly. vault-env starts up, connects to Vault (using the Kubernetes Auth method), checks that the environment variables have a reference to a value stored in Vault (vault:secret/....) and replaces that with a corresponding value from Vault’s Secret backend. Afterward, vault-env executes the original process (with syscall.Exec()), which uses the secret that was originally stored in Vault.
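The replace-then-exec flow can be sketched as follows; the Vault read is stubbed out behind a function parameter, since the real vault-env uses the official Go Vault API client and finishes by calling `syscall.Exec()` (function names here are ours, not the actual source):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveEnv replaces any "vault:<path>#<key>" values in env with the
// secret returned by fetch. fetch is a stand-in for a real Vault read.
func resolveEnv(env []string, fetch func(path, key string) (string, error)) ([]string, error) {
	out := make([]string, 0, len(env))
	for _, kv := range env {
		name, value, found := strings.Cut(kv, "=")
		if !found || !strings.HasPrefix(value, "vault:") {
			out = append(out, kv) // ordinary variable, pass through unchanged
			continue
		}
		ref := strings.TrimPrefix(value, "vault:")
		path, key, _ := strings.Cut(ref, "#")
		secret, err := fetch(path, key)
		if err != nil {
			return nil, err
		}
		out = append(out, name+"="+secret)
	}
	return out, nil
}

func main() {
	fake := func(path, key string) (string, error) { return "s3cr3t", nil }
	env, _ := resolveEnv([]string{
		"HOME=/root",
		"AWS_SECRET_ACCESS_KEY=vault:secret/data/accounts/aws#AWS_SECRET_ACCESS_KEY",
	}, fake)
	fmt.Println(env[1])
	// With the environment resolved, vault-env would then call
	// syscall.Exec(binary, args, env) to start the original process.
}
```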

Using this solution prevents Secrets stored in Vault from landing in Kubernetes Secrets (and in etcd).

vault-env was designed to work on Kubernetes, but there’s nothing stopping it from being used outside of Kubernetes as well. It can be configured with the standard Vault client’s environment variables, since there is a standard Go Vault client underneath.

Currently, the Kubernetes Service Account-based Vault authentication mechanism is used by vault-env, which requests a Vault token in return for the Service Account of the container it’s being injected into. But our implementation is going to change in order to allow the use of the Vault Agent’s Auto-Auth feature in the future. This will allow users to request tokens in init-containers with all the authentication mechanisms supported by Vault Agent, so they won’t be handcuffed to the Kubernetes Service Account-based method.
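For reference, Vault's Kubernetes auth method is a login call (`POST /v1/auth/kubernetes/login`) whose body carries the pod's Service Account JWT and the Vault role to request. A minimal Go sketch of building that request body (the helper name is ours; the real client handles the full exchange):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// kubernetesLoginBody builds the JSON body for Vault's Kubernetes auth
// method: the pod's Service Account JWT plus the Vault role to request.
func kubernetesLoginBody(jwt, role string) ([]byte, error) {
	return json.Marshal(map[string]string{"jwt": jwt, "role": role})
}

func main() {
	// Inside a pod, Kubernetes mounts the Service Account JWT at this
	// well-known path.
	jwt, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		jwt = []byte("demo-jwt") // outside a cluster: use a placeholder
	}
	body, _ := kubernetesLoginBody(string(jwt), "default")
	fmt.Println(string(body))
}
```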

Why is this more secure than using Kubernetes secrets or using any other custom sidecar container?

Our solution is particularly lightweight and uses only existing Kubernetes constructs like annotations and environment variables. No confidential data ever persists on disk - not even temporarily - and none lands in etcd. All secrets are stored in memory, and are visible only to the process that requested them. To make this solution even more robust, you can disable kubectl exec on running containers; then no one can read the injected environment variables from a running process.

There is no persistent connection to Vault either, and the Vault token used to resolve the environment variables is flushed from memory before the application starts, to minimize the attack surface.
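Flushing the token amounts to overwriting the buffer that held it before the original process is exec'd. A simplified Go sketch (illustrative only - in Go, the garbage collector and string conversions can keep extra copies, so real code has to be careful about how the token is handled):

```go
package main

import "fmt"

// wipe overwrites a secret held in a byte slice so the Vault token does
// not linger in memory after it is no longer needed.
func wipe(b []byte) {
	for i := range b {
		b[i] = 0
	}
}

func main() {
	token := []byte("s.example-vault-token") // hypothetical token value
	// ... use token to read the secrets from Vault ...
	wipe(token) // flush before exec'ing the original process
	fmt.Println(token[0] == 0)
	// → true
}
```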

Current limitations:

  • Only version 2 of the Vault KV secrets engine is currently supported.
  • The command of the container has to be defined explicitly in the resource definition; the image’s default ENTRYPOINT and CMD will not work (work in progress).

Complete example

This complete example will guide you through setting up a fully functional Vault installation with the Banzai Cloud Vault operator, and help you to create an example deployment that will be mutated by the webhook in order for the environment variables to be injected:

# Check out the bank-vaults project
git clone https://github.com/banzaicloud/bank-vaults.git
cd bank-vaults

# Install the vault-operator and create a Vault instance
# with it, which has the Kubernetes auth method configured
kubectl apply -f operator/deploy/rbac.yaml
kubectl apply -f operator/deploy/operator.yaml
kubectl apply -f operator/deploy/cr.yaml

# Now you have a fully functional Vault installation on top of Kubernetes,
# orchestrated by the `banzaicloud/vault-operator` and `banzaicloud/bank-vaults`.

# Now install the mutating webhook with Helm
helm init
helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
helm upgrade --install wmwh banzaicloud-stable/vault-secrets-webhook

# Set the Vault token from the Kubernetes secret
# (for demonstration purposes only)
export VAULT_TOKEN=$(kubectl get secrets vault-unseal-keys -o jsonpath={.data.vault-root} | base64 -D)

# Tell the CLI that the Vault cert is signed by an unknown CA
export VAULT_SKIP_VERIFY=true

# Tell the CLI where Vault is listening
export VAULT_ADDR=https://127.0.0.1:8200

# Forward the TCP connection from your Vault pod to localhost (in the background)
kubectl port-forward vault-0 8200 &

# Write a secret into Vault, which will be injected as an environment variable
vault kv put secret/accounts/aws AWS_SECRET_ACCESS_KEY=s3cr3t

# Apply the Deployment with special environment variables;
# this will be mutated by the webhook
kubectl apply -f deploy/test-deployment.yaml

This deployment will be mutated by the webhook, since it has at least one environment variable that has a value that is a reference to a path in Vault. Here’s what the original deployment looks like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-secrets
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-secrets
  template:
    metadata:
      labels:
        app: hello-secrets
      annotations:
        vault.security.banzaicloud.io/vault-addr: "https://vault:8200"
        vault.security.banzaicloud.io/vault-role: "default"
        vault.security.banzaicloud.io/vault-skip-verify: "true"
    spec:
      serviceAccountName: default
      containers:
      - name: alpine
        image: alpine
        command: ["sh", "-c", "echo $AWS_SECRET_ACCESS_KEY && echo going to sleep... && sleep 10000"]
        env:
        - name: AWS_SECRET_ACCESS_KEY
          value: "vault:secret/data/accounts/aws#AWS_SECRET_ACCESS_KEY"

It’s going to produce Pods like this (only the relevant parts are shown; Pods are mutated directly):

apiVersion: v1
kind: Pod
metadata:
  name: hello-secrets-575554499f-26894
  labels:
    app: hello-secrets
  annotations:
    vault.security.banzaicloud.io/vault-addr: "https://vault:8200"
    vault.security.banzaicloud.io/vault-role: "default"
    vault.security.banzaicloud.io/vault-skip-verify: "true"
spec:
  initContainers:
  - name: copy-vault-env
    command:
    - sh
    - -c
    - cp /usr/local/bin/vault-env /vault/
    image: banzaicloud/vault-env:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /vault/
      name: vault-env
  containers:
  - name: alpine
    command:
    - /vault/vault-env
    args:
    - sh
    - -c
    - echo $AWS_SECRET_ACCESS_KEY && echo going to sleep... && sleep 10000
    image: alpine
    imagePullPolicy: Always
    env:
    - name: AWS_SECRET_ACCESS_KEY
      value: vault:secret/data/accounts/aws#AWS_SECRET_ACCESS_KEY
    - name: VAULT_ADDR
      value: https://vault:8200
    - name: VAULT_SKIP_VERIFY
      value: "true"
    volumeMounts:
    - mountPath: /vault/
      name: vault-env
  volumes:
  - name: vault-env
    emptyDir:
      medium: Memory

As you can see, none of the original environment variables are touched in the definition, and the sensitive value of the AWS_SECRET_ACCESS_KEY variable is only visible inside the alpine container.

Extensions in the works

Currently, vault-env supports reading values from the KV secrets engine, but we’re planning to add support for dynamic secrets as well - database URLs with temporary usernames and passwords for batch or scheduled jobs, for example.

Another extension we’re working on is templating (transforming and combining secret values) based on Go and Sprig templates. Make sure you check out Bank-Vaults - the Vault Swiss army knife and operator for Kubernetes - and give us a GitHub star if you think we deserve it!
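To give a feel for what templating secret values could look like, here is a minimal sketch using only Go's standard `text/template` (Sprig adds extra helper functions on top of this); the function name and template are hypothetical, not the planned syntax:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderSecret combines already-resolved secret values into one derived
// value, e.g. a database URL, using Go's text/template.
func renderSecret(tmpl string, data map[string]string) (string, error) {
	t, err := template.New("secret").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	url, _ := renderSecret("postgres://{{.User}}:{{.Pass}}@db:5432/app",
		map[string]string{"User": "alice", "Pass": "s3cr3t"})
	fmt.Println(url)
	// → postgres://alice:s3cr3t@db:5432/app
}
```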

About Pipeline

Banzai Cloud’s Pipeline provides a platform which allows enterprises to develop, deploy and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures—multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, etc.—are a tier zero feature of the Pipeline platform, which we strive to automate and enable for all enterprises.

If you’re interested in our technology and open source projects, follow us on GitHub, LinkedIn or Twitter.

