Banzai Cloud Pipeline helps you use your infrastructure securely by scanning the images used by your workloads for vulnerabilities. When you create a cluster with the security functions enabled, Pipeline deploys the resources necessary to enforce your active security policy — for example, to reject the creation of vulnerable pods. You can choose a predefined policy or compose your own.

The security scan functionality provided by Pipeline relies on the Anchore Image Validator, an admission server that registers itself in the Kubernetes cluster as a validating webhook, so it validates every Pod deployed to the cluster. The admission server inspects the images defined in the PodSpec using the Anchore Engine endpoint that runs as part of Banzai Cloud Pipeline. Based on the response, the admission hook decides whether to accept or reject the deployment.
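The registration itself is a standard Kubernetes ValidatingWebhookConfiguration. A minimal sketch is shown below; the names, namespace, service path, and CA bundle placeholder are illustrative, not the actual values Pipeline generates:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-validator               # illustrative name
webhooks:
  - name: image.validator.example     # illustrative webhook name
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]           # validate every Pod created in the cluster
    clientConfig:
      service:
        namespace: pipeline-system    # illustrative namespace
        name: anchore-image-validator
        path: /imagecheck             # illustrative path
      caBundle: <base64-encoded CA certificate>
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail               # reject pods if the webhook is unreachable
```

With failurePolicy set to Fail, an unreachable admission server blocks pod creation rather than silently letting unscanned images through.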

Key aspects of container image vulnerability scans

  • Every image should be scanned, no matter where it comes from (e.g. a deployment, an operator, or a CI/CD pipeline)
  • It should be possible to set up policies with certain rules to allow or reject a pod
  • These policies should be associated with clusters
  • If the policy check rejects an image, creation of the pod should be blocked
  • There should be an easy way to whitelist a Helm deployment
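The decision the admission hook makes can be illustrated with a simplified sketch. This is not Pipeline's actual implementation; the function, the severity scale, and the per-release whitelist shape are invented for this example:

```python
from dataclasses import dataclass

# Severities ordered from least to most serious; list position is used for comparison.
SEVERITIES = ["negligible", "low", "medium", "high", "critical"]

@dataclass
class ScanResult:
    image: str
    max_severity: str  # highest CVE severity found in the image

def admit(release: str, results: list, reject_at: str, whitelist: set) -> str:
    """Allow the pod unless any of its images reaches the policy's severity
    threshold, except when the release is explicitly whitelisted."""
    if release in whitelist:
        return "allowed"
    threshold = SEVERITIES.index(reject_at)
    for r in results:
        if SEVERITIES.index(r.max_severity) >= threshold:
            return "rejected"
    return "allowed"
```

For example, with a "Reject High" style policy, an image carrying a critical CVE is rejected unless its release has been whitelisted:

```python
admit("busybox1", [ScanResult("busybox", "critical")], "high", set())          # 'rejected'
admit("busybox1", [ScanResult("busybox", "critical")], "high", {"busybox1"})   # 'allowed'
```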

How it works

Pipeline — behind the scenes — automates the following steps:

  • Generate a technical user in Anchore for each cluster created
  • Save the generated credentials to Vault — Pipeline’s main secret store
  • Set up the predefined policy bundles
  • Deploy the Validating Admission Webhook configured with the previously created credentials

Pipeline provides a RESTful API for the management of policies and the inspection of scan results.

Predefined policy bundles

To simplify bootstrapping, we have predefined basic policy bundles for Anchore:

  • Allow all The most permissive policy. You can deploy anything, but you receive feedback about all the deployed images
  • Reject Critical This policy prevents deploying containers with critical severity CVEs
  • Reject High This policy prevents deploying containers with high severity CVEs
  • Block root This policy prevents deploying containers with apps running with root privileges
  • Deny all The most restrictive policy. Only explicitly whitelisted releases are accepted
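These bundles map to Anchore policy bundles under the hood. As a sketch, a rule that stops images containing vulnerabilities of high or worse severity can be expressed with Anchore's vulnerabilities gate; the IDs and names below are illustrative, not the bundles Pipeline actually ships:

```json
{
  "id": "reject-high-bundle",
  "name": "Reject High",
  "version": "1_0",
  "policies": [
    {
      "id": "reject-high-policy",
      "name": "Reject high severity CVEs",
      "version": "1_0",
      "rules": [
        {
          "id": "rule-1",
          "gate": "vulnerabilities",
          "trigger": "package",
          "action": "STOP",
          "params": [
            { "name": "package_type", "value": "all" },
            { "name": "severity_comparison", "value": ">=" },
            { "name": "severity", "value": "high" }
          ]
        }
      ]
    }
  ],
  "whitelists": [],
  "mappings": []
}
```

A STOP action fails the policy evaluation, which is what the admission hook translates into rejecting the pod.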

Example: creating a pod while the deny all policy is active

$ banzai cluster shell --cluster-name image-scan
[image-scan]$ kubectl run --generator=run-pod/v1 busybox1 --image=busybox -- sleep 3600
Error from server: admission webhook "" denied the request: Image failed policy check: busybox

Whitelisting deployments

Our approach to filtering is based on Helm deployments. However, maintaining whitelists at the deployment level in terms of CVEs or image names is simply not feasible. To manage whitelisted deployments we use a custom resource definition, so the admission hook accepts deployments that match any whitelist element, regardless of the scan result.

Note: All resources included in a Helm Deployment must have the release-name label.
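For example, a pod that belongs to the busybox1 release would carry the release name as a label. The exact label key depends on how the chart sets it; release is used here as a common convention:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    release: busybox1   # the release-name label the admission hook matches on
spec:
  containers:
    - name: busybox
      image: busybox
```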

The CRD structure should include the following data:

  • name Name of the whitelisted release
  • creator The Pipeline user who created the rule
  • reason Reason for the whitelisting
  • regexp Optional regular expression for matching release names

Example: whitelist an image

To whitelist the image, you can submit a WhiteListItem resource to the cluster, or alternatively, use the API or the web UI.

$ banzai cluster shell --cluster-name=image-scan -- kubectl apply -f - << EOF
apiVersion: security.banzaicloud.com/v1alpha1
kind: WhiteListItem
metadata:
  name: busybox1
spec:
  reason: testing
  creator: pbalogh-sa
EOF

You will see that the image is no longer denied:

$ kubectl run --generator=run-pod/v1 busybox1 --image=busybox -- sleep 3600
$ kubectl get pods
busybox1   1/1     Running   0          4s

Scan Events (Audit logs)

Finding the result of an admission hook decision can be troublesome, so we introduced the Audit custom resource, which makes it easy to track the result of each scan. Instead of searching in events, you can easily filter these resources with kubectl. The CRD structure includes the following data:

  • releaseName Scanned release
  • resource Scanned resource (Pod)
  • image Scanned images (in Pod)
  • result Scan results (per image)
  • action Admission action (allow, reject)

$ kubectl describe audits busybox1
Name:         busybox1
Labels:       fakerelease=true
Annotations:  <none>
API Version:
Kind:         Audit
Metadata:
  Creation Timestamp:  2019-09-16T08:25:39Z
  Generation:          3
  Resource Version:    5670
  Self Link:           /apis/
  UID:                 9093b50f-d85b-11e9-8f41-42010a8a01a2
Spec:
  Action:  allowed
  Images:
    Image Digest:  sha256:dd97a3fe6d721c5cf03abac0f50e2848dc583f7c4e41bf39102ceb42edfd1808
    Image Name:    busybox
    Image Tag:     latest
    Last Updated:  2019-09-16T08:35:24Z
  Release Name:    busybox1
  Resource:        Pod
  Result:
    Image failed policy check: busybox
Events:   <none>

During image scans, the admission server records the results in Audit resources and sets their ownerReferences to the scanned Pod's parent. This provides a compact overview of the resources running on the cluster, and because the audit records are bound to Kubernetes resources, the cluster can clean them up when the original resource (the pod) is no longer present.
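Setting the owner reference is plain Kubernetes garbage-collection wiring. Conceptually, the Audit resource carries metadata like the following sketch; the parent kind, name, and UID are illustrative, and the API group is assumed to match the WhiteListItem example above:

```yaml
apiVersion: security.banzaicloud.com/v1alpha1
kind: Audit
metadata:
  name: busybox1
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet              # the scanned Pod's parent (illustrative)
      name: busybox1-6d8c99d7b5     # illustrative parent name
      uid: <UID of the parent resource>
```

When the owner is deleted, the Kubernetes garbage collector removes the Audit resource along with it, so stale scan records do not accumulate.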