
Toader Sebastian

Mon, Jun 11, 2018

Vertical pod autoscaler

At Banzai Cloud we provision all kinds of applications to Kubernetes, and we try to autoscale these clusters and/or properly size the applications' resource needs. As promised in an earlier blog post, How to size correctly containers for Java 10 applications, we'll share our findings on the Vertical Pod Autoscaler (VPA) used with Java 10 applications.

VPA automatically sets resource requests on the containers of a pod based on historical usage, thus ensuring that pods are scheduled onto nodes where the appropriate amount of resources is available for each pod.

Kubernetes supports three different kinds of autoscalers: cluster, horizontal, and vertical. This post is part of our autoscaling series:
Autoscaling Kubernetes clusters
Vertical pod autoscaler

For an overview of the autoscaling flow please check this (static) diagram. For further information and a dynamic version of the vertical autoscaling flow read on.

Vertical Pod Autoscaler

Prerequisites for using VPA

  • VPA requires MutatingAdmissionWebhooks to be enabled on the Kubernetes cluster. This can be verified quickly by:

        $ kubectl api-versions | grep admissionregistration

    As of Kubernetes version 1.9, MutatingAdmissionWebhooks is enabled by default. If your cluster doesn't have it enabled, follow these instructions.

  • Install the components that make up VPA by following the installation guide. If the VPA installation succeeded you should see something like:

        $ kubectl get po -n kube-system
        NAME                                         READY     STATUS    RESTARTS   AGE
        vpa-admission-controller-7b449b69c-rrs5p     1/1       Running   0          1m
        vpa-recommender-bf6577cdd-zm7rf              1/1       Running   0          1m
        vpa-updater-5dd9968676-gm28g                 1/1       Running   0          1m
        $ kubectl get crd
        NAME                                                      AGE
        verticalpodautoscalercheckpoints.poc.autoscaling.k8s.io   1m
        verticalpodautoscalers.poc.autoscaling.k8s.io             1m

    As stated in the documentation VPA pulls resource usage metrics related to pods and containers from Prometheus. VPA Recommender is the component that gathers metrics from Prometheus and comes up with recommendations for the watched pods. In the current implementation VPA Recommender expects Prometheus Server to be reachable at a specific location: http://prometheus.monitoring.svc. For details see the Dockerfile of VPA Recommender. Since this is work in progress I expect this will be made configurable in the future.

Note: we do effortless monitoring of Java applications deployed to Kubernetes without code changes

As we can see, Prometheus Server must be deployed into the monitoring namespace, and there must be a Kubernetes service named prometheus pointing to it.

    $ helm init -c
    $ helm repo list
      NAME                    URL
    $ helm install --name prometheus --namespace monitoring stable/prometheus
    $ kubectl create -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: prometheus
        chart: prometheus-6.6.1
        component: server
        heritage: Tiller
        release: prometheus
      name: prometheus
      namespace: monitoring
    spec:
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: 9090
      selector:
        app: prometheus
        component: server
        release: prometheus
      sessionAffinity: None
      type: ClusterIP
    EOF

Configuring VPA

Once VPA is up and running we need to configure it. A VPA configuration contains the following settings:

  1. a label selector, through which it identifies the pods it should handle
  2. an optional update policy, which configures how VPA applies resource changes to pods; if not specified, the default Auto is used
  3. an optional resource policy, which configures how the recommender computes the recommended resources for the pods; if not specified, the default is used
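For reference, the three settings above map onto a VPA object roughly as follows. This is a sketch, not an excerpt from the post's cluster: the apiVersion reflects the PoC v1alpha1 API available at the time, and all names and resource values are illustrative:

```yaml
apiVersion: poc.autoscaling.k8s.io/v1alpha1   # assumed PoC API group at time of writing
kind: VerticalPodAutoscaler
metadata:
  name: example-vpa
spec:
  selector:             # 1. label selector identifying the handled pods
    matchLabels:
      app: example
  updatePolicy:         # 2. optional update policy (Auto is the default)
    updateMode: "Auto"
  resourcePolicy:       # 3. optional resource policy bounding recommendations
    containerPolicies:
    - name: example-container
      minAllowed:
        cpu: 100m
        memory: 128Mi
      maxAllowed:
        cpu: "1"
        memory: 1Gi
```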

Let’s see this in action

For a dynamic overview of how the vertical pod autoscaler works, please check the diagram below:

Vertical Pod Autoscaler

We’re going to use the same test application as in How to size correctly containers for Java 10 applications. We deploy the test application using:

$ kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dyn-class-gen-deployment
  labels:
    app: dyn-class-gen
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dyn-class-gen
  template:
    metadata:
      labels:
        app: dyn-class-gen
    spec:
      containers:
      - name: dyn-class-gen-container
        image: banzaicloud/dynclassgen:1.0
        env:
        - name: DYN_CLASS_COUNT
          value: "256"
        - name: MEM_USAGE_PER_OBJECT_MB
          value: "1"
        resources:
          requests:
            memory: "64Mi"
            cpu: 1
          limits:
            memory: "1Gi"
            cpu: 2
EOF

deployment "dyn-class-gen-deployment" created

Since the container's upper memory limit is set to 1GB, the max heap size of the application is automatically set to 1GB / 4 = 256MB. A max heap of 256MB is clearly not enough, as the application tries to consume 256 * 1MB of heap space and also needs room for the internal objects of loaded libraries, etc. Thus we expect the application to quit with java.lang.OutOfMemoryError.
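The heap arithmetic can be verified in a shell; the 1/4 ratio corresponds to the JVM's default container ergonomics (a default MaxRAMPercentage of 25 in Java 10):

```shell
# Java 10 ergonomics: without an explicit -Xmx, the default max heap is
# roughly 1/4 of the container memory limit.
limit_mb=1024                 # the 1Gi memory limit from the Deployment
heap_mb=$(( limit_mb / 4 ))
echo "${heap_mb} MB"          # prints "256 MB"
```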

$ kubectl get po
NAME                                        READY     STATUS    RESTARTS   AGE
dyn-class-gen-deployment-5c75c8c555-gzcdq   0/1       Error     2          24s
$ kubectl logs dyn-class-gen-deployment-7f4f95b94b-cbrx6

DynClassBase243 instance consuming 1MB
DynClassBase244 instance consuming 1MB
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at com.banzaicloud.dynclassgen.DynClassBase245.consumeSomeMemory(
        at com.banzaicloud.dynclassgen.DynamicClassGen.main(

Now let's see what VPA does to our pod failing with java.lang.OutOfMemoryError. First we have to configure VPA so that it finds our pod.

$ kubectl create -f - <<EOF
apiVersion: poc.autoscaling.k8s.io/v1alpha1
kind: VerticalPodAutoscaler
metadata:
  name: dyn-class-gen-vpa
spec:
  selector:
    matchLabels:
      app: dyn-class-gen
  updatePolicy:
    updateMode: "Auto"
EOF

verticalpodautoscaler "dyn-class-gen-vpa" created

After waiting some time and then checking the logs of VPA Recommender, we can see that it doesn't provide any recommendation for the pod selected by dyn-class-gen-vpa. My educated guess is that the pod is failing too quickly, so Prometheus is unable to collect meaningful data on the pod's resource usage, which means there is not enough input for VPA Recommender to come up with a recommendation.

Let's modify the pod so that it doesn't fail with java.lang.OutOfMemoryError, by increasing the upper limit of the heap to 300MB:

$ kubectl edit deployment dyn-class-gen-deployment
  - env:
    - name: DYN_CLASS_COUNT
      value: "256"
    - name: JVM_OPTS
      value: -Xmx300M
    - name: MEM_USAGE_PER_OBJECT_MB
      value: "1"

After letting our pod run for some time, let's see what VPA Recommender tells us now:

$ kubectl get VerticalPodAutoscaler dyn-class-gen-vpa -o yaml

apiVersion: poc.autoscaling.k8s.io/v1alpha1
kind: VerticalPodAutoscaler
metadata:
  clusterName: ""
  creationTimestamp: 2018-06-05T19:36:09Z
  generation: 0
  name: dyn-class-gen-vpa
  namespace: default
  resourceVersion: "48550"
  selfLink: /apis/
  uid: b238081d-68f7-11e8-973e-42010a800fe7
spec:
  selector:
    matchLabels:
      app: dyn-class-gen
  updatePolicy:
    updateMode: Auto
status:
  conditions:
  - lastTransitionTime: 2018-06-05T19:36:22Z
    status: "True"
    type: Configured
  - lastTransitionTime: 2018-06-05T19:36:22Z
    status: "True"
    type: RecommendationProvided
  lastUpdateTime: 2018-06-06T06:26:43Z
  recommendation:
    containerRecommendations:
    - maxRecommended:
        cpu: 4806m
        memory: "12344993833"
      minRecommended:
        cpu: 241m
        memory: "619256043"
      name: dyn-class-gen-container
      target:
        cpu: 250m
        memory: "642037204"

The VPA recommender recommends:

  • cpu: 250m
  • memory: "642037204" - approx. 612Mi (642 MB)

for resource requests, versus:

  • cpu: 1
  • memory: "64Mi"

which we specified in the original deployment.
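The recommended memory value is reported in bytes; converting it makes the gap from the original 64Mi request easier to see:

```shell
bytes=642037204                       # target memory recommendation from the VPA status
echo "$(( bytes / 1000000 )) MB"      # decimal megabytes: prints "642 MB"
echo "$(( bytes / 1048576 )) MiB"     # binary mebibytes: prints "612 MiB"
```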

According to the documentation, the values recommended by VPA Recommender are applied to a pod by VPA Admission Controller upon pod creation. Thus we delete our pod and let the Deployment spin up a new one. The new pod will have its resource requests set by VPA Admission Controller instead of inheriting the values from the Deployment.

$ kubectl delete po dyn-class-gen-deployment-7db4f5c557-l97w9

$ kubectl describe po dyn-class-gen-deployment-7db4f5c557-pd9bc

Name:           dyn-class-gen-deployment-7db4f5c557-pd9bc                                                                                                                                 
Namespace:      default                                                                                                                                                                   
Node:           gke-gkecluster-seba-636-pool1-f8f0d428-6n1f/                                                                                                                    
Start Time:     Wed, 06 Jun 2018 08:38:01 +0200                                                                                                                                           
Labels:         app=dyn-class-gen                                                                                                                                                         
Annotations:    vpaUpdates=Pod resources updated by dyn-class-gen-vpa: container 0: cpu request, memory request
Status:         Running
Controlled By:  ReplicaSet/dyn-class-gen-deployment-7db4f5c557
    Container ID:   docker://688d6088efdc2045d56c4f187211e43f09f4654779bdaa3e50f6e378718cb976
    Image:          banzaicloud/dynclassgen:1.0
    Image ID:       docker-pullable://banzaicloud/dynclassgen@sha256:134835da5696f3f56b3cc68c13421512868133bcf5aa9cd196867920f813e785
    Port:           <none>
    State:          Running
      Started:      Wed, 06 Jun 2018 08:38:03 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:     250m
      memory:  642037204
    Environment:
      DYN_CLASS_COUNT:          256
      JVM_OPTS:                 -Xmx300M
    Mounts:
      /var/run/secrets/ from default-token-v7z2l (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-v7z2l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-v7z2l
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations: for 300s
        for 300s
Events:
  Type    Reason                 Age   From                                                  Message
  ----    ------                 ----  ----                                                  -------
  Normal  Scheduled              31m   default-scheduler                                     Successfully assigned dyn-class-gen-deployment-7db4f5c557-pd9bc to gke-gkecluster-seba-636-po
  Normal  SuccessfulMountVolume  31m   kubelet, gke-gkecluster-seba-636-pool1-f8f0d428-6n1f  MountVolume.SetUp succeeded for volume "default-token-v7z2l"
  Normal  Pulled                 31m   kubelet, gke-gkecluster-seba-636-pool1-f8f0d428-6n1f  Container image "banzaicloud/dynclassgen:1.0" already present on machine
  Normal  Created                31m   kubelet, gke-gkecluster-seba-636-pool1-f8f0d428-6n1f  Created container
  Normal  Started                31m   kubelet, gke-gkecluster-seba-636-pool1-f8f0d428-6n1f  Started container

Opinionated conclusions

  • VPA is in its early stages and is expected to change shape many times, so early adopters should be prepared for that. Details on known limitations can be found here, and on future work here.
  • VPA adjusts only the resource requests of containers, based on past and current observed resource usage. It doesn't set resource limits. This can be problematic with misbehaving applications that use more and more resources, leading to pods being killed by Kubernetes.
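Until VPA manages limits as well, one way to keep misbehaving containers bounded is a namespace-level LimitRange. This is a standard Kubernetes object independent of VPA, and the values below are only illustrative; note its defaults apply only to containers that declare no limits of their own:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: default
spec:
  limits:
  - type: Container
    default:            # limits applied to containers that declare none
      cpu: "2"
      memory: 1Gi
```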

If you are interested in our technology and open source projects, follow us on GitHub, LinkedIn or Twitter:
