
Toader Sebastian

Fri, Dec 1, 2017


Scaling Spark made simple on Kubernetes

Apache Spark on Kubernetes series:
Introduction to Spark on Kubernetes
Scaling Spark made simple on Kubernetes
The anatomy of Spark applications on Kubernetes
Monitoring Apache Spark with Prometheus
Apache Spark CI/CD workflow howto
Spark History Server on Kubernetes
Spark scheduling on Kubernetes demystified
Spark Streaming Checkpointing on Kubernetes
Deep dive into monitoring Spark and Zeppelin with Prometheus
Apache Spark application resilience on Kubernetes

Apache Zeppelin on Kubernetes series:
Running Zeppelin Spark notebooks on Kubernetes
Running Zeppelin Spark notebooks on Kubernetes - deep dive
CI/CD flow for Zeppelin notebooks

Apache Kafka on Kubernetes series:
Kafka on Kubernetes - using etcd

This is the second post in our Spark on Kubernetes series, so let me briefly recap a few items from the previous one.

Containerisation and cluster management technologies are constantly evolving in the cluster computing world. Apache Spark currently implements support for Apache Hadoop YARN and Apache Mesos, in addition to providing its own standalone cluster manager. In 2014, Google announced the development of Kubernetes, which has its own unique feature set and differentiates itself from YARN and Mesos. In a nutshell, Kubernetes is an open source container orchestration framework that can run on a local machine, on-premise or in the cloud. It supports multiple container runtimes (currently Docker, rkt and Clear Containers), allowing users to choose their preferred one.

Although it is already possible to run a Spark standalone cluster on k8s, there is still significant interest in native execution support, due to the limitations of standalone mode and the advantages Kubernetes offers. This is what gave birth to the spark-on-k8s project, to which we at Banzai Cloud contribute as well while building our PaaS, Pipeline.

Let’s walk through a simple example of how to upscale Spark automatically while it executes a SparkPi job on EC2 instances. We start with a single-node k8s cluster:

$ kubectl get nodes
NAME                                        STATUS    ROLES     AGE       VERSION
ip-10-0-100-40.eu-west-1.compute.internal   Ready     <none>    15m       v1.8.3
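
The submission below authenticates the driver as a service account named spark, which needs permission to create and manage executor pods. If your cluster doesn’t have one yet, here is a minimal sketch of setting it up (the binding name and the use of the built-in edit role are illustrative choices):

# Create the service account the driver runs as, then grant it the
# built-in edit role on the default namespace so it can manage pods.
$ kubectl create serviceaccount spark
$ kubectl create rolebinding spark-edit --clusterrole=edit --serviceaccount=default:spark --namespace=default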

No pods are running yet, except the shuffle service:

$ kubectl get po -o wide
NAME                                                              READY     STATUS    RESTARTS   AGE       IP          NODE
shuffle-lckjg                                                     1/1       Running   0          19s       10.46.0.1   ip-10-0-100-40.eu-west-1.compute.internal
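
This external shuffle service is what allows dynamic allocation to work: executors can come and go while their shuffle files outlive them. If it isn’t deployed yet, the spark-on-k8s distribution ships a DaemonSet manifest for it; a minimal sketch, assuming the manifest path from the fork’s documentation:

# Deploy the external shuffle service as a DaemonSet (one pod per node);
# its labels must match the spark.kubernetes.shuffle.labels value below.
$ kubectl create -f conf/kubernetes-shuffle-service.yaml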

Let’s submit SparkPi with dynamic allocation enabled, pass 50000 to it as an input parameter, and see what happens in the k8s cluster.

bin/spark-submit \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --master k8s://https://34.242.4.60:443 \
  --kubernetes-namespace default \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.driver.cores="400m" \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.kubernetes.shuffle.namespace=default \
  --conf spark.kubernetes.shuffle.labels="app=spark-shuffle-service,spark-version=2.2.0" \
  --conf spark.local.dir=/tmp/spark-local \
  --conf spark.app.name=spark-pi \
  --conf spark.kubernetes.driver.docker.image=banzaicloud/spark-driver-py:v2.2.0-k8s-1.0.179 \
  --conf spark.kubernetes.executor.docker.image=banzaicloud/spark-executor-py:v2.2.0-k8s-1.0.179 \
  --conf spark.kubernetes.authenticate.submission.caCertFile=my-k8s-aws-ca.crt \
  --conf spark.kubernetes.authenticate.submission.clientKeyFile=my-k8s-aws-client.key \
  --conf spark.kubernetes.authenticate.submission.clientCertFile=my-k8s-aws-client.crt \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.5.0.jar 50000

Checking the pods shortly after submission:

$ kubectl get po
NAME                                                              READY     STATUS    RESTARTS   AGE
spark-pi-1511535479452-driver                                     1/1       Running   0          2m
spark-pi-1511535479452-exec-1                                     0/1       Pending   0          38s

The driver is running inside the k8s cluster; so far so good. But as we can see, only one executor has been scheduled, and it is stuck in Pending.

Let’s take a look at it:

$ kubectl describe po spark-pi-1511535479452-exec-1

...
...
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  11s (x8 over 1m)  default-scheduler  No nodes are available that match all of the predicates: Insufficient cpu (1), PodToleratesNodeTaints (1).

This tells us that no node in the k8s cluster has enough free resources to run the Spark executors.
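
You can verify this by comparing the node’s allocatable resources with what is already requested on it (the node name comes from the listing at the beginning of the post):

# Show how much CPU and memory is already requested on the node.
$ kubectl describe node ip-10-0-100-40.eu-west-1.compute.internal | grep -A 6 'Allocated resources'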

After adding a new EC2 node to the k8s cluster, there are enough resources available and the executor starts running.
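
If the worker nodes belong to an EC2 Auto Scaling group (as is typical for k8s on AWS), adding a node is a single CLI call; a sketch, with an illustrative group name:

# Grow the Auto Scaling group by one instance; once it joins the k8s
# cluster, the pending executor is scheduled onto it.
$ aws autoscaling set-desired-capacity --auto-scaling-group-name my-k8s-workers --desired-capacity 2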

$ kubectl get po
NAME                                                              READY     STATUS              RESTARTS   AGE
shuffle-lckjg                                                     1/1       Running             0          10m
shuffle-ncprp                                                     1/1       Running             0          47s
spark-pi-1511535479452-driver                                     1/1       Running             0          7m
spark-pi-1511535479452-exec-1                                     0/1       ContainerCreating   0          4m

A few moments later the executor is up:

$ kubectl get po
NAME                                                              READY     STATUS    RESTARTS   AGE
shuffle-lckjg                                                     1/1       Running   0          11m
shuffle-ncprp                                                     1/1       Running   0          56s
spark-pi-1511535479452-driver                                     1/1       Running   0          7m
spark-pi-1511535479452-exec-1                                     1/1       Running   0          5m

The scale-up was handled transparently: k8s automatically started the pending executor as soon as the required resources became available. We didn’t have to do anything on the Spark side, no reconfiguration and no new components to install. Note that a second shuffle service pod (shuffle-ncprp) was also started automatically for the new node.
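
Dynamic allocation can also be bounded so the job only scales within limits you set; these are standard Spark settings, with illustrative values, that can be appended to the spark-submit command above:

# Cap dynamic allocation between 1 and 10 executors and release
# executors that have been idle for 60 seconds.
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s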

The steps above still involve operating the cluster manually; however, our end goal with our Apache Spark Spotguide is to automate the whole process. Our PaaS, Pipeline, uses Prometheus to externalize monitoring information from the cluster infrastructure, the cloud and the running Spark application itself, and wires it back to Kubernetes to anticipate load and make upscales and downscales as easy and automated as possible, while maintaining the predefined SLA rules.

As usual, we have open sourced the Spark images and charts needed to run Spark natively on Kubernetes and made them available in the Banzai Cloud GitHub repository.

The next post will be about Spark on Kubernetes internals; for a quick glance at what’s coming, check the sequence diagram below, which highlights the internal flow of events when a Spark cluster runs with/inside Kubernetes. It might not be straightforward yet, but rest assured that after going through this series the details will be crystal clear.

Sequence diagram: Spark on K8S

If you are interested in our technology and open source projects, follow us on GitHub, LinkedIn or Twitter.
