Two months ago we announced the release of Backyards, Banzai Cloud’s multi- and hybrid-cloud enabled service mesh built on top of our Istio operator. One of Backyards’ hallmarks is its ability to simplify building a production-ready Istio deployment down to a single command:
backyards install -a — complete with enterprise-grade security, monitoring, tracing, logging, audit, and features like canary releases, traffic management, circuit breaking and lots more, all available through a convenient UI, CLI or GraphQL API.
Nevertheless, one feature was missing from Backyards: the option to build an Istio service mesh that spans multiple clusters. While the Banzai Cloud open source Istio operator has long since supported such a feature (check out the Istio operator multi-cluster scenarios documentation), it was missing from Backyards.
Check out Backyards in action on your own clusters!
Register for an evaluation version and run a simple install command!
Previously, we also made multi-cluster deployments, service meshes, federation and other features available in Pipeline, Banzai Cloud’s container management platform for building multi- and hybrid-clouds.
We are happy to announce that multi-cluster management will be baked into the next major version of Backyards. And, in this post, we are going to go into detail about just how easy it will be to manage a multi-cluster service mesh with Backyards.
The typical multi-cluster patterns are single mesh, which combines multiple clusters into one unit managed by a single Istio control plane, and mesh federation, in which multiple clusters act as individual management domains and services are exposed between those domains selectively.
Single mesh scenarios are best suited to use cases in which clusters are configured together, share resources, and are generally treated as a single infrastructural component within an organization.
Install Backyards CLI 🔗︎
Register for an evaluation version and run the following command to install the CLI tool:
curl https://getbackyards.sh | sh
Create two clusters 🔗︎
For this demo we’ll need two Kubernetes clusters.
I created two Kubernetes clusters on AWS, using Banzai Cloud’s lightweight, CNCF-certified Kubernetes distribution, PKE, via the Pipeline platform. If you’d like to do likewise, go ahead and create your clusters on any of the several cloud providers we support, or on-premise, using Pipeline for free.
Install Backyards to one of the clusters 🔗︎
In a typical single mesh scenario, a single Istio control plane exists on a cluster that receives information about service and pod states from its peers. To accomplish this, the kubeconfig of each peer cluster must be added to the cluster where the control plane is running, in the form of a k8s secret.
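Backyards creates this secret for you during attach, but the underlying mechanism can be sketched roughly as follows. Note that the secret name, key, and the exact namespace are illustrative assumptions here, not necessarily the ones Backyards uses internally:

```shell
# Illustrative sketch only — Backyards automates this step.
# Store the peer cluster's kubeconfig as a Kubernetes secret on the
# cluster that runs the Istio control plane, so the operator can watch
# service and pod state on the peer. Names below are assumptions.
kubectl create secret generic waynz0r-by-114-kubeconfig \
  --namespace istio-system \
  --from-file=kubeconfig="$HOME/kubeconfigs/waynz0r-by-114.yaml"
```
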
The following command will install Backyards and deploy a service mesh to the selected cluster.
❯ backyards install -a
INFO customresourcedefinition.apiextensions.k8s.io:istios.istio.banzaicloud.io configured
INFO customresourcedefinition.apiextensions.k8s.io:remoteistios.istio.banzaicloud.io configured
INFO customresourcedefinition.apiextensions.k8s.io:istios.istio.banzaicloud.io - pending
INFO customresourcedefinition.apiextensions.k8s.io:istios.istio.banzaicloud.io - ok
...
...
INFO gateway.networking.istio.io:backyards-system/backyards-ingressgateway - pending
INFO gateway.networking.istio.io:backyards-system/backyards-ingressgateway - ok
INFO virtualservice.networking.istio.io:backyards-system/backyards-ingressgateway - pending
INFO virtualservice.networking.istio.io:backyards-system/backyards-ingressgateway - ok
The status of the mesh can be checked via the following commands.
❯ backyards istio overview
Mesh overview – metrics time span 60 seconds

Clusters  Services in mesh  Workloads in mesh  Pods in mesh  Error rate  Latency  RPS
1         30 4              33 3               46 3          -1          0.01075  0

❯ backyards istio cluster status
Clusters in the mesh

Name  Type  Status     Gateway Address                Message
mesh  Host  Available  [188.8.131.52 184.108.40.206]
Attach a peer cluster to the mesh 🔗︎
A peer cluster is any participant cluster in a single mesh. Backyards automates the process of creating the resources necessary for the peer cluster, generates and sets up the kubeconfig for that cluster, and attaches the cluster to the mesh. The only other thing we need to do is make sure the kubeconfig for the peer cluster has the requisite RBAC permissions.
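Before attaching, you can sanity-check that the peer kubeconfig is allowed to create the resources Backyards needs. The resource types below are taken from the attach output; the checks themselves are just read-only `kubectl auth can-i` queries:

```shell
# Verify the peer kubeconfig has the RBAC permissions the attach step
# needs (creating namespaces, service accounts and cluster role bindings).
kubectl --kubeconfig ~/kubeconfigs/waynz0r-by-114.yaml \
  auth can-i create namespaces
kubectl --kubeconfig ~/kubeconfigs/waynz0r-by-114.yaml \
  auth can-i create serviceaccounts --namespace istio-system
kubectl --kubeconfig ~/kubeconfigs/waynz0r-by-114.yaml \
  auth can-i create clusterrolebindings
```

Each command prints `yes` or `no`; a cluster-admin kubeconfig will answer `yes` to all three.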
❯ backyards istio cluster attach ~/kubeconfigs/waynz0r-by-114.yaml
? Are you sure to use the following context? kubernetes-admin@waynz0r-by-114 (API Server: https://220.127.116.11:6443) Yes
INFO creating service account and rbac permissions
INFO namespace:istio-system created
INFO serviceaccount:istio-system/istio-operator created
INFO clusterrole.rbac.authorization.k8s.io:istio-operator configured
INFO clusterrolebinding.rbac.authorization.k8s.io:istio-operator configured
INFO retrieving service account token
INFO attaching cluster 'waynz0r-by-114' is started successfully. Use `backyards istio cluster status` to follow the progress.
It may take some time to attach the peer cluster, because the process needs the peer's ingress gateway address to become available.
Check the status of the mesh with the following command:
❯ backyards istio overview
Mesh overview – metrics time span 60 seconds

Clusters  Services in mesh  Workloads in mesh  Pods in mesh  Error rate  Latency  RPS
2         30 4              37 3               77 3          -1          0.00475  0

❯ backyards istio cluster status
Clusters in the mesh

Name            Type  Status     Gateway Address                 Message
mesh            Host  Available  [18.104.22.168 22.214.171.124]
waynz0r-by-114  Peer  Available  [126.96.36.199 188.8.131.52]
Deploy the demo application 🔗︎
Backyards comes with a built-in demo application for demonstration purposes. Since there are multiple clusters in the mesh, the microservices that make up the application should span these clusters.
The following command will deploy some of the services onto the host cluster:
❯ backyards demoapp install -s frontpage,catalog,bookings
INFO namespace:backyards-demo created
INFO service:backyards-demo/analytics created
INFO service:backyards-demo/bookings created
...
...
INFO virtualservice.networking.istio.io:backyards-demo/movies - pending
INFO virtualservice.networking.istio.io:backyards-demo/movies - ok
The rest of the application can be deployed to the peer cluster via the following command:
❯ backyards -c ~/kubeconfigs/waynz0r-by-114.yaml demoapp install -s movies,payments,notifications,analytics --peer
INFO namespace:backyards-demo created
INFO service:backyards-demo/analytics created
INFO service:backyards-demo/bookings created
INFO service:backyards-demo/catalog created
...
...
INFO deployment.apps:backyards-demo/notifications-v1 - pending
INFO deployment.apps:backyards-demo/notifications-v1 - ok
INFO deployment.apps:backyards-demo/payments-v1 - pending
INFO deployment.apps:backyards-demo/payments-v1 - ok
Backyards has a built-in load tester tool, which you can use to seamlessly generate traffic to the demo application. After the installation of each component has finished, send some traffic and open the Backyards UI. You should be able to see that communication is taking place between the microservices of the demo applications that span the two clusters.
❯ backyards demoapp load
INFO Sending load to demo application duration=30 rps=10
INFO loader stopped
INFO requestCount=300 responseCode=200

❯ backyards dashboard
INFO Logged in as kubernetes-admin
INFO Opening Backyards UI at http://127.0.0.1:50500
Cleanup 🔗︎
To remove the demo application, detach the peer cluster, and uninstall Backyards, run:
❯ backyards -c ~/kubeconfigs/waynz0r-by-114.yaml demoapp uninstall
❯ backyards istio cluster detach waynz0r-by-114
❯ backyards uninstall -a
We still believe that, while it may be hard to navigate the hype, the expanding marketplace, and the increasing complexity that surround service mesh, it is one of the next big things.
Our intention is to inject some clarity into this situation by providing a product that leverages and integrates everything our customers need, and which will make the adoption and use of the service mesh as easy as possible.