
Upgrade to Istio 1.3 using the operator with ease

Check out Backyards in action on your own clusters: curl https://getbackyards.sh | sh && backyards install -a --run-demo Want to know more? Get in touch with us, or delve into the details of the latest release.

Since releasing our open-source Istio operator, we’ve been doing our best to add support for the latest versions of Istio as rapidly as possible. Today, we’re happy to announce that we have added Istio 1.3 support for the Banzai Cloud Istio operator.

In this post, we’ll be outlining how to easily upgrade Istio control planes to 1.3 with the Banzai Cloud Istio operator, within a single-mesh multi-cluster topology or across a multi-cloud or hybrid-cloud service mesh.

Supporting Istio 1.3

The new Istio 1.3 release added a variety of new features and bug fixes. The largest of these was the experimental Mixerless HTTP telemetry, which is now also fully supported by our Istio operator. The full list of changes can be found in the official release notes.

Here is a list of new features we think are worth highlighting:

  • Experimental Mixerless HTTP telemetry
  • Automatic determination of HTTP or TCP for outbound traffic
  • Container ports are no longer required in the pod spec
  • Improved Pilot with reduced CPU utilization; for some specific deployments, the decrease can be close to 90%
  • SDS support to deliver private keys and certificates to each Istio control plane service

Mixerless HTTP telemetry

There is an ongoing effort to move the logic at work in the centralized Mixer v1 (which provides rich telemetry) to proxies as Envoy filters. Istio 1.3 contains experimental support in sidecar proxies for standard Prometheus telemetry. It is a drop-in replacement for the http metrics currently produced by Mixer, namely: istio_requests_total, istio_request_duration_* and istio_request_size.

If you are interested in exploring how Istio telemetry works in conjunction with Mixer in greater detail, you may want to read our post on Istio telemetry.

How to enable it with the operator

There is a simple switch in the operator CR to turn on this experimental feature:

spec:
  mixerlessTelemetry:
    enabled: true
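If you already have a running Istio Custom Resource, one way to flip this switch is with a merge patch. This is just a sketch: the CR name istio-sample below follows the operator's sample manifest and may differ in your cluster, so check it first.

```shell
# List Istio custom resources to find your CR's name (istio-sample is an example)
kubectl -n istio-system get istios

# Enable the experimental mixerless telemetry via a merge patch
kubectl -n istio-system patch istio istio-sample --type=merge \
  -p '{"spec":{"mixerlessTelemetry":{"enabled":true}}}'
```

The operator will pick up the change and reconcile the telemetry configuration without a full reinstall.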

Differences

The request duration metric uses more granular buckets inside the proxy, which results in lower latency measurements in histograms. The new metric is called istio_request_duration_milliseconds.
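Because dashboards and alerts built on the old metric name will need updating, it's worth querying the new metric directly. The sketch below assumes Prometheus is reachable at $PROM_URL (e.g. via a kubectl port-forward to your Prometheus service); it uses the standard Prometheus HTTP query API.

```shell
# P90 request duration computed from the new proxy-generated histogram
curl -sG "$PROM_URL/api/v1/query" --data-urlencode \
  'query=histogram_quantile(0.9, sum(rate(istio_request_duration_milliseconds_bucket[5m])) by (le))'
```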

Performance impact

  • The istio-telemetry deployment can be switched off, saving 0.5 vCPU per 1000 rps of mesh traffic. This halves Istio’s CPU usage while collecting its standard metrics
  • The new filters use 10% less CPU on istio-proxy than the original Mixer filter
  • The P90 latency at 1000 rps now adds 5ms over the plain proxy; the goal is to reduce this by half

Note that, as of now, there are no TCP metrics yet!


Single-mesh multi-cluster control plane upgrade with the Istio operator

Let’s suppose we have a Kubernetes master and remote cluster connected to a single-mesh multi-cluster topology with Istio 1.2.5, and we’d like to upgrade our Istio components on both clusters to Istio version 1.3.0. Here are the steps we’d need to go through in order to accomplish that with our operator:

  1. Deploy a version of our operator which supports Istio 1.3.x
  2. Apply a Custom Resource using Istio 1.3.0 components

It really is that easy!

Once the operator discerns that the Custom Resource it’s watching has changed, it reconciles all Istio-related components so as to perform a control plane upgrade. This happens first on the master cluster; the modified images are then automatically propagated to the remotes as well, and the Istio components installed on the remotes (usually Citadel, Sidecar Injector and Gateways) are reconciled to use the new image versions.

Try it out

In this demo, we’ll perform the following steps:

  • We’ll create two Kubernetes clusters
  • We’ll form a single-mesh multi-cluster setup from the two clusters with Istio 1.2.5 installed on each of them
  • We’ll deploy an example application on both clusters
  • The Istio components will be upgraded to 1.3.0 with the operator (both on the master and on the remote)

Creating the clusters

For this demo we’ll need two Kubernetes clusters.

We created one Kubernetes cluster on GKE and one on AWS, using Banzai Cloud’s lightweight, CNCF-certified Kubernetes distribution, PKE via the Pipeline platform. If you’d like to do likewise, go ahead and create your clusters on any of the five cloud providers we support or on-premise using Pipeline for free.

Form a mesh

Next, we’ll take our clusters and form a single-mesh multi-cluster topology with Istio 1.2.5. If you need help with this, take a look at the demo part of our detailed blog post, Multi-cloud service mesh with the Istio operator. There, we describe precisely how to set up a single-mesh multi-cluster topology with Split Horizon EDS.

The mesh can also be created via the Pipeline UI with just a few clicks. On Pipeline, the entire process is streamlined and automated, with all the work being done behind the scenes.
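The commands in the rest of this post address the two clusters via kubectl contexts. Set the CTX_MASTER and CTX_REMOTE variables before continuing; the context names below are placeholders from our environment, so substitute the ones listed by kubectl config get-contexts in yours.

```shell
# Placeholder context names -- substitute the ones from your own kubeconfig
export CTX_MASTER=gke_my-project_europe-west1-b_master
export CTX_REMOTE=pke-aws-remote
```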

Deploy an app on multiple clusters

Next we install a simple echo service as a way of checking if everything works after the control plane upgrade.

Create Gateway and VirtualService resources to reach the service through an ingress gateway.

First, deploy to the master cluster:

$ kubectl --context ${CTX_MASTER} -n default apply -f https://raw.githubusercontent.com/banzaicloud/istio-operator/release-1.2/docs/federation/multimesh/echo-service.yaml
$ kubectl --context ${CTX_MASTER} -n default apply -f https://raw.githubusercontent.com/banzaicloud/istio-operator/release-1.2/docs/federation/multimesh/echo-gw.yaml
$ kubectl --context ${CTX_MASTER} -n default apply -f https://raw.githubusercontent.com/banzaicloud/istio-operator/release-1.2/docs/federation/multimesh/echo-vs.yaml

$ kubectl --context ${CTX_MASTER} -n default get pods
NAME                    READY   STATUS    RESTARTS   AGE
echo-5c7dd5494d-k8nn9   2/2     Running   0          1m

Then deploy to the remote cluster:

$ kubectl --context ${CTX_REMOTE} -n default apply -f https://raw.githubusercontent.com/banzaicloud/istio-operator/release-1.2/docs/federation/multimesh/echo-service.yaml

$ kubectl --context ${CTX_REMOTE} -n default get pods
NAME                    READY   STATUS    RESTARTS   AGE
echo-595496dfcc-6tpk5   2/2     Running   0          1m

Determine the external hostname of the ingress gateway and make sure the echo service responds from both clusters:

$ export MASTER_INGRESS=$(kubectl --context=${CTX_MASTER} -n istio-system get svc/istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ for i in `seq 1 100`; do curl -s "http://${MASTER_INGRESS}/" | grep "Hostname"; done | sort | uniq -c
   61 Hostname: echo-5c7dd5494d-k8nn9
   39 Hostname: echo-595496dfcc-6tpk5
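If your ingress load balancer publishes a hostname rather than an IP address (as AWS ELBs do), query the hostname field of the service status instead:

```shell
# ELB-style load balancers expose .hostname instead of .ip
export MASTER_INGRESS=$(kubectl --context=${CTX_MASTER} -n istio-system get svc/istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
```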

Upgrade control planes to Istio 1.3.0

To install Istio 1.3.0, we need to check out the release-1.3 branch of our operator (this branch supports Istio versions 1.3.x):

$ git clone git@github.com:banzaicloud/istio-operator.git
$ cd istio-operator
$ git checkout release-1.3

Install the Istio Operator

Simply run the following make goal from the project root in order to install the operator (KUBECONFIG must be set for your master cluster):

$ make deploy

This command will install a Custom Resource Definition in the cluster, and will deploy the operator to the istio-system namespace.
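Before moving on, you can quickly sanity-check that the deployment succeeded. The CRD name below follows the operator's istio.banzaicloud.io API group; the grep on the pod listing is a loose match, since the exact pod name depends on the generated deployment.

```shell
# The Istio CRD registered by the operator
kubectl --context=${CTX_MASTER} get crd istios.istio.banzaicloud.io

# The operator pod should be Running in the istio-system namespace
kubectl --context=${CTX_MASTER} -n istio-system get pods | grep istio-operator
```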

Apply the new Istio Custom Resource

If you’ve installed Istio 1.2.5 with the Istio operator and you check the logs of the operator pod at this point, you will see the following error message: intended Istio version is unsupported by this version of the operator. We need to update the Istio Custom Resource to use Istio 1.3’s components, so that the operator can reconcile the Istio control plane accordingly.

To deploy Istio 1.3.0 with its default configuration options, use the following command:

$ kubectl --context=${CTX_MASTER} apply -n istio-system -f config/samples/istio_v1beta1_istio.yaml

After a little while, the Istio components on the master cluster will start using 1.3.0 images:

$ kubectl --context=${CTX_MASTER} get pod -n istio-system -o yaml | grep "image: docker.io/istio" | sort | uniq
    image: docker.io/istio/citadel:1.3.0
    image: docker.io/istio/galley:1.3.0
    image: docker.io/istio/mixer:1.3.0
    image: docker.io/istio/pilot:1.3.0
    image: docker.io/istio/proxyv2:1.3.0
    image: docker.io/istio/sidecar_injector:1.3.0

Notice that the Istio components are now using 1.3.0 images on the remote cluster as well:

$ kubectl --context=${CTX_REMOTE} get pod -n istio-system -o yaml | grep "image: docker.io/istio" | sort | uniq
      image: docker.io/istio/citadel:1.3.0
      image: docker.io/istio/proxyv2:1.3.0
      image: docker.io/istio/sidecar_injector:1.3.0

Check the app

At this point, your Istio control plane will be upgraded to Istio 1.3.0 and your echo application will still be available at:

$ curl -s "http://${MASTER_INGRESS}/"

In order to replace the older istio-proxy sidecars in the echo pods (that is, to perform a data plane upgrade), we need to restart the pods manually.
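One way to do this without downtime is a rolling restart of the deployments. A sketch, under two assumptions: kubectl rollout restart requires kubectl 1.15 or newer, and the deployment name echo matches the echo-service.yaml manifest used above.

```shell
# Roll the echo pods so the sidecar injector adds the 1.3.0 proxy
kubectl --context=${CTX_MASTER} -n default rollout restart deployment/echo
kubectl --context=${CTX_REMOTE} -n default rollout restart deployment/echo

# Verify the sidecar image version after the rollout
kubectl --context=${CTX_MASTER} -n default get pods -o yaml | grep "image: docker.io/istio/proxyv2"
```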

Takeaway

The Istio operator now supports Istio 1.3. Upgrading Istio control planes between Istio’s major versions with our operator, even in a single-mesh multi-cluster setup, is as easy as deploying a new version of the operator, then applying a new Custom Resource using your desired component versions.

Obviously, this is a process that’s completely automated and hyper-simplified with Backyards.


About Backyards

Banzai Cloud’s Backyards is a multi- and hybrid-cloud enabled service mesh platform for constructing modern applications. Built on Kubernetes, our Istio operator and Pipeline, it enables flexibility, portability and consistency across on-premise datacenters and five cloud environments. Use our simple, yet extremely powerful, UI and CLI, and experience automated canary releases, traffic shifting, routing, secure service communication, in-depth observability and more, for yourself.

#multicloud #hybridcloud #BanzaiCloud

If you are interested in our technology and open source projects, follow us on GitHub, LinkedIn or Twitter:

