Check out Backyards in action on your own clusters!
Register for an evaluation version and run a simple install command!
As you might know, Cisco has recently acquired Banzai Cloud. Currently we are in a transitional period and are moving our infrastructure. Contact us so we can discuss your needs and requirements, and organize a live demo.
In creating the operator, our main goal was to simplify the deployment and management of Istio’s components. This release is still in alpha, and its main goal is still to replace Helm charts as a preferred means of installing Istio, but it provides a few additional features we think you’ll find convenient.
For more information about what motivated us to build our operator, and for an overview of its core design ideas, read our announcement post.
The new release 🔗︎
Istio 1.1 has not yet been released, but it is well into its release-candidate phase, and we expect it to arrive soon. Its preliminary docs are already available on istio.io. This new release is the first since Istio was officially deemed production ready - and that was 8 months ago - so it contains a lot of bug fixes, enhancements, and new features. We won't list the whole changelog here, but you can peruse the official release notes at your leisure.
Our operator is still in the alpha phase and doesn't yet support every new feature, but we've been working hard to catch up to the official Helm charts in terms of feature completeness. We've tried to prioritize the features that were brought to our attention by early adopters of the operator. Thanks for reporting these issues; it's great that there's already so much interest and traction around this project.
Here’s a list of some of the features and configs we’ve added:
- The Zipkin/Jaeger tracer address is configurable
- Node agents can be enabled to provision identity through SDS
- The outbound traffic policy allows any egress traffic by default
- Pod disruption budgets are configured for control plane components
- Galley’s Mesh Control Protocol (MCP) can be enabled to configure Mixer and Pilot
- Trace sampling can be changed through a custom resource
- User defined labels and annotations on Gateway services can be used to configure cloud provider ELBs
- Probe rewrite can be used to send a probe request to pilot-agent
- Core dump init container may now be enabled
- Secrets created by Citadel are cleaned up after Istio is deleted
- Envoy proxies do not run in privileged mode by default
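Several of these options surface directly in the operator's `Istio` custom resource. Here's a rough sketch of how a few of them might look in the spec (the exact camelCase field names are inferred from the custom resource's status output and may differ slightly; check the samples in the repo for the authoritative schema):

```yaml
apiVersion: istio.banzaicloud.io/v1beta1
kind: Istio
metadata:
  name: istio-sample
  namespace: istio-system
spec:
  outboundTrafficPolicy:
    mode: ALLOW_ANY            # allow any egress traffic by default
  defaultPodDisruptionBudget:
    enabled: true              # PDBs for control plane components
  pilot:
    traceSampling: 1           # trace sampling, adjustable via the CR
  tracing:
    zipkin:
      address: zipkin.istio-system:9411   # configurable tracer address
```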
Try it out! 🔗︎
The operator can install Istio 1.1, and runs on Minikube v0.33.1+ and Kubernetes 1.11.0+. To install the 1.1 release, check out the release-1.1 branch after cloning the operator's GitHub repo:
```
git clone git@github.com:banzaicloud/istio-operator.git
cd istio-operator
git checkout release-1.1
```
Of course, first, you’ll need a Kubernetes cluster. You can create one by using Pipeline in your datacenter, or on one of the six cloud providers we support.
To try out our operator, point KUBECONFIG to your cluster and simply run the make deploy goal from the project root. This command installs a custom resource definition in the cluster, and deploys the operator to the istio-system namespace. As is typical of operators, this allows you to specify your Istio configuration in a custom Kubernetes resource.
Installing the Operator with Helm
Alternatively, if you just can’t let go of Helm completely, you can deploy the operator using a Helm chart, which is still available in the Banzai Cloud stable Helm repo:
```
helm repo add banzaicloud-stable http://kubernetes-charts.banzaicloud.com/branch/master
helm install --name=istio-operator --namespace=istio-system banzaicloud-stable/istio-operator
```
Applying the custom resource
Once you’ve applied the custom resource to your cluster, the operator will start reconciling all of Istio’s components.
There are some sample custom resource configurations in the config/samples folder. To deploy Istio with its default configuration options, use the following command:

```
kubectl apply -n istio-system -f config/samples/istio_v1beta1_istio.yaml
```
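For reference, that default sample corresponds to something like the following minimal resource (a sketch; the file in config/samples is authoritative, and the field names here are inferred from the custom resource's status output):

```yaml
apiVersion: istio.banzaicloud.io/v1beta1
kind: Istio
metadata:
  name: istio-sample
  namespace: istio-system
spec:
  autoInjectionNamespaces:
    - default
  mtls: false
```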
After some time, you should see that the Istio pods are running:
```
$ kubectl get pods -n istio-system --watch
NAME                                      READY   STATUS    RESTARTS   AGE
istio-citadel-dbc758cbc-qstjr             1/1     Running   0          2m21s
istio-egressgateway-65d4bb9965-lqv24      1/1     Running   0          2m21s
istio-galley-d667788b5-qsz7z              1/1     Running   0          2m21s
istio-ingressgateway-56cc758cc-q825r      1/1     Running   0          2m21s
istio-operator-controller-manager-0       2/2     Running   0          2m12s
istio-pilot-8489bd645b-dxxn4              2/2     Running   0          5m27s
istio-policy-77c878c857-2jsxg             2/2     Running   0          2m20s
istio-sidecar-injector-5594d4d777-nbjn7   1/1     Running   0          2m19s
istio-telemetry-6dbc57b8db-2hqg4          2/2     Running   0          2m20s
```
And that the Istio custom resource shows Available in its status field:
```
$ kubectl describe istio -n istio-system istio
Name:         istio-sample
Namespace:    istio-system
Labels:       controller-tools.k8s.io=1.0
Annotations:  <none>
API Version:  istio.banzaicloud.io/v1beta1
Kind:         Istio
Metadata:
  Creation Timestamp:  2019-03-14T10:13:28Z
  Finalizers:
    istio-operator.finializer.banzaicloud.io
  Generation:        2
  Resource Version:  12479
  Self Link:         /apis/istio.banzaicloud.io/v1beta1/namespaces/istio-system/istios/istio-sample
  UID:               cf881ff7-4641-11e9-b7de-42010a9c0258
Spec:
  Auto Injection Namespaces:
    default
  Citadel:
    Image:          docker.io/istio/citadel:1.1.0-rc.4
    Replica Count:  1
  Default Pod Disruption Budget:
    Enabled:  true
  Galley:
    Image:          docker.io/istio/galley:1.1.0-rc.4
    Replica Count:  1
  Gateways:
    Egress:
      Max Replicas:   5
      Min Replicas:   1
      Replica Count:  1
      Sds:
        Image:  gcr.io/istio-release/node-agent-k8s:release-1.1-latest-daily
    Ingress:
      Max Replicas:   5
      Min Replicas:   1
      Replica Count:  1
      Sds:
        Image:  gcr.io/istio-release/node-agent-k8s:release-1.1-latest-daily
    K 8 Singress:
  Include IP Ranges:  *
  Mixer:
    Image:          docker.io/istio/mixer:1.1.0-rc.4
    Max Replicas:   5
    Min Replicas:   1
    Replica Count:  1
  Mtls:  false
  Node Agent:
    Image:  docker.io/istio/node-agent-k8s:1.1.0-rc.4
  Outbound Traffic Policy:
    Mode:  ALLOW_ANY
  Pilot:
    Image:           docker.io/istio/pilot:1.1.0-rc.4
    Max Replicas:    5
    Min Replicas:    1
    Replica Count:   1
    Trace Sampling:  1
  Proxy:
    Image:  docker.io/istio/proxyv2:1.1.0-rc.4
  Proxy Init:
    Image:  docker.io/istio/proxy_init:1.1.0-rc.4
  Sds:
  Sidecar Injector:
    Image:                   docker.io/istio/sidecar_injector:1.1.0-rc.4
    Replica Count:           1
    Rewrite App HTTP Probe:  true
  Tracing:
    Zipkin:
      Address:  zipkin.istio-system:9411
Status:
  Error Message:
  Status:  Available
Events:    <none>
```
Please note that our Istio operator is still under heavy development, and new releases might introduce breaking changes. We strive to maintain backward compatibility insofar as that is feasible, while rapidly adding new features. Issues, new features, or bugs are tracked on the project’s GitHub page - feel free to contribute yours!
Some significant features and future items on the short-term roadmap, sorted by priority:
1. Seamless control plane and data plane upgrades
Currently, upgrading Istio involves a few manual steps, especially when sidecars are still running old versions of the Istio proxy. It gets even more complicated if an in-place upgrade is impossible because of traffic disruptions. We're already working on providing a seamless upgrade path through the operator for both the control plane and the sidecars.
2. Integration with Prometheus, Grafana and Jaeger
We've stated before that it shouldn't be Istio's responsibility to manage these components, but rather to provide easy routes to integration. We've already started some work with that end in mind, and you can follow the discussion on the relevant issue pages on GitHub.
3. Enhanced multi-cluster federation
Current multi-cluster support in the operator relies on having a flat network in which pod IPs are routable from one cluster to another. Istio 1.1 introduces new options for federation, as well as for both single and multi control plane setups.
4. Enabling the Kiali add-on
In the new Istio release, Kiali officially replaces the Service Graph add-on for observing the service mesh. The operator can't install Kiali yet, but we'd like to add support for it soon.
5. Security improvements
Currently, the operator needs full admin permissions in a Kubernetes cluster to work properly; its corresponding role is generated by make deploy. The operator creates a number of additional roles for Istio, so, because of Kubernetes' privilege escalation prevention, the operator must already hold every permission contained in those roles; otherwise - from Kubernetes 1.12 onwards - it must have the escalate permission for them.
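For context, escalate is a standard RBAC verb that lets a subject create or update roles containing permissions it does not itself hold. A minimal sketch of granting it might look like this (the role name here is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-operator-escalate   # hypothetical name
rules:
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["roles", "clusterroles"]
    verbs: ["escalate"]
```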
Contributing and development 🔗︎
If you’re interested in this project, we’re happy to accept contributions. You can support us just by giving a star to the repo, or if you want to help with development, read our contribution and development guidelines in our previous blog post.
About Banzai Cloud Pipeline 🔗︎
Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures — multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on — are default features of the Pipeline platform.