In one of our previous posts about creating Helm charts for Kubernetes, we outlined what we consider best practices for chart authoring. We’ve been using Helm in production and investing our time in creating Helm charts (available in the Banzai Cloud Charts GitHub repository) since Banzai Cloud’s inception. Creating Helm charts is one thing; storing and serving them is another. We’d like to reduce the burden this places on the user, so today marks the launch of our Helm chart repository service, which you can use to store and serve public Helm charts for free.

Read more...

In our last post about using Cadence workflows to spin up Kubernetes clusters, we outlined the basic concepts of Cadence and walked you through how to use the Cadence workflow engine. Let’s dive into the experiences and best practices associated with implementing complex workflows in Go. We will use the deployment of our PKE Kubernetes distribution from Pipeline to AWS EC2 as an example. Of course, you can deploy PKE independently, but Pipeline takes care of your cluster’s entire life-cycle, starting from nodepool and instance type recommendations, through infrastructure deployment, certificate management, and opt-in deployment and configuration of our powerful monitoring, logging, service mesh, security scan, and backup/restore solutions, to the scaling or termination of your cluster.
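
To give a flavor of what such a workflow looks like in Go, here is a minimal sketch using the Cadence Go client. The workflow and activity names are hypothetical and greatly simplified compared to Pipeline’s actual PKE provisioning workflows.

```go
package workflows

import (
	"time"

	"go.uber.org/cadence/workflow"
)

// CreateClusterWorkflow is a hypothetical, simplified workflow that provisions
// infrastructure in two sequential activity calls. A real PKE deployment in
// Pipeline involves many more steps and error-handling branches.
func CreateClusterWorkflow(ctx workflow.Context, clusterName string) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    10 * time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	// Activities run outside the workflow's deterministic context and can be
	// retried by Cadence without re-executing steps that already completed.
	var vpcID string
	if err := workflow.ExecuteActivity(ctx, "CreateVPCActivity", clusterName).Get(ctx, &vpcID); err != nil {
		return err
	}

	return workflow.ExecuteActivity(ctx, "LaunchNodesActivity", vpcID).Get(ctx, nil)
}
```

In a real setup, the workflow and its activities are registered with a Cadence worker that polls a task list; because Cadence persists the progress of each step, a crashed worker can resume a workflow without repeating the activities that have already finished.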

Read more...

Hybrid- and multi-cloud are quickly becoming the new norm for enterprises, just as service mesh is becoming essential to the cloud native computing environment. From the very beginning, the Pipeline platform has supported multiple cloud providers, and wiring them together at multiple levels (clusters, deployments and services) has always been one of our primary goals. We have supported setting up multi-cluster service meshes since the first release of our open source Istio operator.

Read more...

A strong focus on security has always been a key part of Banzai Cloud’s Pipeline platform. We incorporated security into our architecture early in the design process, and developed a number of supporting components that can be used easily and natively on Kubernetes. From secrets and certificates generated and stored in Vault, through secrets dynamically injected into pods and provider-agnostic authentication and authorization using Dex, to container vulnerability scans and lots more: the Pipeline platform handles all of these as default tier-zero features.
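
To give a feel for the kind of plumbing this replaces, here is a minimal sketch of reading a secret from Vault’s KV v2 engine with the official Go client. The secret path and field names are hypothetical examples, not Pipeline’s actual code.

```go
package vaultexample

import (
	"fmt"

	vaultapi "github.com/hashicorp/vault/api"
)

// readDatabasePassword reads a single field from Vault's KV v2 secrets engine.
// The mount path and key names below are illustrative only.
func readDatabasePassword(addr, token string) (string, error) {
	config := vaultapi.DefaultConfig()
	config.Address = addr

	client, err := vaultapi.NewClient(config)
	if err != nil {
		return "", err
	}
	client.SetToken(token)

	// KV v2 secrets are read under a "data/" prefix and wrapped in a "data" map.
	secret, err := client.Logical().Read("secret/data/my-app/db")
	if err != nil {
		return "", err
	}
	if secret == nil || secret.Data == nil {
		return "", fmt.Errorf("secret not found")
	}

	data, ok := secret.Data["data"].(map[string]interface{})
	if !ok {
		return "", fmt.Errorf("unexpected secret format")
	}
	password, _ := data["password"].(string)
	return password, nil
}
```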

Read more...

When we added support for Istio’s service mesh to the Pipeline platform, we experienced first-hand how complex the deployment and management of Istio can become. We realized that we weren’t the only ones managing Istio with Helm and dealing with these problems - demand was emerging for an Istio operator (e.g. #9333). We decided to build an Istio operator of our own, and more than a month ago we open sourced it.

Read more...

Two weeks ago we introduced our Kafka Spotguide for Kubernetes - the easiest way to deploy and operate Apache Kafka on Kubernetes. Since then, it’s been integrated into our application and DevOps container management platform, Pipeline, alongside other spotguides such as Spark on Kubernetes, Zeppelin, NodeJS and Golang, just to name a few. Because we’ve already met our goal of making it easy to set up a Kafka cluster on Kubernetes with just a few clicks, and in less than ten minutes - provisioning and operating its entire infrastructure, both in Kubernetes and Kafka - we’ve shifted our focus to Kafka security.
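
To illustrate the client side of a TLS-secured Kafka setup, here is a minimal sketch using the Sarama Go client. The broker address and certificate path are hypothetical examples, not configuration generated by the Spotguide.

```go
package kafkaexample

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"

	"github.com/Shopify/sarama"
)

// newTLSProducer builds a synchronous producer that talks to the brokers over
// TLS. The broker address and CA certificate path are illustrative only.
func newTLSProducer(caCertPath string) (sarama.SyncProducer, error) {
	caCert, err := ioutil.ReadFile(caCertPath)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caCert)

	config := sarama.NewConfig()
	config.Producer.Return.Successes = true // required by SyncProducer
	config.Net.TLS.Enable = true
	config.Net.TLS.Config = &tls.Config{RootCAs: pool}

	return sarama.NewSyncProducer([]string{"kafka-0.kafka.svc:9093"}, config)
}
```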

Read more...

At Banzai Cloud we are building a feature-rich, enterprise-grade application and DevOps container management platform called Pipeline, and a CNCF-certified Kubernetes distribution, PKE. Security is one of our main areas of focus, and we strive to automate and enable the security patterns we consider essential for all the enterprises that use Pipeline. For us, Istio is no exception: we apply the best available security practices to the service mesh, while maintaining the sleekest, most automated user experience possible.

Read more...

One of the main goals of the Banzai Cloud Pipeline platform and the PKE Kubernetes distribution is to radically simplify the whole Kubernetes experience and execute complex operations on behalf of our users. These operations communicate with a number of different remote services (from cloud providers to on-prem virtualization or storage providers) where we have little or no way to influence the outcome of the calls: how long they will take, whether they will ever succeed, and whether they will produce the desired result.
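
A common way to cope with calls like these is to wrap them in a retry loop with exponential backoff and an overall deadline. The sketch below is a minimal, generic version of that pattern, not Pipeline’s actual implementation.

```go
package retryexample

import (
	"context"
	"time"
)

// retryWithBackoff calls op until it succeeds, the attempts are exhausted, or
// the context is cancelled, doubling the wait between attempts each time.
func retryWithBackoff(ctx context.Context, attempts int, initial time.Duration, op func(context.Context) error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(ctx); err == nil {
			return nil
		}
		select {
		case <-time.After(delay):
			delay *= 2 // exponential backoff between attempts
		case <-ctx.Done():
			return ctx.Err() // overall deadline or cancellation wins
		}
	}
	return err // last error after exhausting all attempts
}
```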

Read more...

At Banzai Cloud we strive to enable a secure software supply chain, one that ensures the applications deployed with the Pipeline platform and the Pipeline Kubernetes Engine are secure without reducing developer productivity, across all environments (on-premise, multi-, hybrid-, and edge-cloud). While we have our own internal processes and a dedicated security team working full time on hardening the entire application platform stack, it also makes sense to give our customers confidence by following industry-standard benchmarks.

Read more...

One of the core features of Pipeline, Banzai Cloud’s application and DevOps container management platform, is multi-dimensional autoscaling based on default and custom metrics. When we introduced custom metrics, we opted for an approach that relied on the Prometheus Adapter to gather metrics from Prometheus. Since then, a lot of our customers have begun using Horizontal Pod Autoscaling, and most of them have been satisfied with basic CPU and memory metrics alone.
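
For readers who haven’t set one up, a custom-metrics HPA looks something like the following minimal sketch, built with the Kubernetes autoscaling/v2beta2 Go types. The metric name, target value, and deployment are hypothetical, and the per-pod metric is assumed to be exposed through the Prometheus Adapter.

```go
package hpaexample

import (
	autoscalingv2beta2 "k8s.io/api/autoscaling/v2beta2"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newCustomMetricHPA builds an HPA that scales a Deployment on a hypothetical
// per-pod metric ("http_requests_per_second"). All names and thresholds are
// illustrative only.
func newCustomMetricHPA(namespace, deployment string) *autoscalingv2beta2.HorizontalPodAutoscaler {
	minReplicas := int32(2)
	target := resource.MustParse("100")

	return &autoscalingv2beta2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: deployment, Namespace: namespace},
		Spec: autoscalingv2beta2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2beta2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       deployment,
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 10,
			Metrics: []autoscalingv2beta2.MetricSpec{
				{
					Type: autoscalingv2beta2.PodsMetricSourceType,
					Pods: &autoscalingv2beta2.PodsMetricSource{
						Metric: autoscalingv2beta2.MetricIdentifier{Name: "http_requests_per_second"},
						Target: autoscalingv2beta2.MetricTarget{
							Type:         autoscalingv2beta2.AverageValueMetricType,
							AverageValue: &target,
						},
					},
				},
			},
		},
	}
}
```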

Read more...