One of our goals at Banzai Cloud is to eliminate the concept of nodes, insofar as that is possible, so that users will only be aware of their applications and their respective resource needs (CPU, GPU, memory, network, etc.). Launching Telescopes was a first step in that direction - helping end users select the right instance types for the job through Telescopes' infrastructure recommendations, then turning those recommendations into actual infrastructure with Pipeline.
Read more...
For our Pipeline Platform, observability is an essential part of operating distributed applications in production. We put a great deal of effort into monitoring large and federated clusters, and automating the centralized log collection of these clusters with Pipeline. That way, all our users get out-of-the-box observability for free.

Logging series:
Centralized logging under Kubernetes
Secure logging on Kubernetes with Fluentd and Fluent Bit
Advanced logging on Kubernetes
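The secure logging post in this series revolves around Fluent Bit node agents forwarding to a Fluentd aggregator. As a rough, illustrative sketch of that pattern (the match pattern, host name and TLS settings below are placeholder assumptions, not values from the posts), a minimal Fluent Bit output section might look like this:

```
# Illustrative Fluent Bit forward output: ship everything matched by kube.*
# to a Fluentd aggregator exposed in-cluster as "fluentd" on the standard
# forward port. TLS options shown for the "secure logging" scenario.
[OUTPUT]
    Name          forward
    Match         kube.*
    Host          fluentd
    Port          24224
    tls           On
    tls.verify    On
```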
Read more...
If you are looking to try out an automated way to provision and manage Kafka on Kubernetes, follow the Kafka on Kubernetes the easy way link. At Banzai Cloud we use Kafka extensively internally: we have internal systems and customer reporting deployments that rely heavily on Kafka deployed to Kubernetes. We practice what we preach, and all of these deployments (not just the external ones) are done using our application platform, Pipeline.
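As a hedged sketch of what talking to such a Kafka deployment from Go can look like (the broker address and topic name below are placeholders, not values from our setup), a minimal synchronous producer using the Sarama client might be:

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	// Placeholder address for a Kafka bootstrap service inside the cluster.
	brokers := []string{"kafka.kafka.svc.cluster.local:9092"}

	config := sarama.NewConfig()
	config.Producer.Return.Successes = true // required by SyncProducer

	producer, err := sarama.NewSyncProducer(brokers, config)
	if err != nil {
		log.Fatalf("failed to create producer: %v", err)
	}
	defer producer.Close()

	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "example-topic",
		Value: sarama.StringEncoder("hello from a pod"),
	})
	if err != nil {
		log.Fatalf("failed to send message: %v", err)
	}
	log.Printf("message stored at partition=%d offset=%d", partition, offset)
}
```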
Read more...
At Banzai Cloud we’re always open to experimenting with and integrating new software (tools, products). We also love to validate our new ideas by quickly implementing “proof of concept” projects. Even though we used five or so programming languages while building the Pipeline Platform, we love and use Golang the most. While these PoC projects are not intended for production use, they often serve as the basis for it. When this is the case, the PoC code needs to be refactored - or prepared for production.
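As a small, generic example of the kind of change that refactoring involves (illustrative code, not taken from any of our projects), a PoC-style function that panics on failure can be reworked to return wrapped errors so its callers decide how to handle them:

```go
package config

import (
	"fmt"
	"os"
)

// PoC style: convenient while experimenting, unacceptable in production.
func mustLoad(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err) // crashes the whole process on any failure
	}
	return data
}

// Production style: return the error with context and let the caller decide.
func Load(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("loading config %q: %w", path, err)
	}
	return data, nil
}
```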
Read more...
At Banzai Cloud we place a lot of emphasis on the observability of applications deployed to the Pipeline Platform, which we built on top of Kubernetes. To this end, one of the key components we use is Prometheus. Just to recap, Prometheus is:
an open source systems monitoring and alerting tool
a powerful query language (PromQL)
a pull based metrics gathering system
a simple text format for metrics exposition

Problem statement

Usually, legacy applications are not exactly prepared for those last two, so we need a solution that bridges the gap between Prometheus and systems that do not speak its metrics format: enter exporters.
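To make the exporter idea concrete, here is a minimal, hypothetical exporter written with the official Go client library: it scrapes nothing real, it simply exposes one gauge in the Prometheus text format so a Prometheus server can pull it.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// A single example metric; a real exporter would translate values
	// pulled from the legacy system into metrics like this one.
	queueDepth := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "legacy_queue_depth",
		Help: "Number of items waiting in the legacy system's queue.",
	})
	prometheus.MustRegister(queueDepth)
	queueDepth.Set(42) // placeholder value

	// Prometheus pulls metrics over HTTP in its simple text format.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9100", nil))
}
```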
Read more...
kurun
During the development of the Pipeline Platform, all of its key building blocks, such as Pipeline, Hollowtrees and Bank-Vaults, have relied on making extensive Kubernetes API calls. We often wanted to try a quick K8s API call or run a small PoC inside a cluster while avoiding the usual deployment process. We quickly realized that we needed a shortcut. There are tools like telepresence that support slightly more complex scenarios.
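For context, a "quick K8s API call" of the kind mentioned above is usually only a few lines of client-go; the friction is in getting that code to run against (or inside) a cluster without going through a full deployment cycle. A minimal example, assuming a local kubeconfig at the default location:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig; an in-cluster config would be used
	// instead when the code runs as a pod.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d pods in the default namespace\n", len(pods.Items))
}
```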
Read more...
Creating Kubernetes clusters in the cloud and deploying (or CI/CDing) applications to those clusters is not always simple. There are a few conventional options, but they are either cloud or distribution specific. While we were working on our open source Pipeline Platform, we needed a solution which covered (an inclusive but not exhaustive list of requirements):
provisioning of Kubernetes clusters on all major cloud providers (via REST, UI and CLI) using a unified interface
application lifecycle management (on-demand deploy, CI/CD, dependency management, etc.), preferably over a REST interface
support for multi-tenancy and advanced security scenarios (app-to-app security with dynamic secrets, standards, multi-auth backends, and more)
the ability to build cross-cloud or hybrid Kubernetes environments

This post highlights the ease of creating Kubernetes clusters using the Pipeline API on the following providers:
Read more...
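As a rough sketch of what driving a cluster-creation REST API like Pipeline's looks like (the endpoint path and every field in the payload below are assumptions made for illustration, not the documented Pipeline API schema), the whole operation comes down to a single authenticated POST:

```sh
# Hypothetical request shape; consult the Pipeline API documentation for the real schema.
curl -X POST "https://pipeline.example.com/api/v1/orgs/1/clusters" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "demo-cluster",
        "cloud": "google",
        "location": "europe-west1-b",
        "nodeCount": 3
      }'
```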
At Banzai Cloud we continue to work hard on the Pipeline platform we’re building on Kubernetes. We’ve open sourced quite a few operators already, and recently teamed up with Red Hat and CoreOS to begin work on Kubernetes Operators using the new Operator SDK, and to help move human operational knowledge into code. The purpose of this blog post is to take a deep dive into the PVC Operator.
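For orientation before the deep dive: the objects involved are ordinary PersistentVolumeClaims like the one below (the name, storage class and size are placeholder values), and an operator in this space can then make sure a suitable StorageClass and provisioner exist behind such claims on whichever cloud the cluster runs on.

```yaml
# Placeholder claim: storage class name and size are illustrative only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
```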
Read more...
At Banzai Cloud we’re building a feature-rich platform, Pipeline, on top of Kubernetes. With Pipeline we provision large, multi-tenant Kubernetes clusters on all major cloud providers - AWS, GCP, Azure and BYOC - and deploy all kinds of predefined or ad-hoc workloads to these clusters. We wanted to set the industry standard for the way in which our users log in and interact with secure endpoints, and, at the same time, we wanted to provide dynamic secret management for each application we support.
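Dynamic secret management in practice means an application asks Vault for short-lived credentials at startup instead of carrying static ones around. A minimal sketch with the official Vault Go client (the database/creds/readonly path is an assumed example role, not our configuration):

```go
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// Address and token are taken from VAULT_ADDR / VAULT_TOKEN by default.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask Vault's database secrets engine for a fresh, short-lived credential.
	secret, err := client.Logical().Read("database/creds/readonly")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("username=%v lease=%ds\n", secret.Data["username"], secret.LeaseDuration)
}
```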
Read more...
At Banzai Cloud we run and deploy containerized applications to our PaaS, Pipeline. Java and JVM-based workloads are among the notable workloads deployed to Pipeline, so getting them right is pretty important for us and our users.

Java/JVM based workloads on Kubernetes with Pipeline:
Why my Java application is OOMKilled
Deploying Java Enterprise Edition applications to Kubernetes
A complete guide to Kubernetes Operator SDK
Spark, Zeppelin, Kafka on Kubernetes
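The OOMKilled post in the list above comes down to one recurring issue: the JVM sizing its heap without regard for the container's memory limit. A minimal, illustrative way to align the two (the image name, limits and percentage below are placeholder values) is to cap the pod's memory and let the JVM size its heap as a fraction of that limit:

```yaml
# Illustrative container spec; -XX:MaxRAMPercentage requires JDK 10+ or 8u191+.
containers:
  - name: app
    image: example/app:latest
    resources:
      requests:
        memory: "512Mi"
      limits:
        memory: "1Gi"
    env:
      - name: JAVA_TOOL_OPTIONS
        value: "-XX:MaxRAMPercentage=75.0"
```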
Read more...