
At Banzai Cloud we are passionate about observability, and we put a great deal of effort into making sure we always know what's happening inside our Kubernetes clusters. All clusters provisioned with Pipeline - our multi- and hybrid-cloud container management platform - are provided with, and rely upon, the three pillars of observability: federated monitoring, centralized log collection and traces. To automate log collection on Kubernetes, we open-sourced a logging-operator built on the Fluent ecosystem.
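To give a flavor of what such a collector automates at the node level, here is a minimal, illustrative Go sketch (not taken from the logging-operator or Fluent Bit themselves) that discovers container log files under /var/log/containers and extracts the pod metadata Kubernetes encodes in their file names. The path and naming pattern follow the standard kubelet layout; everything else is an assumption made for the example.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// containerLogMeta holds the pod metadata that Kubernetes encodes in the
// names of the log symlinks it creates under /var/log/containers.
type containerLogMeta struct {
	Pod       string
	Namespace string
	Container string
}

// parseLogName extracts metadata from a file name shaped like
// "<pod>_<namespace>_<container>-<container-id>.log" (kubelet convention).
func parseLogName(name string) (containerLogMeta, error) {
	base := strings.TrimSuffix(filepath.Base(name), ".log")
	parts := strings.SplitN(base, "_", 3)
	if len(parts) != 3 {
		return containerLogMeta{}, fmt.Errorf("unexpected log file name: %s", name)
	}
	// The last segment is "<container>-<container-id>"; drop the id.
	container := parts[2]
	if i := strings.LastIndex(container, "-"); i > 0 {
		container = container[:i]
	}
	return containerLogMeta{Pod: parts[0], Namespace: parts[1], Container: container}, nil
}

func main() {
	// Illustrative only: list the container logs a node-level collector
	// such as Fluent Bit would tail, together with the parsed metadata.
	files, err := filepath.Glob("/var/log/containers/*.log")
	if err != nil {
		panic(err)
	}
	for _, f := range files {
		meta, err := parseLogName(f)
		if err != nil {
			continue
		}
		fmt.Printf("%s -> pod=%s namespace=%s container=%s\n", f, meta.Pod, meta.Namespace, meta.Container)
	}
}
```

In a real cluster this discovery, tailing and metadata enrichment is roughly what the operator-configured Fluent Bit daemonset takes care of, so application code never has to know where its logs end up.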

Read more...

Logs (one of the three pillars of observability, besides metrics and traces) are an indispensable part of any distributed application. Whether we run these applications on Kubernetes or not, logs are one of the best ways to diagnose and verify an application's state. One of the key features of our Kubernetes platform, Pipeline, is to provide out-of-the-box metrics, trace support and log collection. This post highlights some of the behind-the-scenes automation we've constructed in order to achieve this.

Read more...

At Banzai Cloud we put a lot of emphasis on observability, so we automatically provide centralized monitoring and log collection for all clusters and deployments done through Pipeline. Over the last few months we've been experimenting with different approaches - tailored and driven by our customers' individual needs - the best of which are now coded into our open source Logging-Operator. Just to recap, here are our earlier posts about logging with the Fluent ecosystem: Centralized log collection on Kubernetes.

Read more...

In this blog we'll continue our series about Kubernetes logging, and cover some advanced techniques and visualizations pertaining to collected logs. Just to recap, with our open source PaaS, Pipeline, we monitor and collect/move a large number of logs for the distributed applications we push to Kubernetes. We put a lot of effort into monitoring large and federated clusters, and into automating these with Pipeline, so that our users receive out-of-the-box monitoring and log collection for free.

Read more...

As we alluded to in the last post in this series, we'll be continuing our discussion of centralized and secure Kubernetes logging/log collection. Log messages can contain sensitive information, so it's important to secure transport between the distributed parts of the log flow. This post describes how we've secured the movement of log messages on Kubernetes clusters provisioned by Pipeline.
Logging series:
Centralized logging under Kubernetes
Secure logging on Kubernetes with Fluentd and Fluent Bit
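As a rough illustration of what securing that transport involves, the following Go sketch pushes log lines to an aggregator over mutually authenticated TLS - conceptually similar to a Fluent Bit to Fluentd forward connection with TLS enabled, though the real components speak the Fluentd forward protocol rather than plain lines. The host name, port and certificate paths are placeholders for this example.

```go
package main

import (
	"bufio"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"os"
)

func main() {
	// Placeholder paths and address; in a real deployment these
	// certificates would be mounted into the collector pod as secrets.
	caCert, err := os.ReadFile("/etc/fluent/tls/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caCert) {
		log.Fatal("failed to parse CA certificate")
	}

	// Client certificate for mutual TLS, so the aggregator can verify
	// that the sender is a trusted collector.
	clientCert, err := tls.LoadX509KeyPair("/etc/fluent/tls/tls.crt", "/etc/fluent/tls/tls.key")
	if err != nil {
		log.Fatal(err)
	}

	conn, err := tls.Dial("tcp", "fluentd-aggregator:24224", &tls.Config{
		RootCAs:      pool,
		Certificates: []tls.Certificate{clientCert},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Forward stdin line by line over the encrypted channel. A real
	// collector would speak Fluentd's forward protocol instead.
	scanner := bufio.NewScanner(os.Stdin)
	w := bufio.NewWriter(conn)
	for scanner.Scan() {
		fmt.Fprintln(w, scanner.Text())
	}
	w.Flush()
}
```

In practice the client and CA certificates would typically be generated per cluster and distributed to the collector pods as Kubernetes secrets, so that only trusted collectors can reach the aggregator.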

Read more...

For our Pipeline PaaS, monitoring is an essential part of operating distributed applications in production. We put a great deal of effort into monitoring large and federated clusters and automating these with Pipeline, so all our users receive out-of-the-box monitoring for free. You can read the earlier posts in our monitoring series below:
Monitoring series:
Monitoring Apache Spark with Prometheus
Monitoring multiple federated clusters with Prometheus - the secure way
Application monitoring with Prometheus and Pipeline
Building a cloud cost management system on top of Prometheus
Monitoring Spark with Prometheus, reloaded
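As a small, hedged example of the application-monitoring side, the Go sketch below exposes a custom counter on a /metrics endpoint using the official Prometheus Go client, which a Prometheus server (such as one provisioned by Pipeline) could scrape. The metric name, port and handler are made up for this illustration.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal is an example application metric; the name is made up
// for this illustration.
var requestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "myapp_http_requests_total",
		Help: "Total number of HTTP requests handled, by path.",
	},
	[]string{"path"},
)

func main() {
	prometheus.MustRegister(requestsTotal)

	// A trivial handler that increments the counter on every request.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.WithLabelValues(r.URL.Path).Inc()
		w.Write([]byte("ok\n"))
	})

	// Prometheus scrapes this endpoint; in a cluster provisioned by
	// Pipeline a ServiceMonitor or scrape config would point here.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```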

Read more...