If you followed our blog series on autoscaling on Kubernetes, you should already be familiar with the Kubernetes Cluster Autoscaler and with the Vertical Pod Autoscaler used with Java 10 applications. This post shows how to use the Horizontal Pod Autoscaler to autoscale your deployments based on custom metrics obtained from Prometheus. As a deployment example we've chosen our JEE Petstore application running on Wildfly, to show that, beyond the cpu and memory metrics Kubernetes provides by default, our Wildfly Operator automatically puts all Java and Java Enterprise Edition / Wildfly specific metrics at your fingertips in Prometheus, so you can easily autoscale your deployments on them.
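To give a rough idea of what that looks like in practice, here is a minimal Go sketch that builds an autoscaling/v2beta1 HorizontalPodAutoscaler around a Pods-type custom metric and prints it; the deployment name petstore and the metric name wildfly_request_count are placeholders, and a Prometheus metrics adapter still has to expose the metric through the custom metrics API for the HPA controller to see it.

```go
package main

import (
	"encoding/json"
	"fmt"

	autoscalingv2beta1 "k8s.io/api/autoscaling/v2beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	minReplicas := int32(1)

	// An HPA that scales the (placeholder) petstore Deployment on a custom,
	// Prometheus-backed metric instead of plain cpu/memory utilization.
	hpa := &autoscalingv2beta1.HorizontalPodAutoscaler{
		TypeMeta:   metav1.TypeMeta{APIVersion: "autoscaling/v2beta1", Kind: "HorizontalPodAutoscaler"},
		ObjectMeta: metav1.ObjectMeta{Name: "petstore", Namespace: "default"},
		Spec: autoscalingv2beta1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2beta1.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "petstore",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5,
			Metrics: []autoscalingv2beta1.MetricSpec{{
				Type: autoscalingv2beta1.PodsMetricSourceType,
				Pods: &autoscalingv2beta1.PodsMetricSource{
					// Placeholder metric name; it must be served by a metrics
					// adapter (e.g. a Prometheus adapter) via the custom metrics API.
					MetricName:         "wildfly_request_count",
					TargetAverageValue: resource.MustParse("500"),
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(hpa, "", "  ")
	fmt.Println(string(out))
}
```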
Read more...
Jun 25 2018

Hands on Thanos

Here at Banzai Cloud we blog a lot about Prometheus and how to use it. One of the problems we have so far neglected to discuss is the inadequate long-term storage capability of Prometheus. Luckily, a new project called Thanos seeks to address this. If you are not familiar with Prometheus, or are interested in other monitoring related articles, check out our monitoring series, here:

Monitoring series:
- Monitoring Apache Spark with Prometheus
- Monitoring multiple federated clusters with Prometheus - the secure way
- Application monitoring with Prometheus and Pipeline
- Building a cloud cost management system on top of Prometheus
- Monitoring Spark with Prometheus, reloaded
Read more...
When we started to work on our cluster infrastructure recommender, Telescopes, we soon realized how difficult it was to get instance type attributes and pricing information from cloud providers programmatically. While EC2, Google Cloud, and Azure all provide some kind of API from which to query this information, in some cases these APIs respond with partially inconsistent data, or their responses are large chunks of JSON that are very cumbersome to parse.
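To show what working against one of these APIs looks like, here is a small sketch that queries the AWS Pricing API with aws-sdk-go; the instance type and location filter values are arbitrary examples, not what Telescopes itself queries.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pricing"
)

func main() {
	// The Pricing API is only served from a few regions; us-east-1 is one of them.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := pricing.New(sess)

	// Ask for EC2 pricing for a single instance type in a single location.
	out, err := svc.GetProducts(&pricing.GetProductsInput{
		ServiceCode: aws.String("AmazonEC2"),
		Filters: []*pricing.Filter{
			{Type: aws.String("TERM_MATCH"), Field: aws.String("instanceType"), Value: aws.String("m5.xlarge")},
			{Type: aws.String("TERM_MATCH"), Field: aws.String("location"), Value: aws.String("US East (N. Virginia)")},
		},
		MaxResults: aws.Int64(5),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Each PriceList entry is a large, loosely structured JSON document --
	// exactly the kind of payload that is cumbersome to pick apart.
	for _, p := range out.PriceList {
		product, _ := p["product"].(map[string]interface{})
		attrs, _ := product["attributes"].(map[string]interface{})
		fmt.Println(attrs["instanceType"], attrs["vcpu"], attrs["memory"])
	}
}
```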
Read more...
At Banzai Cloud we are building Pipeline, a feature-rich, enterprise-grade application platform for containers on top of Kubernetes. With Pipeline we provision large, multi-tenant Kubernetes clusters on all major cloud providers such as AWS, GCP and Azure, as well as BYOC, on-premise and hybrid environments, and deploy all kinds of predefined or ad-hoc workloads to these clusters. For us and our enterprise users, Kubernetes' built-in secret management (secrets are merely Base64-encoded, not encrypted) was woefully inadequate, so we chose Vault, with native Kubernetes support, to manage our secrets.
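As a minimal illustration of the Vault side, here is a sketch that reads a secret with the official Vault Go client (github.com/hashicorp/vault/api); the secret path, key and static token are hypothetical, and an in-cluster setup would authenticate with the Kubernetes auth method instead of a hard-coded token.

```go
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig picks up VAULT_ADDR and related settings from the environment.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// Placeholder token; in-cluster you would log in via the Kubernetes auth method.
	client.SetToken("s.example-token")

	// Read a secret from a KV v2 engine (path and key are hypothetical).
	secret, err := client.Logical().Read("secret/data/pipeline/database")
	if err != nil {
		log.Fatal(err)
	}
	if secret == nil || secret.Data == nil {
		log.Fatal("secret not found")
	}
	// KV v2 nests the actual key/value pairs under a "data" field.
	data, _ := secret.Data["data"].(map[string]interface{})
	fmt.Println("username:", data["username"])
}
```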
Read more...
This post highlights how the Pipeline Platform enables Managed Service Identity (MSI) and assigns the Storage Account Contributor role to AKS cluster Virtual Machines. But wait, why? At Banzai Cloud we have a PVC Operator, which makes using Kubernetes Persistent Volumes easier on cloud providers by dynamically creating the required accounts and storage classes. That operator allows us to use the same Helm chart on all supported providers, so there is no need to create cloud-specific Helm charts.
Read more...
At Banzai Cloud we provision all kinds of applications to Kubernetes, and we try to autoscale these clusters with Pipeline and/or properly size application resources as needed. As promised in an earlier blog post, How to correctly size containers for Java 10 applications, we'll share our findings on the Vertical Pod Autoscaler (VPA) used with Java 10. VPA sets resource requests on pod containers automatically, based on historical usage, thus ensuring that pods are scheduled onto nodes where appropriate resource amounts are available for each pod.
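As a sketch of what a VPA object looks like and how it can be submitted, here is an example using client-go's dynamic client (0.18+ signatures); the apiVersion must match the VPA release installed on the cluster, and the target Deployment name java-app is a placeholder.

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (in-cluster config works too).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// VPA is installed as a CRD, so we address it by group/version/resource.
	gvr := schema.GroupVersionResource{
		Group: "autoscaling.k8s.io", Version: "v1", Resource: "verticalpodautoscalers",
	}

	vpa := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "autoscaling.k8s.io/v1",
		"kind":       "VerticalPodAutoscaler",
		"metadata":   map[string]interface{}{"name": "java-app-vpa"},
		"spec": map[string]interface{}{
			// The Deployment whose pods should get automatic resource requests.
			"targetRef": map[string]interface{}{
				"apiVersion": "apps/v1",
				"kind":       "Deployment",
				"name":       "java-app",
			},
			// "Auto" lets the VPA updater apply recommendations; "Off" only records them.
			"updatePolicy": map[string]interface{}{"updateMode": "Auto"},
		},
	}}

	if _, err := client.Resource(gvr).Namespace("default").Create(
		context.TODO(), vpa, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```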
Read more...
One of our goals with Pipeline is to support Java and Java Enterprise Edition deployments, allowing developers to iterate fast while building and deploying safely, and pushing code to production. In order to do that, we place a lot of importance on different aspects of a Java/JEE application's lifecycle - we allow engineers:

- To continuously integrate and deploy their Java apps to Kubernetes
- To deploy Java Enterprise Edition applications to Kubernetes
- Once the Java containers are deployed to K8s, to avoid OOMKills
- To correctly size Java containers
- And, once deployments are done and sized, to monitor them without any code modification

Enter Infinispan - a distributed cache and data grid.
Read more...
One of our goals at Banzai Cloud is to eliminate the concept of nodes, insofar as that is possible, so that users are only aware of their applications and their respective resource needs (cpu, gpu, memory, network, etc.). Launching Telescopes was a first step in that direction - helping end users select the right instance types for the job through Telescopes' infrastructure recommendations, then turning those recommendations into actual infrastructure with Pipeline.
Read more...
For our Pipeline Platform, observability is an essential part of operating distributed applications in production. We put a great deal of effort into monitoring large and federated clusters, and automating the centralized log collection of these clusters with Pipeline. That way, all our users get out-of-the-box observability for free.

Logging series:
- Centralized logging under Kubernetes
- Secure logging on Kubernetes with Fluentd and Fluent Bit
- Advanced logging on Kubernetes
Read more...
If you are looking to try out an automated way to provision and manage Kafka on Kubernetes, please follow this Kafka on Kubernetes the easy way link. At Banzai Cloud we use Kafka internally a lot. We have some internal systems and customer reporting deployments where we rely heavily on Kafka deployed to Kubernetes. We practice what we preach and all these deployments (not just the external ones) are done using our application platform, Pipeline.
Read more...