We are excited to announce that Banzai Cloud has joined the Cloud Native Computing Foundation! The CNCF and The Linux Foundation are putting extraordinary effort into standardizing the open source technologies that enable the development, deployment, management and operation of next-generation Cloud Native software stacks. Our mission is to bring Cloud Native to the enterprise, and with this announcement we strive to help push container and cloud native technology standardization and interoperability forward.
Read more...
At Banzai Cloud we are building a feature-rich enterprise-grade application platform, built for containers on top of Kubernetes, called Pipeline. For an enterprise-grade application platform, security is a must, and it has many building blocks. Please read through the Security series on our blog to learn how we deal with a variety of security-related issues. Security series:
Authentication and authorization of Pipeline users with OAuth2 and Vault
Dynamic credentials with Vault using Kubernetes Service Accounts
Dynamic SSH with Vault and Pipeline
Secure Kubernetes Deployments with Vault and Pipeline
Policy enforcement on K8s with Pipeline
The Vault swiss-army knife
The Banzai Cloud Vault Operator
Vault unseal flow with KMS
Read more...
At Banzai Cloud we are building a feature-rich enterprise-grade application platform, built for containers on top of Kubernetes, called Pipeline. With Pipeline we provision large, multi-tenant Kubernetes clusters on all major cloud providers such as AWS, GCP, Azure, Oracle and Alibaba, as well as BYOC, on-premise and hybrid environments, and deploy all kinds of predefined or ad-hoc workloads to these clusters. For us and our enterprise users, Kubernetes' built-in secret management (which only Base64-encodes values) was not sufficient, so we chose Vault and added Kubernetes support to manage our secrets.
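To see why Base64 alone is not enough, here is a minimal sketch in Go (the encoded value is made up for illustration): anyone who can read a Secret object can recover the plaintext with a single decode step, because no key is involved.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// A value as it would appear under `data:` in a Kubernetes Secret manifest.
	// Base64 is an encoding, not encryption: decoding needs no key at all.
	encoded := "c3VwZXItc2VjcmV0LXBhc3N3b3Jk" // hypothetical example value

	plaintext, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(plaintext)) // prints: super-secret-password
}
```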
Read more...
If you followed our blog series on Autoscaling on Kubernetes, you should already be familiar with Kubernetes' Cluster Autoscaler and with the Vertical Pod Autoscaler used with Java 10 applications. This post will show you how to use the Horizontal Pod Autoscaler to autoscale your deployments based on custom metrics obtained from Prometheus. As a deployment example, we've chosen our JEE Petstore application on Wildfly to show that, besides metrics like CPU and memory, which Kubernetes provides by default, our Wildfly Operator automatically puts all Java and Java Enterprise Edition / Wildfly-specific metrics at your fingertips in Prometheus, allowing you to easily autoscale deployments.
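As a rough illustration of the mechanism, here is a minimal sketch in Go, using the k8s.io/api autoscaling/v2beta2 types, of a HorizontalPodAutoscaler driven by a per-pod custom metric. The metric name, target value and deployment name are hypothetical, and the sketch assumes a Prometheus metrics adapter is installed to serve the custom metrics API.

```go
package main

import (
	"encoding/json"
	"fmt"

	autoscalingv2beta2 "k8s.io/api/autoscaling/v2beta2"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	minReplicas := int32(2)
	// Hypothetical target: scale so that each pod serves ~500 requests per second on average.
	targetPerPod := resource.MustParse("500")

	hpa := &autoscalingv2beta2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "petstore", Namespace: "default"},
		Spec: autoscalingv2beta2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2beta2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "petstore",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 10,
			Metrics: []autoscalingv2beta2.MetricSpec{{
				Type: autoscalingv2beta2.PodsMetricSourceType,
				Pods: &autoscalingv2beta2.PodsMetricSource{
					// Hypothetical custom metric exposed by the app and scraped by Prometheus.
					Metric: autoscalingv2beta2.MetricIdentifier{Name: "http_requests_per_second"},
					Target: autoscalingv2beta2.MetricTarget{
						Type:         autoscalingv2beta2.AverageValueMetricType,
						AverageValue: &targetPerPod,
					},
				},
			}},
		},
	}

	// Print the object; in practice it would be written as YAML and applied with kubectl,
	// or created through client-go.
	out, _ := json.MarshalIndent(hpa, "", "  ")
	fmt.Println(string(out))
}
```

The interesting part is the Pods metric source: instead of CPU or memory utilization, the HPA scales on the average value of a custom metric reported per pod.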
Read more...
Jun 25 2018

Hands on Thanos

Here at Banzai Cloud we blog a lot about Prometheus and how to use it. One of the problems we have so far neglected to discuss is Prometheus' inadequate long-term storage capability. Luckily, a new project called Thanos seeks to address this. If you are not familiar with Prometheus, or are interested in other monitoring-related articles, check out our monitoring series here. Monitoring series:
Monitoring Apache Spark with Prometheus
Monitoring multiple federated clusters with Prometheus - the secure way
Application monitoring with Prometheus and Pipeline
Building a cloud cost management system on top of Prometheus
Monitoring Spark with Prometheus, reloaded
Read more...
When we started to work on our cluster infrastructure recommender, Telescopes, we soon realized how difficult it was to get instance type attributes and pricing information from cloud providers programmatically. While EC2, Google Cloud and Azure all provide some kind of API from which to query this information, in some cases these APIs respond with partially inconsistent data, or their responses are large chunks of JSON that are very cumbersome to parse.
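For a taste of what that parsing looks like in practice, here is a minimal sketch using the AWS SDK for Go's Pricing API; the region, credentials and filter values are assumptions for illustration, and each returned price-list entry is a deeply nested JSON document that still has to be walked by hand.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pricing"
)

func main() {
	// The Pricing API is only served from a handful of regions; us-east-1 is one of them.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := pricing.New(sess)

	// Ask for the price-list entries of a single EC2 instance type in one location.
	out, err := svc.GetProducts(&pricing.GetProductsInput{
		ServiceCode: aws.String("AmazonEC2"),
		Filters: []*pricing.Filter{
			{Type: aws.String("TERM_MATCH"), Field: aws.String("instanceType"), Value: aws.String("m5.xlarge")},
			{Type: aws.String("TERM_MATCH"), Field: aws.String("location"), Value: aws.String("US East (N. Virginia)")},
		},
		MaxResults: aws.Int64(10),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Each entry nests product attributes, on-demand terms, reserved terms and so on;
	// extracting vCPU, memory and the actual price means digging through the structure.
	for _, item := range out.PriceList {
		attrs := item["product"].(map[string]interface{})["attributes"].(map[string]interface{})
		fmt.Println(attrs["instanceType"], attrs["vcpu"], attrs["memory"])
	}
}
```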
Read more...
At Banzai Cloud we are building a feature-rich enterprise-grade application platform, built for containers on top of Kubernetes, called Pipeline. With Pipeline we provision large, multi-tenant Kubernetes clusters on all major cloud providers such as AWS, GCP and Azure, as well as BYOC, on-premise and hybrid environments, and deploy all kinds of predefined or ad-hoc workloads to these clusters. For us and our enterprise users, Kubernetes secret management (Base64) was woefully inadequate, so we chose Vault with native Kubernetes support to manage our secrets.
Read more...
This post highlights how the Pipeline Platform enables Managed Service Identity (MSI) and assigns the Storage Account Contributor role to AKS cluster Virtual Machines. But wait, why? At Banzai Cloud we have a PVC Operator, which makes using Kubernetes Persistent Volumes easier on cloud providers by dynamically creating the required accounts and storage classes. That operator allows us to use the same Helm chart on all supported providers, so there is no need to create cloud-specific Helm charts.
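As a small sketch of why this matters, the claim below (names are hypothetical) contains nothing cloud specific - only a storage class name that the operator is assumed to have created on whichever provider the cluster runs on, which is what lets a single Helm chart work everywhere.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical storage class name; the operator is assumed to back it with the
	// appropriate provisioner (Azure Disk, EBS, GCE PD, ...) on each provider.
	storageClass := "standard"

	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "app-data", Namespace: "default"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &storageClass,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("10Gi"),
				},
			},
		},
	}

	out, _ := json.MarshalIndent(pvc, "", "  ")
	fmt.Println(string(out))
}
```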
Read more...
At Banzai Cloud we provision all kinds of applications to Kubernetes, and we try to autoscale these clusters with Pipeline and/or properly size application resources as needed. As promised in an earlier blog post, How to correctly size containers for Java 10 applications, we'll share our findings on the Vertical Pod Autoscaler (VPA) used with Java 10. VPA sets resource requests on pod containers automatically, based on historical usage, thus ensuring that pods are scheduled onto nodes where the appropriate amount of resources is available for each pod.
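For reference, here is a minimal sketch of what such a VPA object can look like, built as an unstructured CRD instance because the exact apiVersion depends on the autoscaler release you run; the target deployment name is hypothetical.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

func main() {
	// The VPA is a CRD, so we sketch it as an unstructured object rather than with
	// generated client types; check your autoscaler release for the right apiVersion.
	vpa := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "autoscaling.k8s.io/v1",
		"kind":       "VerticalPodAutoscaler",
		"metadata":   map[string]interface{}{"name": "java-app-vpa", "namespace": "default"},
		"spec": map[string]interface{}{
			// Hypothetical deployment name: the workload whose containers the VPA right-sizes.
			"targetRef": map[string]interface{}{
				"apiVersion": "apps/v1",
				"kind":       "Deployment",
				"name":       "java-app",
			},
			// "Auto" lets the VPA apply its recommendations by evicting and recreating pods;
			// "Off" only records recommendations without acting on them.
			"updatePolicy": map[string]interface{}{"updateMode": "Auto"},
		},
	}}

	out, _ := yaml.Marshal(vpa.Object)
	fmt.Println(string(out))
}
```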
Read more...
One of our goals with Pipeline is to support Java and Java Enterprise Edition deployments, allowing developers to iterate fast while building and deploying safely, and to push code to production. In order to do that, we place a lot of importance on different aspects of a Java/JEE application's lifecycle - we allow engineers:
to continuously integrate and deploy their Java apps to Kubernetes
to deploy Java Enterprise Edition applications to Kubernetes
to avoid OOMKills once the Java containers are deployed to K8s
to correctly size Java containers
and, once deployments are done and sized, to monitor them without any code modification
Enter Infinispan - a distributed cache and data grid.
Read more...