
Sandor Magyari

Thu, May 10, 2018

Deploying Java Enterprise Edition applications to Kubernetes

Some years ago, back at the beginning of this century, most of us here at Banzai Cloud were in the Java Enterprise business, building application servers (BEA WebLogic and JBoss) and lots of JEE applications. Those days are gone: the technology stack and landscape have changed dramatically, and monolithic applications are out of fashion, yet many of them are still running in production. Because of our background, we have a kind of personal commitment to help move Java Enterprise Edition business applications towards microservices, managed deployments, Kubernetes and the cloud using Pipeline.

We have a full, automated migration path for JEE (and, in general, monolithic Java) applications, which we will be blogging about and open sourcing, so stay tuned. This post covers the first and easiest step - a transition which, surprisingly, fits many applications.

So, the first step in the journey of converting J2EE/JEE legacy applications into a suite of microservices is usually to package the whole application server and JEE application within a container, then run multiple instances of that container in the cloud or on Kubernetes. The advantage is that no - or at least very few - code changes are required, and, as you will see below with our example application, you can quickly have a cluster of servers running your application while providing familiar JEE services like session replication, EJB, Java Messaging and so on.

These Java containers need to be properly sized

As an application server, we picked the WildFly App Server, a quite popular open source JEE server that offers the full stack of JEE services and is also modular. You can start servers in standalone mode, they discover each other, and then you can let Kubernetes resize your cluster based on different metrics and resource requests. Note that this practice should work with other application servers as well, provided they offer some discovery mechanism.

WildFly, formerly known as JBoss AS, or simply JBoss, is an open source application server authored by JBoss, now developed by Red Hat.

JEE Wildfly deployment


  • Our goal is to demonstrate that, with no or only minor code and descriptor changes, you can easily port a standard JEE application to a Kubernetes cluster. For this purpose we picked the Petstore Application for Java EE 7, which covers the JEE specification, and we want to deploy it to several WildFly App Servers running in distributed mode on Kubernetes. For the deployment we'll use a Kubernetes operator for WildFly, which gives users the ability to create and manage WildFly applications just like built-in Kubernetes resources. We need to add some additional resources to the default WildFly images to support distributed mode on Kubernetes, and make some minor modifications to the Petstore app.

  • We open sourced a WildFly operator

  • We are wiring Pipeline’s CI/CD component to automatically build a Kubernetes-ready JEE app from code, similar to the already supported Spark, Zeppelin, Spring Boot, NodeJS and many other frameworks

  • We strive to size these Java containers right

Wildfly Docker image

We want to start the WildFly application server in standalone mode, but we also want session replication, Infinispan caches, the EJB layer and JMS to work, so we need to form an HA cluster. The foundation of HA clustering in WildFly is JGroups. By default, cluster member discovery is based on the multicast protocol, which is not really available in the cloud. Fortunately, KUBE_PING, a Kubernetes discovery protocol for JGroups, can help us here: it discovers nodes by asking the Kubernetes API for the list of IP addresses of selected cluster Pods, using label and namespace selectors. Nodes join together via JGroups configured with the KUBE_PING protocol, which finds each Pod running a WildFly server. KUBE_PING is not included in the default WildFly image, so we have added it and configured our default standalone.xml to use it as the basis for clustering.
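To illustrate, a JGroups stack using KUBE_PING in standalone.xml could look roughly like the sketch below. This is not our exact configuration: the subsystem namespace version, the full protocol list (abridged here) and the property names depend on the WildFly and jgroups-kubernetes versions in use, and the namespace/label values are placeholders.

```xml
<subsystem xmlns="urn:jboss:domain:jgroups:5.0">
    <channels default="ee">
        <channel name="ee" stack="tcp"/>
    </channels>
    <stacks>
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <!-- Replaces multicast (MPING) discovery: asks the Kubernetes API
                 for the IPs of Pods matching the namespace and label selector -->
            <protocol type="kubernetes.KUBE_PING">
                <property name="namespace">default</property>
                <property name="labels">app=my-label</property>
            </protocol>
            <!-- remaining cluster protocols (failure detection, reliable
                 delivery, group membership) omitted for brevity -->
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK"/>
            <protocol type="pbcast.GMS"/>
        </stack>
    </stacks>
</subsystem>
```

Because KUBE_PING queries the Kubernetes API, the Pods also need RBAC permission to list Pods in their namespace.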

Modifications required to Petstore app

To achieve session replication across nodes, a small temporary fix to LoggingInterceptor was necessary, and we added a distributable tag to web.xml to enable clustering of the web application; other than that, we only added the resources necessary for building the WildFly image. You can check out our modifications in the master-k8s branch of our fork in the petstore-ee7-kubernetes GitHub repository.
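The distributable tag itself is a one-liner; with it, the servlet container stores HTTP sessions in the replicated cache instead of local memory, so a session survives the loss of the Pod that created it:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         version="3.1">
    <!-- marks the web app as safe for session replication across cluster nodes -->
    <distributable/>
</web-app>
```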

Wildfly operator

The WildFly operator lets you describe and deploy your JEE application on WildFly by creating a WildflyAppServer custom resource in Kubernetes. Once you deploy a WildflyAppServer resource, the operator starts a WildFly server cluster with the given number of nodes running in standalone mode from the provided container image. The JEE application to deploy must be present in the image under the /opt/jboss/wildfly/standalone/deployments folder. The operator also starts a load balancer service, so that the WildFly Management Console and the deployed web application are reachable from outside. You can use the default standalone Full HA configuration for Kubernetes, or provide your own in a ConfigMap; in that case you have to specify the name of the ConfigMap and the key containing the standalone.xml configuration. The default WildFly username/password is admin/wildfly; you can set your own in a Kubernetes Secret, which should have the same name as your WildflyAppServer resource. The WildFly operator is built on the new Kubernetes Operator SDK; you can find more details on it here.
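Overriding the default credentials could look something like the sketch below. Note that the key names inside the Secret (`username` / `password` here) are assumptions for illustration - check the operator's README for the keys it actually expects; only the requirement that the Secret's name matches the WildflyAppServer resource comes from above.

```yaml
apiVersion: v1
kind: Secret
metadata:
  # must match the name of your WildflyAppServer resource
  name: wildfly-example
type: Opaque
stringData:
  # key names are illustrative -- see the operator docs for the expected keys
  username: admin
  password: my-strong-password
```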

Steps to start up WildflyAppServer nodes on Kubernetes

  1. Create cluster

    You can use your Minikube cluster, or quickly provision a cluster in the cloud (AWS / Azure / Google Cloud) with Pipeline. After your cluster is created, don’t forget to set KUBECONFIG so that you can use helm in the following steps.
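For example (the path below is illustrative - Pipeline lets you download the cluster's kubeconfig file, and Minikube writes its context into the default location):

```shell
# Point kubectl and helm at the freshly created cluster's config file
export KUBECONFIG="$HOME/.kube/config"
echo "Using kubeconfig: $KUBECONFIG"
```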

  2. Install the MariaDB chart

    The Banzai Cloud Helm repository contains a ready-to-use chart for deploying MariaDB. If you are using Minikube, you have to add this repo first:

          helm repo add banzaicloud-stable
          helm install -n demo --set Release.Name=demo --set mariadbUser=petstore --set mariadbPassword=petstore --set mariadbDatabase=petstore banzaicloud-stable/mariadb

  3. Download Wildfly operator definitions and deploy:

          kubectl create -f rbac.yaml
          kubectl create -f operator.yaml
  4. Deploy the Petstore application

    This time we go with the default configuration and the default user/password; we only need to pass the MariaDB connection properties in dataSourceConfig. The resource definition below starts two instances of WildflyAppServer in standalone mode, with the Petstore application deployed.

          cat > example-app.yaml <<EOF
          apiVersion: ""
          kind: "WildflyAppServer"
          metadata:
            name: "wildfly-example"
            labels:
              app: my-label
          spec:
            nodeCount: 2
            image: "banzaicloud/wildfly-jee-petstore:0.1.6"
            applicationPath: "applicationPetstore"
            labels:
              app: my-label
            dataSourceConfig:
              hostName: "demo-mariadb"
              databaseName: "petstore"
              jdniName: "java:jboss/datasources/ExampleDS"
              user: "petstore"
              password: "petstore"
          EOF
          kubectl apply -f example-app.yaml

  5. Get the Management Console and application URLs

          kubectl describe WildflyAppServer

    The External Addresses section of the output lists where the Management Console and the Petstore application are reachable from outside.

Now you can resize your WildFly cluster by updating spec.nodeCount with kubectl edit WildflyAppServer wildfly-example; you can also manually delete Pods, meanwhile checking that your Petstore application is still available and the HTTP session is still alive - for example, that you are still logged in.

Our next goal is full spotguide support for standard JEE applications, enabling the deployment of these applications with minimal code changes, with logging, monitoring and autoscaling out of the box using Pipeline, so stay tuned. This would be a first step towards converting your enterprise applications to a microservices-oriented architecture in a semi-automated way, where you would get automatic recommendations on how to split your monolith into several smaller services.

If you are interested, follow our next series on GitHub, LinkedIn or Twitter:


