Mateus Müller

Spreading knowledge :)

07 Aug 2020

How to set up JBoss in Domain mode with mod_jk for load balancing

This tutorial is based on this post from the Red Hat blog, which I highly recommend reading.

Environment

First of all, this lab is for study purposes only: an all-in-one configuration with 3 JBoss instances (master, slave1 and slave2) plus httpd with mod_jk for load balancing.

I am using an EC2 instance with 8GB of RAM and 4 vCPUs with CentOS 7 OS. You can use a local VM too.

Prerequisites

Install some requirements for compilation and Java to run JBoss:

$ sudo yum install httpd-devel kernel-devel kernel-headers gcc java-11-openjdk -y

Cool, now we need to install mod_jk, but it is not available from the yum repositories. The mod_jk module is part of the Tomcat Connectors package, and there is a pretty straightforward tutorial you can follow here.

Download the Tomcat Connectors here (the tar.gz one)

Follow the steps to install:

$ tar xvzf tomcat-connectors-1.2.48-src.tar.gz
$ cd tomcat-connectors-1.2.48-src/native/
$ LDFLAGS=-lc ./configure --with-apxs=/bin/apxs
$ make
$ sudo cp ./apache-2.0/mod_jk.so /etc/httpd/modules

If “apxs” is not found at that location, run “which apxs” to find the proper directory and adjust the path accordingly.
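A small sketch of that lookup: locate apxs dynamically and feed the full path to configure (the /usr/bin/apxs fallback is just a common default; yours may differ):

```shell
# Locate apxs and pass its full path to configure
APXS="$(command -v apxs || echo /usr/bin/apxs)"
echo "using apxs at: $APXS"
# Then re-run: LDFLAGS=-lc ./configure --with-apxs="$APXS"
```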

For this lab I would recommend disabling SELinux (for testing purposes only), as you might otherwise face issues when binding to different ports.

$ sudo setenforce 0

If you are running CentOS, make sure you open the proper ports in firewalld (e.g. sudo firewall-cmd --permanent --add-port=9990/tcp && sudo firewall-cmd --reload) or simply disable firewalld for this lab.

$ sudo systemctl stop firewalld

This lab has been tested with JBoss Wildfly 20.0.1.Final.

What exactly do we want?

This is a fairly simple configuration to demonstrate how a JBoss cluster works: the first instance will be our domain master, and the other two will connect to it for centralized management.

Each slave host will run two servers: one with the full profile and another with the full-ha profile.

There will be two server groups: main-server-group and other-server-group.

The main-server-group will have server-five (from slave2) and server-two (from slave1).

The other-server-group will have server-six (from slave2) and server-three (from slave1).

What is important to understand here is that only the other-server-group will be able to balance requests between the servers, as its members use the full-ha profile, which enables the AJP protocol.

You can change the server names and server group names to whatever you want to, I am just keeping the default ones because I am lazy.

Initial Domain Controller configuration

We will start by configuring the master.

Download the Wildfly here. Extract it into /opt and rename the extracted folder to master:

$ sudo wget -v https://download.jboss.org/wildfly/20.0.1.Final/wildfly-20.0.1.Final.tar.gz -O /opt/wildfly.tar.gz
$ sudo tar xvzf /opt/wildfly.tar.gz -C /opt && sudo mv /opt/wildfly-* /opt/master

The first thing to do is to set the admin password, as we do not know the default one, right?

$ cd /opt/master/bin
$ sudo ./add-user.sh
  • a) select “Management User”
  • b) type the user “admin”
  • c) select “Update the existing user password and roles”
  • d) type the new password
  • e) for “What groups do you want this user to belong to?” select the option “None”
  • f) for “Is this new user going to be used for one AS process to connect to another AS process?” select the option “No”

Cool, now at least you know the password.

Each of the slaves will have a different directory with its own configuration files, so create them:

$ sudo cp -rv /opt/master/domain /opt/slave1
$ sudo cp -rv /opt/master/domain /opt/slave2

Now you have the three directories master, slave1 and slave2 completely isolated.

We will start by configuring the domain controller.

** I assume you already know how to use a text editor like Vim or Nano.

By default, JBoss binds only to the loopback address and not externally, so you cannot access the HTTP management interface through your browser. To change that, replace every occurrence of 127.0.0.1 with the IP address of your external interface. In my case this is 172.31.125.224.

Edit /opt/master/domain/configuration/host.xml to change this.

You can do it either with a text editor or using sed. I am using sed as there is more than one occurrence.

$ sudo sed -i 's/127\.0\.0\.1/172.31.125.224/g' /opt/master/domain/configuration/host.xml
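If you want to sanity-check the substitution before touching host.xml, here is the same sed run against a throwaway file containing lines like the real ones:

```shell
# Demonstrate the substitution on a temporary file first
tmp=$(mktemp)
printf '<inet-address value="${jboss.bind.address.management:127.0.0.1}"/>\n' > "$tmp"
printf '<inet-address value="${jboss.bind.address:127.0.0.1}"/>\n' >> "$tmp"
sed -i 's/127\.0\.0\.1/172.31.125.224/g' "$tmp"
grep -c '172.31.125.224' "$tmp"   # both lines were rewritten, so this prints 2
rm -f "$tmp"
```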

Secondly, find this block of code in the same file - now I am using Vim!

<management-interfaces>
    <http-interface security-realm="ManagementRealm">
        <http-upgrade enabled="true"/>
        <socket interface="management" port="${jboss.management.http.port:9990}"/>
    </http-interface>
</management-interfaces>

Notice that only the management port (9990) is configured, which is the one you will use to access the web interface through your browser.

You have to add the native interface, whose port is used by the slaves to connect to the master node.

So the result will be something like this:

<management-interfaces>
    <http-interface security-realm="ManagementRealm">
        <http-upgrade enabled="true"/>
        <socket interface="management" port="${jboss.management.http.port:9990}"/>
    </http-interface>
    <native-interface security-realm="ManagementRealm">
        <socket interface="management" port="${jboss.management.native.port:9999}"/>
    </native-interface>
</management-interfaces>

Thirdly, find the servers section and delete it entirely. I decided to keep the master without any servers, acting purely as a domain controller. The servers will come from our slaves.

Delete this:

<servers>
    <server name="server-one" group="main-server-group"/>
    <server name="server-two" group="main-server-group" auto-start="true">
        <jvm name="default"/>
        <socket-bindings port-offset="150"/>
    </server>
    <server name="server-three" group="other-server-group" auto-start="false">
        <jvm name="default"/>
        <socket-bindings port-offset="250"/>
    </server>
</servers>

Cool, now you can start the domain controller by calling the domain.sh script - I am appending “&” to run it as a background process.

$ sudo /opt/master/bin/domain.sh &

The web interface should now be accessible at http://ipaddress:9990. In my case I am using the public IP of the EC2 instance at Amazon. If you do the same, make sure no security group is dropping the connection.

If you face a timeout error, make sure the port “9990” is open on the firewall too.

Slaves Configuration

Now we need to change a few things on the slaves so they connect to the master. Remember that all of them run on the same host, so the IP address is always the same.

Make sure you have run the same sed command on the slaves:

$ sudo sed -i 's/127\.0\.0\.1/172\.31\.125\.224/g' /opt/slave1/configuration/host.xml
$ sudo sed -i 's/127\.0\.0\.1/172\.31\.125\.224/g' /opt/slave2/configuration/host.xml

Now pay attention: I am going to describe steps that you must execute on both slave1 and slave2, as the configuration is exactly the same. You just have to change some port numbers and the host name so they do not clash.

Start by editing the host.xml from slave1:

$ sudo vim /opt/slave1/configuration/host.xml

Change the name accordingly.

From:

<host xmlns="urn:jboss:domain:13.0" name="master">

To something like:

<host xmlns="urn:jboss:domain:13.0" name="slave1">

Do the same for slave2, but of course use the “slave2” name instead of slave1.

Find the domain-controller block and change from:

<domain-controller>
    <local/>
</domain-controller>

To something like:

<domain-controller>
   <remote host="${jboss.domain.master.address:172.31.125.224}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>

The configuration above ensures our slaves will connect to the domain controller at 172.31.125.224 on native port 9999 to form the cluster. If you are configuring separate VMs, this must be the IP address of the domain controller. Again, do the same for slave2.

All configuration management will be done from the domain controller’s HTTP interface (9990), so we do not need the HTTP management interface on the slave nodes, right? Find the management-interfaces block and remove it, keeping only the native interface.

From:

<management-interfaces>
    <http-interface security-realm="ManagementRealm">
        <http-upgrade enabled="true"/>
        <socket interface="management" port="${jboss.management.http.port:9990}"/>
    </http-interface>
</management-interfaces>

To:

<management-interfaces>
    <native-interface security-realm="ManagementRealm">
        <socket interface="management" port="${jboss.management.native.port:19999}"/>
    </native-interface>
</management-interfaces>

The slave1 should use port 19999 and slave2 should use 29999 so they do not bind to the same port.

On both slaves, find the servers section and remove server-one, as it is not useful for our lab:

<server name="server-one" group="main-server-group"/>

Cool, now we just have to setup the servers.

Keep the slave1 as it is.

On slave2 we will change the server names. Why? Well, we copied the same configuration to slave1 and slave2, so the servers have exactly the same names, and this causes a duplicate-name issue - they will not even start with duplicated names.

So on slave2, change the server-two to server-five and server-three to server-six.

$ sudo sed -i 's/server-two/server-five/g' /opt/slave2/configuration/host.xml
$ sudo sed -i 's/server-three/server-six/g' /opt/slave2/configuration/host.xml
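To confirm the renames left no clashes, you can extract the server names from both host.xml files and look for duplicates. A self-contained sketch of that check (throwaway files stand in for the two real host.xml files):

```shell
# Any line printed by uniq -d is a server name used by both hosts
a=$(mktemp); b=$(mktemp)
printf '<server name="server-two"/>\n<server name="server-three"/>\n' > "$a"
printf '<server name="server-five"/>\n<server name="server-six"/>\n' > "$b"
grep -ho 'name="[^"]*"' "$a" "$b" | sort | uniq -d   # prints nothing: no clashes
rm -f "$a" "$b"
```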

There is something else… the offset. Every port receives an offset to distribute port numbers. For example, if the offset is set to 150 and the base port is 8000, the effective port becomes 8150. If both slaves use the same offsets, they will bind to exactly the same ports and we will face conflicts.
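The arithmetic is simply base port plus offset; for instance, with the default HTTP port 8080 and the four offsets used in this lab:

```shell
# Effective port = base port + offset
base=8080
for offset in 150 250 350 450; do
  echo "offset $offset -> http port $((base + offset))"
done
# -> 8230, 8330, 8430 and 8530: four servers, no conflicts
```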

Thus, edit host.xml again on slave2 and change the offsets: from 150 to 350, and from 250 to 450.

<socket-bindings port-offset="250"/>

After that, no more conflicts. We can start both servers.

$ sudo /opt/master/bin/domain.sh -Djboss.domain.base.dir=/opt/slave1 &
$ sudo /opt/master/bin/domain.sh -Djboss.domain.base.dir=/opt/slave2 &

Now take a look at the web interface under Runtime -> Hosts. You should see the master and the two slaves up and running.

Please do notice that the servers from other-server-group are set to auto-start="false", so they will not start automatically. Go to Runtime -> Server Groups -> other-server-group, click the dropdown and select Start.

Deploy

Nice, now we need to deploy a sample application for testing purposes.

Let’s try it from the command line!

I got my sample application from here.

$ sudo wget -v https://github.com/AKSarav/SampleWebApp/raw/master/dist/SampleWebApp.war
$ sudo /opt/master/bin/jboss-cli.sh
[disconnected /] connect 172.31.125.224
[domain@172.31.125.224:9990 /] deploy /root/SampleWebApp.war --server-groups=other-server-group

Make sure you point to the correct file system location of SampleWebApp.war. Also notice we are deploying the application to other-server-group, which automatically distributes it to server-six and server-three, where the full-ha profile is applied, so load balancing can work.

mod_jk stuff

So it is time to set up our load balancing.

I strongly suggest going through this blog too; it shows exactly how to set up mod_jk with JBoss.

Inside /etc/httpd/conf, create a file called workers.properties with this content:

worker.list=loadbalancer,status

# Define Node1
# modify the host as your host IP or DNS name.
worker.node1.port=8259
worker.node1.host=172.31.125.224
worker.node1.type=ajp13
worker.node1.ping_mode=A
worker.node1.lbfactor=1

# Define Node2
# modify the host as your host IP or DNS name.
worker.node2.port=8459
worker.node2.host=172.31.125.224
worker.node2.type=ajp13
worker.node2.ping_mode=A
worker.node2.lbfactor=1

# Load-balancing behavior
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1

# Status worker for managing load balancer
worker.status.type=status

Pay attention to worker.node1.host, which must be the IP address of the VM, and worker.node1.port, which must be the AJP port of a server with the full-ha profile. You can find this information on the web interface.

Find the AJP port under Runtime -> Host -> Server -> Open Ports. Each server will have one (if it uses the full-ha profile).
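Those worker ports also follow the offset rule: the default AJP base port is 8009, so adding each full-ha server’s port-offset reproduces the values used above:

```shell
# AJP port = 8009 (default base) + port-offset
for offset in 250 450; do   # server-three and server-six offsets
  echo "AJP port: $((8009 + offset))"
done
# -> 8259 and 8459, the worker.node1.port / worker.node2.port values
```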

Secondly, inside /etc/httpd/conf.d, create a modjk.conf with the following content:

LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel debug
JkMount /* loadbalancer
JkShmFile logs/jk.shm

Restart the Apache web server.

$ sudo systemctl restart httpd

httpd will now pass every request to the load balancer, which distributes it across the AJP ports.

You can try it out by opening http://ipaddress/SampleWebApp.

I hope you enjoyed this lab, please leave a comment below. :)
