High-availability deployment
This topic describes a recommended deployment scenario for a high-availability (HA) setup. It covers the deployment architecture, considerations, and configuration for setting up the BMC Release Process Management (BRPM) product in HA deployment mode.
For information about the necessary hardware configuration, see Performance-tuning.
BRPM 5.0 uses JBoss as an application server. With JBoss, you can run the BRPM server in standalone mode or in standalone high-availability mode, which supports running several application server nodes combined into one cluster. With these features, you can add more server hardware to handle your requests, use your database server more efficiently, and make your BRPM application more fail-safe.
The recommended cluster configuration uses multicast networking. Before you begin the cluster configuration, ensure that the load balancer server and all nodes are accessible and located in the same common network.
After you configure the nodes, use the URL and port of the load balancer to access BRPM. For example, if the load balancer uses the IP address 192.168.10.122 and port 6666, use the following address in your browser to access BRPM: http://192.168.10.122:6666/brpm.
Deployment architecture for a high-availability setup
The following figure outlines the deployment architecture for an HA setup of BRPM.
Figure: Deployment architecture for an HA setup
Deployment considerations
Keep in mind the following considerations while you set up the product in HA deployment mode:
- To achieve high availability, deploy BRPM on separate servers depending on the expected load.
- Use a central database for the high-availability deployment. For information about databases supported by BRPM, see System-requirements.
Install or upgrade the central database only one time during the first product installation. To avoid any possible data loss, you can do one of the following:
- Use a new empty database for other BRPM installations. After the product installation is complete, you can optionally delete this new database and point the BRPM server to your primary database.
- On the data migration page of the BRPM installer, select the Skip database modification check box.
For more information about BRPM installation, see Preparing-for-installation and Performing-the-installation.
- Install and configure a load balancer, such as HAProxy or the httpd-based mod_cluster, on any computer in your network, including BRPM nodes.
- Complete the necessary network configuration on each node of the cluster.
- Create a shared folder on any computer in your network and configure the central file store location on each node so that all processors have access to the same automation scripts and you can manage automation processes and the queue (see the example mount commands after this list).
- If you need to access BRPM from a public network, expose the HTTPS port of the load-balanced IP for both the web interface and the REST API outside the firewall.
- Do not expose the Java Message Service (JMS) outside the firewall because it is used only by internal systems.
- BMC recommends that you configure a local SMTP (Simple Mail Transfer Protocol) server for BRPM. A local server can improve the response time of requests and provides a fallback if the actual SMTP server is temporarily inaccessible or responds slowly.
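As an illustration of the shared-folder consideration above, the following commands mount an NFS export on a node. The server name (nfs-server), export path (/exports/brpm), and mount point are hypothetical placeholders; a Samba share (as in the automation_settings.rb example later in this topic) works equally well, as long as every node mounts the same location.
# Create the mount point and mount the shared NFS export on this node
sudo mkdir -p /mnt/brpm_shared
sudo mount -t nfs nfs-server:/exports/brpm /mnt/brpm_shared
Run the same mount on every node so that all nodes see the same files; this shared location is later used as the OUTPUT_BASE_PATH value in automation_settings.rb.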
Configuring HAProxy as a load balancer on Red Hat Enterprise Linux (RHEL)
Before you begin, ensure that the following packages are installed:
- gcc
- openssl
- openssl-devel
If they are not installed, use the following command to install them:
sudo yum install gcc openssl openssl-devel
To configure HAProxy as a load balancer
- Download a stable version of HAProxy (version 2.0.12 in the following command).
wget http://www.haproxy.org/download/2.0/src/haproxy-2.0.12.tar.gz
- Extract the downloaded file.
tar -xzvf haproxy-2.0.12.tar.gz
- Go to the haproxy-2.0.12 directory.
cd haproxy-2.0.12
- Compile HAProxy with SSL support.
make TARGET=linux-glibc USE_OPENSSL=1 USE_ZLIB=1 USE_CRYPT_H=1 USE_LIBCRYPT=1 USE_NS=
- Install HAProxy.
sudo make install
- Add the following directories and the statistics file for the HAProxy records.
sudo mkdir -p /etc/haproxy
sudo mkdir -p /var/lib/haproxy
sudo touch /var/lib/haproxy/stats
- Create a symbolic link for the binary so that you can run HAProxy commands as a normal user:
sudo ln -s /usr/local/sbin/haproxy /usr/sbin/haproxy
- Add HAProxy as a service to the system:
sudo cp examples/haproxy.init /etc/init.d/haproxy
sudo chmod 755 /etc/init.d/haproxy
- Create the /etc/haproxy/haproxy.cfg configuration file:
vi /etc/haproxy/haproxy.cfg
- Add the contents to the haproxy.cfg file (a sample file is shown after these steps).
- Update the file for the SSL certificate as follows. You can use an existing SSL certificate or create one as described in To create an SSL certificate.
- Locate the frontend http_front section.
- In the following line, replace filePath and pemFileName with the SSL certificate path and name.
bind *:443 ssl crt /<filePath>/<pemFileName>
- Add servers to the load balancer, as follows:
Locate the backend http_back section.
backend http_back
balance roundrobin
cookie _rpm_session prefix nocache
server <hostName> <IPAddress>:<portNumber> check cookie <cookiePrefix> ssl verify none
Update hostName, IPAddress, portNumber, and cookiePrefix. To add one or more servers, add another line starting with 'server'. For example:
backend http_back
balance roundrobin
cookie _rpm_session prefix nocache
server linux-rpm01 10.129.32.104:8443 check cookie rpm01 ssl verify none
server linux-rpm02 10.129.32.105:8444 check cookie rpm02 ssl verify none
Start the HAProxy service:
sudo service haproxy restart
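The full haproxy.cfg contents are not reproduced in the steps above. The following minimal sketch shows one way the referenced sections might fit together; it reuses the example host names, addresses, and certificate file from this topic, and the global and defaults settings are illustrative assumptions, not required values.
global
    log /dev/log local0
    chroot /var/lib/haproxy
    daemon

defaults
    mode http
    log global
    timeout connect 5s
    timeout client 50s
    timeout server 50s

frontend http_front
    # Terminate SSL using the .pem file created in the next procedure
    bind *:443 ssl crt /etc/ssl/rlm-test.pem
    default_backend http_back

backend http_back
    balance roundrobin
    cookie _rpm_session prefix nocache
    server linux-rpm01 10.129.32.104:8443 check cookie rpm01 ssl verify none
    server linux-rpm02 10.129.32.105:8444 check cookie rpm02 ssl verify none
Before restarting the service, you can validate the file with haproxy -c -f /etc/haproxy/haproxy.cfg.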
To create an SSL certificate
- Go to the /etc/ssl directory.
cd /etc/ssl
- Create the private key and certificate using openssl.
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt
- Create a .pem file.
touch rlm-test.pem
- Include the private key and certificate file in a single file.
cat privateKey.key > rlm-test.pem
cat certificate.crt >> rlm-test.pem
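Optionally, before pointing HAProxy at the certificate, you can confirm its subject and validity period with openssl; this check is not part of the original procedure:
openssl x509 -in certificate.crt -noout -subject -dates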
Configuring mod_cluster as a load balancer
- Download and unarchive the appropriate mod_cluster version to a computer in your network.
- Go to modClusterHome/httpd-2.2/bin, and then run the installconf file.
- Go to modClusterHome/httpd-2.2/conf, and then open the httpd.conf file in a text editor.
In the MOD_CLUSTER_ADDS section, in the manager_module section and the VirtualHost parameter, enter the IP address of your computer and the port number for the load balancer.
For example, if your IP address is 192.168.10.122 and you want to use port 6666 for the load balancer, your load balancer IP and port configuration is the following:
# MOD_CLUSTER_ADDS
# Adjust to your hostname and subnet.
<IfModule manager_module>
Listen 192.168.10.122:6666
ManagerBalancerName mycluster
<VirtualHost 192.168.10.122:6666>
To set permissions for the load balancer, in the MOD_CLUSTER_ADDS section, in the Location section, enter Deny and Allow rules for IP addresses.
For example, if you want to deny access to the load balancer for all computers except your local network with the host access range 192.168.10.1–192.168.10.254, your load balancer access parameters are the following:
<Location />
Order deny,allow
Deny from all
Allow from 192.168.10
</Location>
For more information about configuring mod_cluster, see the mod_cluster configuration instructions.
- Save your changes.
- Start mod_cluster using the method appropriate for your operating system:
- (Windows) Go to modClusterHome/httpd-2.2/bin, and then run httpd.exe
- (Linux/Solaris) On the terminal, enter the following command:
modClusterHome/httpd/sbin/apachectl start
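To spot-check that the balancer is accepting connections, you can request the BRPM context through it; this optional check uses the example address from above:
curl -I http://192.168.10.122:6666/brpm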
To configure a node
- Install BMC Release Process Management or clone the server to the node computer.
- Stop the BMC Release Process Management service.
- Go to RLMhome/server/jboss/standalone and remove the following folders:
- /tmp
- /temp
- /data
- Go to RLMhome/bin and open the following file in a text editor as appropriate for your operating system:
- Windows: start.bat
- Linux/Solaris: start.sh
- Locate the "$EXECUTABLE" keyword.
Set the following environment variables:
CLUSTER_MODE="true"
NODE_NAME="<nodeName>"
NODE_IP="<IPAddress>"
For example, if you configure the node Automation with the IP address 192.168.10.148, the bottom of the file looks like the following:
CLUSTER_MODE="true"
NODE_NAME="Automation"
NODE_IP="192.168.10.148"
- Save your changes.
- Go to RLMhome/server/jboss/standalone/configuration/ and open the standalone-ha.xml file in a text editor.
For the mod-cluster-config tag, add the sticky-session='true' parameter. For example:
<mod-cluster-config advertise-socket='modcluster' connector='ajp' excluded-contexts='invoker,jbossws,juddi,console' sticky-session='true'>
- Find and delete all entries of the <expiration max-idle='100000'/> tag from the file.
- Save your changes.
- On the Automation node, go to RLMhome/releases/<RPMversion>/RPM/portal.war/WEB-INF/config and open the wildfly.yml file with a text editor.
In the wildfly.yml file, note the following:
- If the AutomationHandler concurrency parameter is set to 0, the node acts as a UI node.
- For the node to act as an automation node, set the AutomationHandler concurrency value to the number of parallel jobs.
- Under ScheduledJobHandler, set the concurrency parameter to the number of scheduled jobs that must be processed without delay. The default value is 0:
/queue/scheduled_job:
  ScheduledJobHandler:
    concurrency: 0
- Save your changes.
- Start the BMC Release Process Management service. For more information, see Start the BMC Release Process Management service.
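For reference, the following sketch shows how the messaging section of wildfly.yml might look when both handlers described above are configured. It is assembled from the keys named in this topic; the exact indentation is an assumption, and your file may contain additional handlers and settings.
messaging:
  /queue/backgroundable/automation:
    AutomationHandler:
      concurrency: 10   # 0 = UI-only node; a positive value = number of parallel automation jobs
  /queue/scheduled_job:
    ScheduledJobHandler:
      concurrency: 0    # number of scheduled jobs processed without delay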
To configure the central file store location on a node
- Stop the BMC Release Process Management service.
- Navigate to the RLMhome/releases/yourCurrentVersion/RPM/portal.war/WEB-INF/config directory.
- Open the automation_settings.rb file in a text editor.
For AUTOMATION_JAVA_OPTS, enter the amount of memory used by the JRuby automation processes. For example:
$AUTOMATION_JAVA_OPTS = "-XX:PermSize=24m -Xmx64m -Xms16m -Xss2048k"
For OUTPUT_BASE_PATH, enter the path to the shared folder for the automation scripts. For example:
$OUTPUT_BASE_PATH = "/mnt/samba/Cluster/automation_output"
- Save your changes in the automation_settings.rb file.
- Do the following:
- Navigate to the RLMhome/releases/yourCurrentVersion/RPM/portal.war/WEB-INF/config directory and open the wildfly.yml file with a text editor.
In messaging, for the concurrency parameter, enter the number of processors for the automation queue. The default is 6.
For example, if you want to set 10 processors for the automation queue, your concurrency parameter is the following:
messaging:
  /queue/backgroundable/automation:
    AutomationHandler:
      concurrency: 10
- Save your changes to the file.
- Start the BMC Release Process Management service.