
This topic describes a recommended deployment scenario for a high-availability (HA) setup. It covers the deployment architecture, considerations, and configuration required to set up the BMC Release Process Management (BRPM) product in HA deployment mode.

For information about the necessary hardware configuration, see Performance tuning.

BRPM 5.0 uses JBoss as its application server. With JBoss, you can run the BRPM server in standalone mode or in standalone high-availability mode, which supports running several application server nodes combined in one cluster. With these features, you can add server hardware to handle more requests, use your database server more efficiently, and make your BRPM application more fault tolerant.

The recommended cluster configuration uses multicast networking. Before you begin the cluster configuration, ensure that the load balancer server and all nodes are accessible and located on the same network.

After you configure the nodes, use the URL and port of the load balancer to access BRPM. For example, if the load balancer uses the IP address 192.168.10.122 and port 6666, enter the following address in your browser to access BRPM: http://192.168.10.122:6666/brpm.

Deployment architecture for a high-availability setup

The following figure outlines the deployment architecture for an HA setup of BRPM.

Note

Starting from version 5.0.03.004, BRPM does not support HTTP.

Deployment architecture for an HA setup

Deployment considerations

Keep in mind the following considerations while you set up the product in HA deployment mode:

  • To achieve high availability, deploy BRPM on separate servers depending on the expected load.

  • Use a central database for high-availability deployment. For information about the databases supported by BRPM, see System requirements.
  • Install or upgrade the central database only once, during the first product installation. To avoid any possible data loss, do one of the following:
    • Use a new empty database for other BRPM installations. After the product installation is complete, you can optionally delete this new database and point the BRPM server to your primary database.
    • On the data migration page of the BRPM installer, select the Skip database modification check box.
    For more information about BRPM installation, see Preparing for installation and Performing the installation.
  • Before you begin the cluster configuration, ensure that the load balancer server and all nodes are accessible and located on the same network.
  • Install and configure a load balancer on any computer in your network, including the BRPM nodes themselves:
    • 5.0.03.004 OR LATER Install and configure HAProxy.
    • 5.0.03.003 OR EARLIER Install and configure mod_cluster.
  • Complete the necessary network configuration on each node of the cluster.
  • Create a shared folder on any computer in your network and configure the central file store location on each node. This ensures that all processors have access to the same automation scripts and that you can manage automation processes and the queue. For an example of mounting such a shared folder on a node, see the sketch after this list.
  • If you need to access BRPM from a public network, expose the HTTPS port of the load-balanced IP address outside the firewall for both the web interface and the REST API.

  • Do not expose the Java Message Service (JMS) port outside the firewall because it is used only by internal systems.
  • BMC recommends that you configure a local SMTP (Simple Mail Transfer Protocol) server for BRPM. A local server can improve the response time of requests and keep notifications working if the primary SMTP server is temporarily inaccessible or responds slowly.
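
The following commands are a minimal sketch of mounting a shared Samba/CIFS folder as the central file store on a node; they require the cifs-utils package. The file server name (fileserver), share name (Cluster), user name (brpm), and mount point (/mnt/samba/Cluster) are illustrative assumptions; substitute values for your environment. The mount point matches the example central file store path used later in this topic.

    # Create the mount point on the node (assumed path)
    sudo mkdir -p /mnt/samba/Cluster
    # Mount the CIFS share exported by the assumed host "fileserver"
    sudo mount -t cifs //fileserver/Cluster /mnt/samba/Cluster -o username=brpm,rw
    # Confirm that the node can write to the shared automation output folder
    sudo mkdir -p /mnt/samba/Cluster/automation_output
    touch /mnt/samba/Cluster/automation_output/.write_test

Repeat the mount on every node so that all nodes reach the central file store through the same path.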

Configuring HAProxy as a load balancer on Red Hat Enterprise Linux (RHEL) 

Before you begin, ensure that the following packages are installed:

  • gcc
  • openssl
  • openssl-devel

If they are not installed, use the following command to install them:

yum install gcc openssl openssl-devel

To configure HAProxy as a load balancer

  1. Download a stable version of HAProxy (version 2.0.12 in the following command).

    wget http://www.haproxy.org/download/2.0/src/haproxy-2.0.12.tar.gz
  2. Extract the downloaded file.

    tar -xzvf haproxy-2.0.12.tar.gz
  3. Go to the haproxy-2.0.12 directory.

    cd haproxy-2.0.12
  4. Compile HAProxy with SSL support.

    make TARGET=linux-glibc USE_OPENSSL=1 USE_ZLIB=1 USE_CRYPT_H=1 USE_LIBCRYPT=1 USE_NS=
  5. Install HAProxy.

    sudo make install
  6. Add the following directories and the statistics file for the HAProxy records.

    sudo mkdir -p /etc/haproxy
    sudo mkdir -p /var/lib/haproxy
    sudo touch /var/lib/haproxy/stats 
  7. Create a symbolic link for the binary to allow you to run the HAProxy commands as a normal user:

    sudo ln -s /usr/local/sbin/haproxy /usr/sbin/haproxy 
  8. Add HAProxy as a service to the system:

    sudo cp examples/haproxy.init /etc/init.d/haproxy
    sudo chmod 755 /etc/init.d/haproxy
  9. Create the /etc/haproxy/haproxy.cfg configuration file:

    vi /etc/haproxy/haproxy.cfg
  10. Add the configuration contents to the haproxy.cfg file. See the attached file for the sample content; a minimal configuration sketch also follows this procedure.

  11. Update the file for the SSL certificate as follows. You can use an existing SSL certificate or create one as described in To create an SSL certificate.
    1. Locate the frontend http_front section.
    2. In the following line, replace filePath and pemFileName with the SSL certificate path and name.
      bind *:443 ssl crt /<filePath>/<pemFileName>
  12. Add servers to the load balancer, as follows:
    1. Locate the backend http_back section. 

      backend http_back
         balance roundrobin
         cookie _rpm_session prefix nocache
         server <hostName> <IPAddress>:<portNumber> check cookie <cookiePrefix> ssl verify none 
    2. Update hostName, IPAddress, portNumber, and cookiePrefix. To add one or more servers, add another line starting with server. For example:

      backend http_back
         balance roundrobin
         cookie _rpm_session prefix nocache
         server linux-rpm01 10.129.32.104:8443 check cookie rpm01 ssl verify none
         server linux-rpm02 10.129.32.105:8444 check cookie rpm02 ssl verify none  

      Note

      The persistence must be set to cookie to make sure that the V1 APIs are evenly distributed.

  13. Start the HAProxy service:

    sudo service haproxy restart
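
The following haproxy.cfg content is a minimal sketch that combines the frontend and backend fragments from steps 11 and 12. The global, defaults, timeout, and logging values are illustrative assumptions only; the attached sample file remains the reference for a complete configuration.

    global
        # Illustrative logging and connection limits (assumptions)
        log /dev/log local0
        maxconn 2000
        daemon

    defaults
        mode http
        log global
        option httplog
        timeout connect 5s
        timeout client 50s
        timeout server 50s

    frontend http_front
        # Replace filePath and pemFileName with your SSL certificate path and name
        bind *:443 ssl crt /<filePath>/<pemFileName>
        default_backend http_back

    backend http_back
        balance roundrobin
        cookie _rpm_session prefix nocache
        server <hostName> <IPAddress>:<portNumber> check cookie <cookiePrefix> ssl verify none

Before you start the service, you can check the file syntax by running haproxy -c -f /etc/haproxy/haproxy.cfg. To start HAProxy automatically at boot with the init script added in step 8, you can also run sudo chkconfig haproxy on, if chkconfig is available on your RHEL release.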

To create an SSL certificate 

  1. Go to the /etc/ssl directory.

    cd /etc/ssl
  2. Create the private key and certificate using openssl.

    openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt
  3. Create a .pem file.

    touch rlm-test.pem
  4. Combine the private key and the certificate into the single .pem file.

    cat privateKey.key > rlm-test.pem
    cat certificate.crt >> rlm-test.pem
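
To optionally confirm that the private key and the certificate combined in the .pem file belong together, you can compare their RSA moduli. The following commands are a sketch that uses the file names from this procedure; if the two MD5 values match, the key and certificate are a pair.

    openssl x509 -noout -modulus -in certificate.crt | openssl md5
    openssl rsa -noout -modulus -in privateKey.key | openssl md5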

Configuring mod_cluster as a load balancer 

  1. Download and unarchive the appropriate mod_cluster version to a computer in your network.
  2. Go to modClusterHome/httpd-2.2/bin, and then run the installconf file.
  3. Go to modClusterHome/httpd-2.2/conf, and then open the httpd.conf file in a text editor.
  4. In the MOD_CLUSTER_ADDS section, in the manager_module section, enter the IP address of your computer and the load balancer port number for the Listen and VirtualHost parameters.
    For example, if your IP address is 192.168.10.122 and you want to use the 6666 port for the load balancer, your load balancer IP and port configuration is the following:

    # MOD_CLUSTER_ADDS
    # Adjust to your hostname and subnet.
    <IfModule manager_module>
    Listen 192.168.10.122:6666
    ManagerBalancerName mycluster
    <VirtualHost 192.168.10.122:6666>
  5. To set permissions for the load balancer, in the MOD_CLUSTER_ADDS section, in the Location section, enter Deny and Allow rules for IP addresses.
    For example, if you want to deny access to the load balancer for all computers except your local network with the host range 192.168.10.1–192.168.10.254, your load balancer access parameters are the following:

        <Location />
         Order deny,allow
         Deny from all
         Allow from 192.168.10
        </Location>

    For more information about configuring mod_cluster, see the mod_cluster configuration instructions.

  6. Save your changes.
  7. Start mod_cluster by using the method appropriate for your operating system:
    • (Windows) Go to modClusterHome/httpd-2.2/bin, and then run httpd.exe
    • (Linux/Solaris) On the terminal, enter the following command:
      <mod cluster home>/httpd/sbin/apachectl start


To configure a node 

  1. Install BMC Release Process Management or clone the server to the node computer.

  2. Stop the BMC Release Process Management service.
  3. Go to RLMhome/server/jboss/standalone and remove the following folders:
    • /tmp
    • /temp
    • /data
  4. Go to RLMhome/bin and open the following file in a text editor as appropriate for your operating system:
    • (Windows) start.bat
    • (Linux/Solaris) start.sh
  5. Locate the "$EXECUTABLE" keyword.

  6. Set the following environment variables:

    CLUSTER_MODE="true"
    NODE_NAME="<nodeName>"
    NODE_IP="<IPAddress>"

    For example, if you configure the node Automation with the IP address 192.168.10.148, the lines at the bottom of the file look like the following:

    CLUSTER_MODE="true"
    NODE_NAME="Automation"
    NODE_IP="192.168.10.148"
  7. Save your changes.
  8. VERSION 5.0.03.003 OR EARLIER Go to RLMhome/server/jboss/standalone/configuration/ and open the standalone-ha.xml file in a text editor.
    1. For the mod-cluster-config tag, add the sticky-session='true' parameter. For example:

       <mod-cluster-config advertise-socket='modcluster' connector='ajp' excluded-contexts='invoker,jbossws,juddi,console' sticky-session='true'>
    2. Find and delete all entries of the <expiration max-idle='100000'/> tag from the file.


    Best practice

    To avoid performance issues on overloaded nodes (for example, hanging steps), change the consumer-window-size value from 1 to 0 for both InVmConnectionFactory and RemoteConnectionFactory. For an example of this change, see the sketch after this procedure.

  9. Save your changes.
  10. On the Automation node, go to RLMhome/releases/<RPMversion>/RPM/portal.war/WEB-INF/config and open the wildfly.yml file with a text editor.

    If the AutomationHandler concurrency parameter is set to 0, this node acts as a UI node. For the node to act as an automation node, set the AutomationHandler concurrency value to the number of parallel jobs that the node must run.

  11. Under ScheduledJobHandler, set the concurrency parameter to the number of scheduled jobs that must be processed without delay. The default value is 0:

    /queue/scheduled_job:
      ScheduledJobHandler:
        concurrency: 0

    Note

    If scheduling is used heavily, you must increase the ScheduledJobHandler concurrency on the UI node.

    For example, if there are 15 jobs scheduled at the same time, set the ScheduledJobHandler concurrency parameter to 15 per UI node. All 15 jobs are processed at the same time without delay.


  12. Save your changes.
  13. Start the BMC Release Process Management service. For more information, see Start the BMC Release Process Management service.
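
The following standalone-ha.xml excerpt is a sketch of the consumer-window-size change described in the best practice in step 8, assuming the HornetQ-style messaging layout used by JBoss in these BRPM versions. The connector and entry elements are shown only for context and might differ in your file; apply the same change to RemoteConnectionFactory.

    <connection-factory name="InVmConnectionFactory">
        <connectors>
            <connector-ref connector-name="in-vm"/>
        </connectors>
        <entries>
            <entry name="java:/ConnectionFactory"/>
        </entries>
        <!-- Set to 0 to avoid hanging steps on overloaded nodes -->
        <consumer-window-size>0</consumer-window-size>
    </connection-factory>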

To configure the central file store location on a node 

  1. Stop the BMC Release Process Management service.
  2. Depending on the BRPM version you are using, navigate to the following directory: 
    • VERSION 5.0.03.004 OR LATER RLMhome/releases/yourCurrentVersion/RPM/portal.war/WEB-INF/config
    • VERSION 5.0.03.003 OR EARLIER RLMhome/releases/yourCurrentVersion/RPM/config
  3. Open the automation_settings.rb file in a text editor.
  4. For AUTOMATION_JAVA_OPTS, enter the amount of memory used by the JRuby automation processes.
    For example:

    $AUTOMATION_JAVA_OPTS = "-XX:PermSize=24m -Xmx64m -Xms16m -Xss2048k"
  5. For OUTPUT_BASE_PATH, enter the path to the shared folder for the automation scripts.
    For example:

    $OUTPUT_BASE_PATH = "/mnt/samba/Cluster/automation_output"  

    Note

      If you created the shared folder on one of your BRPM cluster nodes, enter the same network path to this folder that the remote nodes use. Do not enter the local path to this shared folder.

  6. Save your changes in the automation_settings.rb file.
  7. Depending on the BRPM version you are using, do one of the following:
    • VERSION 5.0.03.004 OR LATER
      1. Navigate to the RLMhome/releases/yourCurrentVersion/RPM/portal.war/WEB-INF/config directory and open the wildfly.yml file with a text editor.
      2. In the messaging section, for the concurrency parameter, enter the number of processors for the automation queue. The default is 6.

        For example, if you want to set 10 processors for the automation queue, your concurrency parameter is the following:

        messaging:
          /queue/backgroundable/automation:
            AutomationHandler:
              concurrency: 10
    • VERSION 5.0.03.003 OR EARLIER
      1.  Navigate to the RLMhome/releases/yourCurrentVersion/RPM/config directory and open the torquebox.yml file with a text editor.
      2. In the messaging section, for the concurrency parameter, enter the number of processors for the automation queue. The default is 3.

        For example, if you want to set 8 processors for the automation queue, your concurrency parameter is the following:

        messaging:
          /queues/backgroundable/automation:
            AutomationHandler:
              concurrency: 8
  8. Save your changes to the file.
  9. Start the BMC Release Process Management service.
