Configuring a cluster


To ensure seamless failover, configure the mid tier on an Apache Tomcat cluster. BMC does not support clustering for web servers other than Tomcat.

You might have the following questions about Tomcat cluster configuration.

How do I configure a Tomcat cluster if I have multiple network interfaces?

You have the following options:

  • Edit the server.xml file located in the TomcatInstallationFolder\conf folder. Change the address value of the Receiver tag from auto to the IP address of the network interface card on which you want session replication to take place.
    For example, if you have two network interface cards (eth1 with 10.x.x.x as the IP address and eth2 with 172.x.x.x as the IP address) and you want the session replication to happen on eth2, you must set the address value to 172.x.x.x:

Original address value in the Receiver tag: 

<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="auto"
                      port="4000"
                      autoBind="100"
                      selectorTimeout="5000"
                      maxThreads="6"/>

New address value in the Receiver tag:

 <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="172.x.x.x"
                      port="4000"
                      autoBind="100"
                      selectorTimeout="5000"
                      maxThreads="6"/>

For information about clustering and session replication, see the Apache Tomcat documentation at https://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html.

  • Disable the network interface that you do not intend to use. For example, to disable eth1, use either of the following commands at the command prompt:
    • ifconfig eth1 down
    • ifdown eth1
    Then verify your /etc/hosts file, and remove any bad entries that are present.
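The hosts-file check above can be scripted. The following sketch uses a sample file and the hypothetical host name midtier1 to show what a bad entry looks like; on a real system, inspect /etc/hosts itself:

```shell
# Sample /etc/hosts content (illustrative only). A host name bound to a
# loopback address is a typical bad entry: it can make Tomcat advertise
# 127.x.x.x to the cluster instead of the replication NIC's address.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1    localhost
127.0.1.1    midtier1
172.16.0.11  midtier1
EOF

# Flag entries that bind a non-localhost host name to a loopback address:
awk '$1 ~ /^127\./ && $2 != "localhost" { print "suspect entry:", $0 }' /tmp/hosts.sample
```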

I see the IP address of the local host against the memberAdded entry in the cluster log. What do I do?
  • Edit the server.xml file located in the TomcatInstallationFolder\conf folder. Change the address value of the Receiver tag from auto to the IP address of the host on which you want session replication to take place.

Original address value in the Receiver tag:

<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="auto"
                      port="4000"
                      autoBind="100"
                      selectorTimeout="5000"
                      maxThreads="6"/>

New address value in the Receiver tag:

 <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="172.x.x.x"
                      port="4000"
                      autoBind="100"
                      selectorTimeout="5000"
                      maxThreads="6"/>

For information about clustering and session replication, see the Apache Tomcat documentation at https://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html.

To configure a cluster

The AR System installer only installs or upgrades the Mid Tier; it does not configure a Tomcat cluster. To add the mid tier to a Tomcat cluster, install Mid Tiers on different virtual computers and configure the cluster manually by using the following procedure.

  1. Stop Tomcat.
  2. Open the arsys.xml file in the TomcatInstallationFolder\conf\Catalina\localhost folder in a text editor, and replace the <Manager> tag with the following XML:

    <Manager className="org.apache.catalina.ha.session.DeltaManager"
      expireSessionsOnShutdown="false"
      notifyListenersOnReplication="true" />
  3. Open the server.xml file in the TomcatInstallationFolder\conf folder in a text editor and make the following changes:
    • Locate the <Engine> tag and add the jvmRoute attribute.
      Use Node1 for the first node, Node2 for the second, and so on.
      <Engine name="Catalina" defaultHost="localhost" jvmRoute="Node1">
    • Copy the <Cluster> tag information from the attached sample server.xml file, and paste it after the <Engine> tag in your server.xml file.

      If you are using Tomcat version 8.0.38 or later, remove the following XML tag from the server.xml file:

      <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>

      For Tomcat versions earlier than 8.0.38, you can use the server.xml file as is, without making any changes.

      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
        <Manager className="org.apache.catalina.ha.session.DeltaManager"
                 expireSessionsOnShutdown="false"
                 notifyListenersOnReplication="true"/>
        <Channel className="org.apache.catalina.tribes.group.GroupChannel">
          <Membership className="org.apache.catalina.tribes.membership.McastService"
                      address="<ipAddress>"
                      port="45570"
                      frequency="500"
                      dropTime="3000"/>
          <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                    address="auto"
                    port="4000"
                    autoBind="100"
                    selectorTimeout="5000"
                    maxThreads="6"/>
          <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
          </Sender>
          <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
          <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
        </Channel>
        <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
               filter=".*\.gif|.*\.js|.*\.jpeg|.*\.jpg|.*\.png|.*\.htm|.*\.html|.*\.css|.*\.txt|.*\.jsp|.*\.swf|.*BackChannel/*|.*./resources/.*|.*./sharedresources/.*|.*./plugins/.*|.*./pluginsignal/.*|.*./imagepool/.*"
               statistics="false"/>
        <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
        <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                  tempDir="/tmp/war-temp/"
                  deployDir="/tmp/war-deploy/"
                  watchDir="/tmp/war-listen/"
                  watchEnabled="false"/>
        <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
        <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
      </Cluster>

      The default value for the <ipAddress> variable is 228.0.0.4. The multicast IP address and port number together define a unique cluster; ensure that every member of the cluster uses the same multicast IP address and port number.

  4. In the <Receiver> tag, replace the address value auto with the IP address of the host on which you want session replication to take place.
    For example, modify address="auto" to address="172.x.x.x".
  5. (Optional) Add the two new file handlers 5cluster.org.apache.juli.FileHandler and 6cluster.org.apache.juli.FileHandler to the handlers line of the logging.properties file.

    handlers = 1catalina.org.apache.juli.FileHandler, \
               2localhost.org.apache.juli.FileHandler, \
               3manager.org.apache.juli.FileHandler, \
               5cluster.org.apache.juli.FileHandler, \
               6cluster.org.apache.juli.FileHandler, \
               java.util.logging.ConsoleHandler
  6. (Optional) To increase cluster logging, change the log level to FINE for the 5cluster.org.apache.juli.FileHandler and 6cluster.org.apache.juli.FileHandler file handlers in the logging.properties file located in the TomcatInstallationFolder/conf folder:

    5cluster.org.apache.juli.FileHandler.level = FINE
    5cluster.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
    5cluster.org.apache.juli.FileHandler.prefix = cluster.

    6cluster.org.apache.juli.FileHandler.level = FINE
    6cluster.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
    6cluster.org.apache.juli.FileHandler.prefix = ha.

    org.apache.catalina.tribes.MESSAGES.level = FINE
    org.apache.catalina.tribes.MESSAGES.handlers = 5cluster.org.apache.juli.FileHandler

    org.apache.catalina.tribes.level = FINE
    org.apache.catalina.tribes.handlers = 5cluster.org.apache.juli.FileHandler

    org.apache.catalina.ha.level = FINE
    org.apache.catalina.ha.handlers = 6cluster.org.apache.juli.FileHandler
  7. Restart Tomcat.
  8. Verify whether the node is added to the cluster:
    1. Open the ha.<date>.log file located in /opt/apache/tomcat7.0/logs (Linux) or C:\Program Files\Apache Software Foundation\Tomcat7.0\logs (Windows).
    2. Search for memberAdded entries.
      If you added four mid tiers to the cluster, you should see three memberAdded entries in the file, with the IP addresses of the other three members. Ensure that none of the IP addresses represents the local host.
      To add an extra mid tier to the cluster, perform the same configuration on the extra (n+1) mid tier.
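The verification in step 8 can also be done from the command line. The following sketch fabricates two sample log lines to demonstrate the checks; on a real node, point LOG at the ha.<date>.log file in your Tomcat logs folder:

```shell
# Fabricated sample of a cluster log (real entries appear in ha.<date>.log).
LOG=/tmp/ha.sample.log
cat > "$LOG" <<'EOF'
INFO: Replication member added: tcp://{172, 16, 0, 12}:4000 memberAdded
INFO: Replication member added: tcp://{172, 16, 0, 13}:4000 memberAdded
EOF

# With N mid tiers in the cluster, each node should log N-1 memberAdded
# entries (here: 2, for a three-node cluster).
grep -c "memberAdded" "$LOG"

# Warn if any member resolved to a loopback address:
if grep "memberAdded" "$LOG" | grep -q "127\."; then
  echo "WARNING: a cluster member resolves to the local host"
fi
```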

To access the Mid Tier configuration URL, use an HTTP URL in the following format:

http://mtMachineName:8080/arsys/shared/config/config.jsp

Adding an extra Mid Tier to a cluster

When an increase in demand requires extra capacity in a cluster, you can add an extra Mid Tier to the cluster. The newly added Mid Tier must seamlessly start serving requests and share the load along with other Mid Tiers in the cluster. Suppose that you have four mid tiers and eight AR System servers operating in a cluster, serving requests from 2,400 users. If you find that each Mid Tier is serving more than 75 percent of its capacity and you expect the load to further increase, you can add a fifth Mid Tier and distribute the load equally among all the Mid Tiers.

This procedure also applies when a Mid Tier in a cluster goes down and the AR System administrator brings it up.

Before you begin

  • Plan your deployment strategy. For example, you might have to decide whether to deploy n+1 or n+2 Mid Tiers to the cluster.
  • Ensure that you have a separate computer. If you are deploying n+2 mid tiers, ensure that you have two computers available.
  • Ensure that the computers have the same configuration as that of other mid tiers in the cluster in terms of memory, CPU usage, disk space, and so on.
  • Ensure that the computers have the same cluster configuration.

    Set up the extra mid tier in the cluster but do not start it yet. Delete the midTierInstallationDirectory/cache folder before starting the mid tier.

  • To avoid the extra time required to copy the cache folder, ensure that you already have a "good copy" of the preloaded cache in the n+1 Mid Tier (for example, in the /opt/Preload_Cache folder). Ensure that you store the good copy of the cache on a local drive and not on a shared drive, which can delay copying the cache folder. 
  • Add the cache directory path to the arsystem.ehcache.midTierBackupCacheDir property in the config.properties file of the n+1 Mid Tier, for example:

    arsystem.ehcache.midTierBackupCacheDir = /opt/Preload_Cache

    If you are using a Centralized Configuration Server (CCS), ensure that the Cache Backup Directory property is set correctly on the Cache Settings page of any Mid Tier in the cluster. Ensure that the good copy of the cache is available in this folder on all mid tiers. For more information, see Backing-up-the-Mid-Tier-cache.

  • If you are using a CCS, copy the ccs.properties file from another mid tier into the midTierInstallationDirectory/WEB-INF/classes folder.

To add the extra (n+1) Mid Tier

  1. Start the n+1 Mid Tier:
    1. Start the Mid Tier; it connects to the CCS server by using ccs.properties and refreshes the local copy of config.properties. This action also copies the cache backup from the backup directory and starts using it automatically.
    2. After the n+1 Mid Tier is started, verify the cache folder located in the midTierInstallationDirectory/cache folder of the n+1 Mid Tier. The size of this cache folder must be the same as the good cache copy stored in the /opt/Preload_Cache folder.
  2. Ensure that the n+1 Mid Tier is added as a node to the appropriate F5 load balancer pool.
    Otherwise, the F5 load balancer does not distribute load to the n+1 Mid Tier.
  3. Verify whether the newly added mid tier is serving requests by:

    • Looking at the status of the relevant node in the F5 load balancer
    • Reviewing the number of views generated in the Cache Advanced page of the newly added Mid Tier (for example, http://localhost:portnumber/arsys/shared/config/config_cache_adv.jsp)

    The newly added mid tier can now handle requests seamlessly, without causing delays or performance issues.
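The cache-size comparison from step 1 can be sketched with du. The folders below are stand-ins for the /opt/Preload_Cache and midTierInstallationDirectory/cache folders mentioned above:

```shell
# Stand-in folders (assumed paths, for illustration only).
mkdir -p /tmp/preload_cache /tmp/midtier_cache
echo "cached-view-data" > /tmp/preload_cache/view.dat

# Simulate the mid tier copying the backup cache at startup:
cp /tmp/preload_cache/view.dat /tmp/midtier_cache/view.dat

# The two folders should report the same size:
du -s /tmp/preload_cache /tmp/midtier_cache
```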

The Cluster ID

A cluster ID is a unique identifier given to a cluster operating in the BMC Helix Innovation Suite as a service environment. All Mid Tiers within a cluster share the same cluster ID. When you configure the CCS settings for each Mid Tier, ensure that the mid tiers within a cluster are configured with the same cluster ID. A unique cluster ID is essential so that the CCS can notify all the Mid Tiers within a cluster about changes to the global properties. For more information about configuring the cluster ID for each Mid Tier, see Configuring-the-AR-System-server-as-a-Centralized-Configuration-server.

You must choose a unique cluster ID for each cluster. For example, for Cluster1, you can use Cluster1 as the cluster ID; for Cluster2, use Cluster2; and so on.

The following diagram illustrates the use of a cluster ID.

ClusterID.png

Best practices for configuring a cluster

To improve application scalability, we recommend that you increase the File Descriptor limit for mid tiers running in a cluster. For example, if you have a cluster of two Mid Tiers with 12 tenants serving 3600 users, set the File Descriptor limit to 35000. (300 users is considered an average load per tenant.)

To set the File Descriptor limit, perform the following steps:

  • Edit the /etc/security/limits.conf file and add the following line:
root  soft  nofile   35000
  • Edit the /etc/bashrc file and set the ulimit value as follows:

    if [ "$EUID" = "0" ]; then
       ulimit -n 35000
    fi
  • Edit the /etc/sysctl.conf file and increase the File Descriptor limit for the system as follows:

fs.file-max = 210000

Ensure that you set the system-wide File Descriptor limit to approximately six times the per-process limit set in the limits.conf file.
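As a quick arithmetic check of the example values above (a 35000 per-process limit and a 210000 system-wide limit):

```shell
# The per-process limit from limits.conf, multiplied by the recommended
# factor of 6, should match the fs.file-max value set in sysctl.conf.
per_process=35000
system_wide=$((per_process * 6))
echo "$system_wide"    # prints 210000
```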

Refer to the following table for guidance on deciding the File Descriptor limit:

| S. No | Component | Variable | Description |
|-------|-----------|----------|-------------|
| 1 | Tomcat connections | maxThreads setting in the Tomcat HTTP Connector | maxThreads defines the maximum number of connections used by Tomcat when the BIO Connector is used. |
| 2 | Tomcat NIO Receivers | Default setting is 6 | NIO receivers for session replication. |
| 3 | Tomcat and Mid Tier class loaders | Estimate is 1200 | Class loaders for JAR files loaded by Tomcat, including JARs in the Mid Tier distribution, configured third-party JARs, and DVF plug-ins. |
| 4 | Mid Tier properties files | 1 + (1 * number of tenants) | One for the global configuration and one per tenant. |
| 5 | Mid Tier cache files | 27 x 2 = 54 | 27 categories x 2 files per category. Data files are always open. Index files are read into memory at startup and closed, and are written to during Mid Tier shutdown. |
| 6 | Mid Tier cache lock file | 1 | A single file for cache lock status. |
| 7 | Java API RPC connections to the AR System server | arsystem.pooling_max_connections_per_server * number of tenants | Configured by the arsystem.pooling_max_connections_per_server setting in the Mid Tier configuration. |
| 8 | Java API JMS connections for CCS | 1 + (1 * number of tenants) | One for the CCS and one per tenant. |
| 9 | Attachments and reports | Estimate is 20% * maximum concurrent users on the Mid Tier node | Attachment files and report files created by the Mid Tier on disk. |
| 10 | SSO agent files | 1 | The SSO agent uses a single configuration file, which is queried from the SSO server. |
| 11 | SSO agent connections to the SSO server | maxThreads setting in the Tomcat HTTP Connector | The SSO agent opens a URL connection to the SSO server when it logs in a user or validates an SSO token with the SSO server. |
| 12 | Safety factor | 25% * sum of (#1 to #11) | |
| | Total | Sum of #1 to #12, rounded up to the nearest 5000 on the higher side | |

 
