Configuring a cluster
To configure a cluster
The installer allows you only to install or upgrade the mid tier, but not to configure a Tomcat cluster. To add the mid tier to a Tomcat cluster, install mid tiers on different virtual computers and configure the cluster manually by using the following procedure.
- Stop Tomcat.
- Open the arsys.xml file in the TomcatInstallationFolder\conf\Catalina\localhost folder in a text editor, and replace the <Manager> tag with the following XML:
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true" />
- Open the server.xml file in the TomcatInstallationFolder\conf folder in a text editor and make the following changes:
- Locate the <Engine> tag and add the jvmRoute attribute.
Use Node1 for the first node, Node2 for the second, and so on.
<Engine name="Catalina" defaultHost="localhost" jvmRoute="Node1">
- Copy the <Cluster> tag information from the sample below, and paste it after the <Engine> tag in your server.xml file.
If you are using Tomcat version 8.0.38 and above, remove the following XML tag from the server.xml file.
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
For versions of Tomcat prior to 8.0.38, you can use the server.xml file as is without making any changes.
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="<ipAddress>"
port="45570"
frequency="500"
dropTime="3000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="auto"
port="4000"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=".*\.gif|.*\.js|.*\.jpeg|.*\.jpg|.*\.png|.*\.htm|.*\.html|.*\.css|.*\.txt|.*\.jsp|.*\.swf|.*BackChannel/*|.*./resources/.*|.*./sharedresources/.*|.*./plugins/.*|.*./pluginsignal/.*|.*./imagepool/.*"
statistics="false" />
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
The default value for the <ipAddress> variable is 228.0.0.4. This IP address and port number together define a unique cluster, so ensure that every member of the same cluster uses the same multicast IP address and port number.
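For example, if you keep the default values, every node in the cluster carries an identical <Membership> element such as the one below, while a separate, independent cluster on the same network would use a different address or port (the values shown are only an illustration):
<Membership className="org.apache.catalina.tribes.membership.McastService"
            address="228.0.0.4"
            port="45570"
            frequency="500"
            dropTime="3000"/>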
- In the <Receiver> tag, enter the IP address of the host on which you want the session replication to take place as the address value, instead of auto.
For example, modify address="auto" to address="172.x.x.x".
- (Optional) Add two new file handlers, 5cluster.org.apache.juli.FileHandler and 6cluster.org.apache.juli.FileHandler, to the handlers section of the logging.properties file.
handlers = 1catalina.org.apache.juli.FileHandler, \
2localhost.org.apache.juli.FileHandler, \
3manager.org.apache.juli.FileHandler, \
4host-manager.org.apache.juli.AsyncFileHandler, \
5cluster.org.apache.juli.FileHandler, \
6cluster.org.apache.juli.FileHandler, \
java.util.logging.ConsoleHandler
- (Optional) To increase the cluster logging, change the log level to FINE for the following file handlers in the logging.properties file located in the TomcatInstallationFolder/conf folder:
5cluster.org.apache.juli.FileHandler
6cluster.org.apache.juli.FileHandler
The corresponding handler and logger definitions in logging.properties are shown below; change INFO to FINE on the .level lines of these two handlers to increase cluster logging:
5cluster.org.apache.juli.FileHandler.level = INFO
5cluster.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
5cluster.org.apache.juli.FileHandler.prefix = cluster.
6cluster.org.apache.juli.FileHandler.level = INFO
6cluster.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
6cluster.org.apache.juli.FileHandler.prefix = ha.
org.apache.catalina.tribes.MESSAGES.level = INFO
org.apache.catalina.tribes.MESSAGES.handlers = 5cluster.org.apache.juli.FileHandler
org.apache.catalina.tribes.level = INFO
org.apache.catalina.tribes.handlers = 5cluster.org.apache.juli.FileHandler
org.apache.catalina.ha.level = INFO
org.apache.catalina.ha.handlers = 6cluster.org.apache.juli.FileHandler
- Restart Tomcat.
- Verify whether the node is added to the cluster:
- Open the ha.<date>.log file located in /opt/apache/tomcat7.0/logs on Linux or C:\Program Files\Apache Software Foundation\Tomcat7.0\logs on Windows.
- Search for memberAdded entries.
If you added four mid tiers to the cluster, you should see three memberAdded entries in the file, along with the IP addresses of the other three members. Ensure that the IP addresses do not represent localhost.
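For example, on Linux you can count these entries with a command such as the following (the date in the log file name varies):
# count memberAdded entries in the cluster log
grep -c memberAdded /opt/apache/tomcat7.0/logs/ha.*.log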
To add an extra mid tier to the cluster, perform the same configuration on the extra (n+1) mid tier.
To access the mid tier configuration page, use an HTTP URL in the following format:
http://mtMachineName:8080/arsys/shared/config/config.jsp
Adding an extra mid tier to a cluster
When an increase in demand requires extra capacity in a cluster, you can add an extra mid tier to the cluster. The newly added mid tier must seamlessly start serving requests and share the load with the other mid tiers in the cluster. Suppose that you have four mid tiers and eight AR System servers operating in a cluster, serving requests from 2,400 users. If you find that each mid tier is serving more than 75 percent of its capacity and you expect the load to increase further, you can add a fifth mid tier and distribute the load equally among all the mid tiers.
This procedure also applies when a mid tier in a cluster goes down and the administrator brings it back up.
Before you begin
- Plan your deployment strategy. For example, you might have to decide whether to deploy n+1 or n+2 mid tiers to the cluster.
- Ensure that you have a separate computer. If you are deploying n+2 mid tiers, ensure that you have two computers available.
- Ensure that the computers have the same configuration as that of other mid tiers in the cluster in terms of memory, CPU usage, disk space, and so on.
Ensure that the computers have the same cluster configuration.
- Set up the extra mid tier in the cluster, but do not start it yet. Delete the midTierInstallationDirectory/cache folder before starting the mid tier (see the sketch after this list).
- To avoid the extra time required to copy the cache folder, ensure that you already have a "good copy" of the preloaded cache on the n+1 mid tier (for example, in the /opt/Preload_Cache folder). Store the good copy of the cache on a local drive and not on a shared drive, which can delay copying the cache folder.
Add the cache directory path to the arsystem.ehcache.midTierBackupCacheDir property in the config.properties file of the n+1 mid tier, for example:
arsystem.ehcache.midTierBackupCacheDir = /opt/Preload_Cache
If you are using a Centralized Configuration Server (CCS), ensure that the Cache Backup Directory property is set correctly on the Cache Settings page of any of the mid tiers in the cluster. Ensure that the good copy of the cache is available in this folder on all mid tiers. For more information, see Backing-up-the-mid-tier-cache.
- If you are using a CCS, copy the ccs.properties file into the midTierInstallationDirectory/WEB-INF/classes folder from another mid tier in the cluster (see the sketch after this list).
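A minimal sketch of these preparation steps on the new (n+1) host, assuming a hypothetical installation directory (/opt/midtier) and a hypothetical existing cluster member named midtier-node1:
# Remove any stale cache before the first start of the new mid tier
rm -rf /opt/midtier/cache
# If you use a CCS, copy ccs.properties from an existing mid tier in the cluster
scp midtier-node1:/opt/midtier/WEB-INF/classes/ccs.properties /opt/midtier/WEB-INF/classes/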
To add the extra (n+1) mid tier
- Start the n+1 mid tier:
- Start the mid tier so that it connects to the CCS server by using ccs.properties and refreshes the local copy of config.properties. This action also copies the cache backup from the backup directory and starts using it automatically.
- After the n+1 mid tier is started, verify the cache folder located at midTierInstallationDirectory/cache on the n+1 mid tier. The size of this cache folder must be the same as that of the good cache copy stored in the /opt/Preload_Cache folder (a sample command follows this list).
- Ensure that the n+1 mid tier is added as a node to the appropriate F5 load balancer pool.
Otherwise, F5 does not balance the load across the n+1 mid tier.
- Verify whether the newly added mid tier is serving requests by:
- Looking at the status of the relevant node in the F5 load balancer
- Reviewing the number of views generated in the Cache Advanced page of the newly added mid tier (for example, http://localhost:portnumber/arsys/shared/config/config_cache_adv.jsp)
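The cache-size comparison mentioned earlier in this list can be done on Linux with a command such as the following (using the hypothetical paths from the earlier sketch):
# The two sizes should match once the n+1 mid tier has started
du -sh /opt/midtier/cache /opt/Preload_Cache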
The newly added mid tier can now handle all requests seamlessly without causing delays and performance issues.
The Cluster ID
A cluster ID is a unique identifier given to a cluster operating in the Remedy as a service environment. All mid tiers within a cluster share the same cluster ID. When you configure the CCS settings for each mid tier, ensure that the mid tiers within a cluster are configured with the same cluster ID. Identifying a cluster by a unique cluster ID is essential because it enables the CCS to notify all the mid tiers within a cluster about changes to the global properties. For more information about configuring the cluster ID for each mid tier, see Configuring-the-AR-System-server-as-a-Centralized-Configuration-server.
You must decide on a unique cluster ID for each cluster. For example, for Cluster1, you can use Cluster1 as the cluster ID; for Cluster2, use Cluster2; and so on.
The following diagram illustrates the use of a cluster ID.
Best practices for configuring a cluster
To improve application scalability, we recommend that you increase the File Descriptor limit on mid tiers running in a cluster. For example, if you have a cluster of two mid tiers with 12 tenants serving 3,600 users, set the File Descriptor limit to 35000. (300 users is considered an average load per tenant.)
To set the File Descriptor limit, perform the following steps:
- Edit the /etc/security/limits.conf file and add the following line:
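The exact entry depends on the account that runs Tomcat; assuming a hypothetical tomcat service account and the 35000 limit used below, it could look like this:
# raise the soft and hard open-file (File Descriptor) limits for the tomcat account
tomcat - nofile 35000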
- Edit the /etc/bashrc file and set the ulimit value as follows:
if [ "$EUID" = "0" ]; then
ulimit -n 35000
fi
- Edit the /etc/sysctl.conf file and increase the File Descriptor limit for the system as follows:
fs.file-max = 210000
Ensure that you set the system-wide limit of File Descriptors to approximately 6 times that of the per-process limit set in the limits.conf file.
Refer to the following table for guidance on deciding the File Descriptor limit:
| S No | Component | Value or setting | Description |
|---|---|---|---|
| 1 | Tomcat connections | maxThreads setting in the Tomcat HTTP Connector | maxThreads defines the maximum number of connections used by Tomcat when the BIO Connector is used. |
| 2 | Tomcat NIO Receivers | Default setting is 6 | NIO Receivers for session replication. |
| 3 | Tomcat and class loaders | Estimate is 1200 | Class loaders for JAR files loaded by Tomcat. This includes JARs in the distribution, configured third-party JARs, and DVF plug-ins. |
| 4 | Properties files | 1 + 1 * Number of Tenants | One for the global configuration and one per tenant. |
| 5 | Cache files | 27 x 2 = 54 | 27 categories x 2 files per category. Data files are always open. Index files are read into memory at startup and closed, and written to during shutdown. |
| 6 | Cache lock file | 1 | A single file for the cache lock status. |
| 7 | Java API RPC connections to the AR System server | arsystem.pooling_max_connections_per_server from the configuration * Number of Tenants | Configured by the arsystem.pooling_max_connections_per_server setting in the configuration. |
| 8 | Java API JMS connections for CCS | 1 + 1 * Number of Tenants | One for the CCS and one per tenant. |
| 9 | Attachments and reports | Estimate is 20% * Maximum Concurrent Users on the mid tier node | Attachment files and report files created by the mid tier on disk. |
| 10 | SSO agent - files | 1 | The SSO agent uses a single configuration file, which is queried from the SSO server. |
| 11 | SSO agent - connections to the SSO server | maxThreads setting in the Tomcat HTTP Connector | The SSO agent opens a URL connection to the SSO server when it logs in a user or validates an SSO token with the SSO server. |
| 12 | Safety factor | 25% * Sum of (#1 to #11) | |
| | Total | Sum of #1 to #12, rounded up to the nearest 5000 on the higher side | |