This documentation supports the 19.02 version of Remedy with Smart IT.


Installing Smart IT in a server group

This topic describes the process to install Smart IT in a cluster. Other options are described in Performing the Smart IT installation.

Before you begin

Complete the steps described in Preparing for installation of Smart IT. In addition to steps that are required for all installations, some steps are required for high availability.


To install Smart IT in a server group

If you are planning to use the chat features of Smart IT and have installed Openfire in a server group, specify the Use external installation of Openfire option during installation. See Smart IT installation worksheets.

  1. Install Smart IT on the primary server. 
    To use the GUI wizard, see Installing Smart IT on a single server. To use the silent installer, see Installing Smart IT using silent mode.
    Specify the In a New Cluster option during installation. See Smart IT installation worksheets.

  2. Install Smart IT on the secondary server. 
    Specify the In an Existing Cluster option during installation. See Smart IT installation worksheets.

    Note

    Perform this step on each Smart IT server in the cluster.

  3. Start the Smart IT Tomcat service on all Smart IT servers in the cluster.

For information on configuring the load balancer for Openfire, see Configuring load balancer for Openfire.


To create a highly available Tomcat cluster


  1. Complete the steps described in the 'To install Smart IT in a server group' section of this topic. 
  2. After all your Smart IT nodes are up and running, configure each Tomcat instance to replicate sessions across the cluster. To do so, modify the following files on every Smart IT/Tomcat server:
    1. smartit.xml, located at TomcatInstallationFolder\conf\Catalina\localhost. In this file, add the attribute distributable="true" to the Context tag, as follows:

      <Context docBase="C:\PROGRA~1\BMCSOF~1\Smart_IT\Smart_IT\smartit"
               override="true"
               reloadable="false"
               distributable="true">
      1. Locate the tag <Manager pathname=""/> and replace it with the following:

        <Manager className="org.apache.catalina.ha.session.DeltaManager"
                 expireSessionsOnShutdown="false"
                 notifyListenersOnReplication="true" />
      2. Save the file.
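For reference, after both edits the relevant portion of smartit.xml should look like the following sketch. The docBase path shown is the Windows example from the step above; your path will match your own installation.

```xml
<!-- TomcatInstallationFolder\conf\Catalina\localhost\smartit.xml after both edits.
     The docBase path is the example from above; use your own installation path. -->
<Context docBase="C:\PROGRA~1\BMCSOF~1\Smart_IT\Smart_IT\smartit"
         override="true"
         reloadable="false"
         distributable="true">
    <!-- DeltaManager replicates session changes to the other nodes in the cluster -->
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true" />
</Context>
```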

    2. server.xml, located at TomcatInstallationFolder\conf on each Tomcat server. Locate the tag <Engine name="Catalina" defaultHost="localhost"> and add an identifier so that each Tomcat instance has a unique name:
      <Engine name="Catalina" defaultHost="localhost" jvmRoute="Node1">
      1. Immediately after the Engine tag that you modified in the previous step, paste the following cluster configuration. Keep the following considerations in mind: 
        • In the <Receiver> tag, use the local IP address of the server you are configuring.

        • You can add as many <Member> tags as you have nodes. Modify the IP addresses so that this Tomcat instance can communicate with the remote Tomcat instances; each <Member> must use a different port than the one in the <Receiver> tag and a different uniqueId.

        • If you are setting up a two-node cluster, you need only one <Member>; you can remove the second <Member> from the code below.

        1. On each Tomcat server, the local IP address goes in the <Receiver>, and the remote Tomcat servers become the <Member> entries:

          <Cluster channelSendOptions="8"
                   channelStartOptions="3"
                   className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
              <Manager className="org.apache.catalina.ha.session.DeltaManager"
                       expireSessionsOnShutdown="false"
                       notifyListenersOnReplication="true" />
              <Channel className="org.apache.catalina.tribes.group.GroupChannel">
                  <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
                      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender" />
                  </Sender>
                  <Receiver address="10.1.1.1"
                            autoBind="0"
                            className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                            maxThreads="6"
                            port="4100"
                            selectorTimeout="5000" />
                  <!-- <Interceptor className="com.dm.tomcat.interceptor.DisableMulticastInterceptor" /> -->
                  <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor" staticOnly="true" />
                  <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector" />
                  <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
                      <Member className="org.apache.catalina.tribes.membership.StaticMember"
                              port="4101"
                              host="10.1.1.2"
                              uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1}" />
                      <Member className="org.apache.catalina.tribes.membership.StaticMember"
                              port="4102"
                              host="10.1.1.3"
                              uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2}" />
                  </Interceptor>
                  <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor" />
              </Channel>
              <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                     filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;" />
              <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener" />
          </Cluster>
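To illustrate how the configuration differs per node, the sketch below shows the node-specific parts as they might look on the second server (10.1.1.2) in the three-node example above. The uniqueId shown for the first node is an assumed value for illustration only; choose your own unique values, and substitute your actual IP addresses and ports throughout.

```xml
<!-- On node 2 (10.1.1.2): its own address goes in the Receiver, and the other
     nodes become Members. Example addresses only; the uniqueId for node 1 is an
     assumption -- use your own unique values. -->
<Receiver address="10.1.1.2"
          autoBind="0"
          className="org.apache.catalina.tribes.transport.nio.NioReceiver"
          maxThreads="6"
          port="4101"
          selectorTimeout="5000" />
...
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
    <Member className="org.apache.catalina.tribes.membership.StaticMember"
            port="4100"
            host="10.1.1.1"
            uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0}" />
    <Member className="org.apache.catalina.tribes.membership.StaticMember"
            port="4102"
            host="10.1.1.3"
            uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2}" />
</Interceptor>
```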
  3. After you complete this configuration, restart Tomcat so that all the nodes start communicating with each other. 
  4. Configure the load balancer in front of the cluster so that, in the event of a failure, it sends traffic to the remaining live nodes; otherwise, the cluster does not work as expected.

  5. For normal operation, configure the load balancer in front of the cluster to follow the standard recommendations, such as using sticky sessions and a timeout greater than the Smart IT session timeout.

  6. With this setup in place, if one of your Tomcat servers fails, users are not logged out of their sessions; they continue working on the remaining nodes. This is also useful for applying patches or updating your servers without causing a service interruption for your users.
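This procedure does not specify a load balancer product. As one hedged illustration of the sticky-session recommendation, an Apache HTTP Server (mod_proxy_balancer) front end could key stickiness off the jvmRoute suffix that Tomcat appends to JSESSIONID. The hostnames, ports, and timeout below are assumptions for the example, not values from this procedure; the route values must match each node's jvmRoute in server.xml.

```apache
# Hypothetical Apache httpd front end for the example cluster -- illustration only.
# "route" must match each node's jvmRoute from server.xml; hosts, ports, and the
# timeout are assumed values, not part of this procedure.
<Proxy "balancer://smartit">
    BalancerMember "http://10.1.1.1:9000" route=Node1
    BalancerMember "http://10.1.1.2:9000" route=Node2
    # Sticky sessions via the jvmRoute suffix that Tomcat appends to JSESSIONID
    ProxySet stickysession=JSESSIONID|jsessionid
    # Keep the balancer timeout greater than the Smart IT session timeout
    ProxySet timeout=3600
</Proxy>
ProxyPass        "/smartit" "balancer://smartit/smartit"
ProxyPassReverse "/smartit" "balancer://smartit/smartit"
```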

For more information on these steps, see KA #000173915.

