Configuring high availability replication


MainView Middleware Administrator (MVMA) supports high availability through the MongoDB® "Replica Set" feature. As such, configuring high availability for MVMA entails configuring MongoDB®. See the Configuration section (below) for further information.

Note

If you are upgrading from a previous version of MVMA to MVMA 9.0.00, you must migrate the MVMA database to use the MongoDB WiredTiger storage engine.

Overview and concepts

An MVMA installation ships with two service components: a Java-based runtime that implements the MVMA product functionality and a MongoDB® service that maintains the MVMA data.

In the default configuration, the MVMA service controls the lifecycle of the MongoDB® service instance: when MVMA starts, it starts the MongoDB® service and when MVMA stops, it stops the MongoDB® service.

In a high availability configuration, the MongoDB® service should run independently of the MVMA service so that it can remain available, keep its data current, and participate in the data cluster. This allows Product Administrators to stop and start the MVMA service without interfering with clustering.

An MVMA cluster consists of one primary instance and two or more secondary instances. A client can connect to any MVMA service, but every MVMA service reads its data from, and writes its data to, the current primary MongoDB® instance.

If the primary MongoDB® service instance fails, a new primary is elected from the remaining instances. When a failed MongoDB® service instance restarts, it rejoins the cluster and becomes available as a secondary node.

An MVMA cluster must have a minimum of three nodes running on three separate hosts in order to provide replication. Three nodes ensure that a majority of replica set members remains available to elect a primary MongoDB® instance. If only one instance in a replica set remains reachable, it cannot form a majority, so the database becomes read-only and no writes can occur; it is therefore important to have enough instances to avoid this. Three is the minimum safe number of instances, but more is always better.

Note

All MongoDB® replica set members must run on the same operating system and the same version of the operating system (but on separate hosts).

Configuration

The configuration of the cluster consists of decoupling the MongoDB® service lifecycle from the MVMA service and configuring a MongoDB® replica set.

Decoupling the MVMA service and MongoDB® lifecycle

To ensure the MVMA service and the MongoDB® service have separate lifecycles:

  1. Ensure the MVMA service is not running (this includes the MongoDB® service instance).
  2. Edit ${install directory}/configuration/services/com.bmc.mmadmin.service.cache.properties and change the "mongoControl" property from "true" to "false".
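As a sketch of step 2 (other property names in this file are omitted and may differ by release), the edited line in com.bmc.mmadmin.service.cache.properties would read:

```properties
# When false, the MVMA service no longer starts and stops
# the bundled MongoDB service with its own lifecycle
mongoControl=false
```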

Note

On Windows platforms, the MongoDB® service instance can be installed as a Windows Service Control Manager managed service by issuing the following command at a command prompt (requires administrative privileges):

${install directory}/bin/installMongodService.bat

This installs a service called MainView Middleware Administrator Mongod Service in the Services management applet. You might need to assign a user ID and password to the service through the applet. Note that the MongoDB® service should be started and stopped through the Services applet; the command interface only installs or removes the service, it does not start or stop it.

Note

If you have an existing installation and you are changing to a high availability configuration, you must copy the ${install directory}/data directory to all cluster hosts at this point.

Configuring the MongoDB® replica set

To set up the MainView Middleware Administrator MongoDB® replica set:

  1. Edit ${install directory}/etc/data.conf and add a line containing "replSet=bmmadmin". This must be done for each installation of a replica set member. Also remove or comment out the setting bind_ip=127.0.0.1 so that the other replica set members can reach this instance.
    On Linux platforms, add the line "fork=true" to ${install directory}/etc/data.conf so that the MongoDB® service runs in the background when started.
  2. Start the MongoDB® service by executing the following command on each cluster node:

    $ cd ${install directory}
    $ ./mongodb/mongodb-linux-x86_64-x.x.x/bin/mongod -f etc/data.conf
  3. Start the mongo® shell (${install directory}/mongodb/mongodb-linux-x86_64-x.x.x/bin/mongo) on one of the replica set hosts. The mongo® shell and the following commands (in steps 4 and 5) should only be run on one of the server machines.
  4. At the shell prompt, execute rs.initiate() and then execute rs.add("hostname") for each host that is to be configured in the replica set. If non-default ports are being used, "hostname:port" may be used in the rs.add() call.
  5. Verify the configuration using rs.status(). This will show an entry for each host in the replica set and you will be able to see the state of the host in the replica set.
  6. Start the MVMA services on each host.
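Putting step 1 together, a Linux replica set member's etc/data.conf might look like the following sketch (settings shipped in the file, such as port and dbpath, are left as-is and omitted here):

```ini
# Existing shipped settings (port, dbpath, and so on) remain unchanged.
# All members must use the same replica set name.
replSet=bmmadmin
# Linux only: run mongod in the background.
fork=true
# Commented out so other replica set members can connect.
#bind_ip=127.0.0.1
```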
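Steps 4 and 5 can be sketched as a mongo® shell session on one replica set host. The host names below are placeholders for your environment:

```javascript
// Run these commands on ONE replica set host only.
rs.initiate()                      // initiate the replica set on this host
rs.add("host2.example.com")        // add each remaining member by host name
rs.add("host3.example.com:27017")  // or "hostname:port" for a non-default port
rs.status()                        // verify: one PRIMARY, the rest SECONDARY
```

rs.status() lists an entry per member; the stateStr field shows each member's role in the replica set.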
