Database Clustering
Database Clustering is a mechanism that allows MainView Console Management configuration information to be replicated across two or more computers, sharing the configuration and keeping it in sync. Each computer in the cluster holds a complete replica of the configuration database, and changes to the configuration are automatically replicated to all other computers in the cluster.
In addition to configuration data replication, CCS and MVCA servers may specify the host on which they should run. This allows administrators to configure both a primary CCS server and an alternate CCS server in one place, and then set the 'startup host' so that the primary runs on one server and the alternate runs on a different server.
User management is also much easier with a cluster, since users can be created on one server and that user's information is replicated to all other servers. If the user needs to connect to a CCS server running on a different host, only one password is required. Also, when a user changes their password, that password change is replicated to all other servers.
Before a cluster is created, ensure that saved configurations exist for each server that will become part of the cluster. If creating the cluster fails for any reason, you can revert to the previous configuration by restoring the saved configuration.
Setting up Secure Shell
Many cluster functions are performed by using the Secure Shell (SSH), an integral part of the Linux operating system. The most secure way to set up SSH is to create public/private key pairs and to copy the public keys to all other members of the cluster. MVCM clustering requires that:
- The SSH service be enabled on all cluster members.
- Root login using SSH be permitted ('PermitRootLogin yes' in /etc/ssh/sshd_config)
- Public key authentication be enabled ('PubkeyAuthentication yes' in /etc/ssh/sshd_config)
- Public/private key pairs be created on all cluster members and public keys exchanged.
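As a sketch, the sshd_config settings required by the list above would look like the following (the restart command varies by distribution; 'systemctl restart sshd' is typical on systemd-based systems):

```shell
# required settings in /etc/ssh/sshd_config
PermitRootLogin yes
PubkeyAuthentication yes
```

After editing /etc/ssh/sshd_config, restart the SSH service for the changes to take effect.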
The Linux commands used to manually create and exchange keys are described below; note that a number of MVCM shell scripts are available to assist in the process.
Create the public/private key pair
Log in to each server and create the key pair with the ssh-keygen command:
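A minimal invocation, consistent with the sample session that follows (RSA key type assumed from the sample output), would be:

```shell
# create an RSA public/private key pair for root;
# press Enter at each prompt to accept the default
# file name and an empty passphrase
ssh-keygen -t rsa
```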
A sample session creating the key pair will look like:
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
0b:6c:f5:af:77:5d:3f:fc:48:73:cb:15:c4:17:22:0e root@hosta
The MVCM helper script /usr/iocinst/bin/ssh-keygen.sh may also be used to generate a 2048-bit RSA keypair:
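Assuming the script takes no arguments, invoking it would look like:

```shell
# MVCM helper script; generates a 2048-bit RSA keypair
/usr/iocinst/bin/ssh-keygen.sh
```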
Generating SSH keypair for root@lnx01aa.bmc.com
*** when prompted for file name just hit Enter ***
Enter file in which to save the key (/root/.ssh/id_rsa):
...keypair created
Sharing the public keys
The public key must be copied to all other members of the cluster and added to the authorized_keys file.
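One way to do this manually, using the hosta/hostb host names from this example (ssh-copy-id is an alternative on systems that provide it), is to copy the key with scp and then append it over ssh. Each command prompts for root's password on hostb, which matches the two prompts shown below:

```shell
# copy hosta's public key to hostb
scp /root/.ssh/id_rsa.pub root@hostb:/tmp/hosta_id_rsa.pub
# append it to root's authorized_keys on hostb
ssh root@hostb "cat /tmp/hosta_id_rsa.pub >> /root/.ssh/authorized_keys"
```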
Enter root's password for hostb
Enter root's password for hostb
The MVCM helper script /usr/iocinst/bin/ssh-getkey.sh may also be used to retrieve the public key of the host you specify on the command line and add it to this server's list of authorized keys:
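Using the host name from the sample output below, the invocation would be:

```shell
# fetch the remote host's public key and add it to
# this server's authorized_keys
/usr/iocinst/bin/ssh-getkey.sh mvcmdemo.bmc.com
```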
Getting public key for root@mvcmdemo.bmc.com
*** enter remote password if prompted ***
Warning: Permanently added 'mvcmdemo.bmc.com,172.21.46.31' (RSA) to the list of known hosts.
root@mvcmdemo.bmc.com's password:
id_rsa.pub 100% 409 0.4KB/s 00:00
...retrieved mvcmdemo.pub
...saving old authorized_keys as authorized_keys_14.03.07@17:07
...public key installed!
After sharing the public key from hosta, you should be able to ssh from hosta to hostb without providing a password; adding hosta's public key to hostb's authorized_keys list is what makes this possible. This may be tested by entering the command 'ssh hostb' from hosta and verifying that you are logged into hostb without having to enter a password.
Now that root can securely log in and copy files to hostb, the reverse must be done: create a key pair on hostb and copy its public key to hosta.
Creating a cluster
Several scripts are provided for managing members of a cluster. To create a cluster, first designate one server that will be the source of all configuration data. When partner computers are added to the cluster, their configuration data is permanently deleted.
On the computer designated as the first cluster member (hosta in this example), run the mvcm_enable_cluster_master.sh script.
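The script's location is an assumption here, inferred by analogy with the mvcm_disable_cluster.sh path shown later in this section; check your installation for the actual path:

```shell
# path assumed by analogy with mvcm_disable_cluster.sh
/usr/iocinst/apps/db-derby/mvcm_enable_cluster_master.sh
```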
The mvcm_enable_cluster_master.sh script runs a series of tests to ensure that SSH is properly set up and that all cluster members have the same MVCM software level loaded. If the tests pass, database clustering is enabled on hosta.
After enabling clustering on the first member, run the mvcm_enable_cluster_partner.sh script on each of the remaining cluster members. The script prepares the current server for clustering, copies the database from the host specified, and restores it.
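A sketch of the invocation, with the path assumed by analogy with mvcm_disable_cluster.sh and hosta given as the member whose database should be copied:

```shell
# path assumed; hosta is the first cluster member whose
# database is copied and restored on this server
/usr/iocinst/apps/db-derby/mvcm_enable_cluster_partner.sh hosta
```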
Working with a cluster
Once a cluster is established, configuration changes made on one server are propagated to all other servers. Although each member of the cluster is treated equally, it is often easier to always go to a specific host to perform configuration changes.
When a member of the cluster is stopped for any reason, it will automatically obtain a database backup from another member of the cluster and restore it before becoming an active member of the cluster again.
Disabling a cluster
Disabling a cluster is accomplished by running the mvcm_disable_cluster.sh script on each member of the cluster:
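Run locally on each member (the script path is the same one used in the remote example later in this section):

```shell
/usr/iocinst/apps/db-derby/mvcm_disable_cluster.sh
```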
Since the SSH public keys are shared, this may all be done from one server. For example, if you are logged into hosta you may use SSH to disable clustering on hostb:
ssh hostb /usr/iocinst/apps/db-derby/mvcm_disable_cluster.sh
Upgrading software
Before upgrading MVCM software, the cluster must be disabled. The cluster may be re-enabled after the upgrade has been performed on all members of the cluster.