
Converting BMC Discovery standalone machines to a cluster

If you have multiple standalone machines on which you have installed BMC Discovery, and each of these installations synchronizes with the CMDB, you might want to convert them to a cluster.

Background

Where multiple appliances synchronize to the CMDB, each appliance uses its own dataset, and the CMDB reconciliation rules merge the datasets into a golden dataset called BMC.ASSET.

Multiple appliances can be replaced with a single cluster. When you create a cluster, only one machine can be in the non-default state, that is, have any changed configuration or any data. You can upgrade one of the machines and retain its data; the others can be upgraded, but they must be reset when you create the cluster. The data from the machines that are reset is removed and must be rescanned.

The algorithm that identifies hosts (actually root nodes, which are Hosts, Network Devices, SNMP Managed Devices, Printers, Storage Devices, and Management Controllers) may come up with a different identifier, or key, each time a root node is newly created in the datastore. However, CMDB sync relies on the keys of the root nodes remaining the same; if they are not the same, CMDB sync may create duplicate CIs.

A similar situation arises when the tw_model_wipe utility is used in a BMC Discovery system that synchronizes data to CMDB.

When a host is scanned successfully but subsequently cannot be reached, the host node is aged and deleted. On its deletion, an event is sent to the CMDB and the corresponding CI is deleted. If tw_model_wipe is used, or the machine's configuration is reset on joining a cluster, the event is never sent. Consequently, the CI representing the aged host is never removed.

BMC Discovery provides a root node key export utility that enables you to extract the keys and identifying information of root nodes from the appliances that you intend to reset. You can then upload the information to the appliance that will remain in the non-default state during cluster creation. The information is stored in the datastore as RootNodeKeyInfo nodes.

When the cluster scans the estate, any potentially new root node is compared against the RootNodeKeyInfo nodes, and if a match is found for the discovery target, a new node is created using the key from the RootNodeKeyInfo node and the RootNodeKeyInfo node is deleted. If no match is found in the RootNodeKeyInfo nodes, then a new node is created with a new unique key.
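
For example, you can watch the imported key information being consumed as scanning proceeds. The following is a minimal sketch using the tw_query count shown later in this topic, run on the appliance that holds the imported key information (the prompt is generic); because matched RootNodeKeyInfo nodes are deleted, the count decreases towards zero as the imported keys are reused:

    [tideway@appliance ~]$ # Count the imported RootNodeKeyInfo nodes before scanning
    [tideway@appliance ~]$ tw_query --no-headings "search RootNodeKeyInfo" | wc -l
    [tideway@appliance ~]$ # Repeat after a scan; entries that matched scanned root nodes have been deleted
    [tideway@appliance ~]$ tw_query --no-headings "search RootNodeKeyInfo" | wc -l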

To move node keys when creating a cluster

  1. For each appliance that you intend to reset:
    1. Run the tw_root_node_key_export utility to create the XML key information file. Use the appliance name as part of the filename. For example, to export the appliance01 keys:

      [tideway@appliance01 ~]$ tw_root_node_key_export appliance01_key_info.xml 
    2. Determine the number of root node keys in the XML file. Enter:

      [tideway@appliance01 ~]$ grep " <root-node-key-info " appliance01_key_info.xml | wc -l

      Make a note of the number. 

  2. On the appliance that will remain in the non-default state during cluster creation (the coordinator appliance):

    1. Determine the number of existing RootNodeKeyInfo nodes and make a note of it. Typically, you start with zero RootNodeKeyInfo nodes. Enter:

      [tideway@appliance100 ~]$ tw_query --no-headings "search RootNodeKeyInfo" | wc -l
    2. Copy the XML files from the other appliances onto the coordinator appliance.

    3. Start a screen session so that the import is not interrupted if your connection to the appliance is lost. Enter:

      [tideway@appliance100 ~]$ screen
    4. Run the import utility. For example, to import appliance01 and appliance02 keys into appliance100:

      [tideway@appliance100 ~]$ tw_root_node_key_import appliance01_key_info.xml appliance02_key_info.xml 
    5. To verify that the import was successful, determine the number of RootNodeKeyInfo nodes and make a note of it. Enter:

      [tideway@appliance100 ~]$ tw_query --no-headings "search RootNodeKeyInfo" | wc -l

      If the number of keys found in this step is within two of the number of keys in the import files (plus the number of existing RootNodeKeyInfo nodes, if any), you can conclude that the import was successful and proceed to the next step.

      If there is a mismatch in the number of keys found, the import may have been unsuccessful and you should open a case with BMC Support.

    6. Create a cluster, adding the other appliances as members.
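
The individual commands in this procedure can also be strung together from the command line. The following is a minimal sketch for two appliances to be reset (appliance01 and appliance02) and a coordinator appliance (appliance100); the hostnames, the file locations, and the use of scp for the copy step are illustrative assumptions:

    [tideway@appliance01 ~]$ # On each appliance to be reset: export the keys and note the entry count
    [tideway@appliance01 ~]$ tw_root_node_key_export appliance01_key_info.xml
    [tideway@appliance01 ~]$ grep " <root-node-key-info " appliance01_key_info.xml | wc -l

    [tideway@appliance100 ~]$ # On the coordinator: copy the exported files across (scp is one option)
    [tideway@appliance100 ~]$ scp tideway@appliance01:appliance01_key_info.xml .
    [tideway@appliance100 ~]$ scp tideway@appliance02:appliance02_key_info.xml .

    [tideway@appliance100 ~]$ # Import inside a screen session, then verify the RootNodeKeyInfo count
    [tideway@appliance100 ~]$ screen
    [tideway@appliance100 ~]$ tw_root_node_key_import appliance01_key_info.xml appliance02_key_info.xml
    [tideway@appliance100 ~]$ tw_query --no-headings "search RootNodeKeyInfo" | wc -l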

Import warnings

When you run the node key importer, you may encounter warnings. For example:

  • The appliances had overlapping scan ranges.
  • A host key was contained in the import data, but you had already scanned the host and generated a new key.

These warnings are also logged in $TIDEWAY/log/tw_root_node_key_import.log. For example:
test.xml: Imported 2 root node keys.

Information for 4 root nodes was not imported because they match the following existing nodes:

    sw-london-01.bmc.com (kind: NetworkDevice, key: GIvkT/7rwRw62h3crLcKFg==)
    London (kind: SNMPManagedDevice, key: V1U+clotkQrx3GAaMU9u5Q==)
    BMC-Main-Printer (kind: Printer, key: BT2Veaa2IQFuelbAVUs6Xw==)
    localhost (kind: Host, key: dEFkpJtnDbErqt5iOWImlg==)

Imported information for 1 root node matches an existing root node information:

    Existing node: mldev (root_node_kind: Host, key: 12345)
    Imported node: mldev (root_node_kind: Host, key: 1234) 

In this run, six RootNodeKeyInfo nodes were considered. The first two were imported; no additional information is logged for these. The next four were not imported because they match nodes that are already in the datastore.

There is also a warning that one of the imported RootNodeKeyInfo nodes matches another RootNodeKeyInfo node that has previously been imported. However, it is not identical; the key is different. The RootNodeKeyInfo node is imported because it is unclear whether the two entries refer to the same host; for example, two appliances may have had overlapping scan ranges and both scanned the same host.
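
To review these warnings after an import has completed, you can read the log file directly. A minimal sketch; the grep pattern simply matches the message text shown in the example above and is not a special keyword:

    [tideway@appliance100 ~]$ # Page through the full import log
    [tideway@appliance100 ~]$ less $TIDEWAY/log/tw_root_node_key_import.log

    [tideway@appliance100 ~]$ # Or pick out only the summary lines
    [tideway@appliance100 ~]$ grep -i "imported" $TIDEWAY/log/tw_root_node_key_import.log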

Unused imported keys

You can view a report on unused imported keys. In the Discovery Deployment section of the main Reports page, the Root node key information which has not been used report provides this information.

To preserve IDs when using tw_model_wipe

The tw_model_wipe utility deletes the contents of the datastore. See the documentation before attempting to use it. On the BMC Discovery appliance from which you plan to delete the contents of the datastore:

  1. Run the tw_root_node_key_export utility to create the XML key information file. Use the appliance name as part of the filename. For example:

    [tideway@appliance01 ~]$ tw_root_node_key_export appliance01_key_info.xml 
  2. Determine the number of root node keys in the XML file. Enter:

    [tideway@appliance01 ~]$ grep " <root-node-key-info " appliance01_key_info.xml | wc -l

    Make a note of the number.

  3. Run the tw_model_wipe utility. For example:

    [tideway@appliance01 ~]$ tw_model_wipe --force
  4. Determine the number of RootNodeKeyInfo nodes and make a note of it. The number should be zero after tw_model_wipe. Enter:

    [tideway@appliance01 ~]$ tw_query --no-headings "search RootNodeKeyInfo" | wc -l
  5. Start a screen session so that the import is not interrupted if your connection to the appliance is lost. Enter:

    [tideway@appliance01 ~]$ screen
  6. Run the import utility. For example:

    [tideway@appliance01 ~]$ tw_root_node_key_import appliance01_key_info.xml
  7. To verify that the import was successful, determine the number of RootNodeKeyInfo nodes and make a note of it. Enter:

    [tideway@appliance01 ~]$ tw_query --no-headings "search RootNodeKeyInfo" | wc -l

    If the number of keys found in this step is within two of the number of keys in the import file (plus the number of existing RootNodeKeyInfo nodes, if any), you can conclude that the import was successful.

    If there is a mismatch in the number of keys found, the import may have been unsuccessful and you should open a case with BMC Support.
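
If you prefer not to note the two counts down by hand, you can compare them directly. A minimal sketch, assuming the export file is appliance01_key_info.xml in the tideway user's home directory:

    [tideway@appliance01 ~]$ # Count the entries in the export file and the RootNodeKeyInfo nodes in the datastore
    [tideway@appliance01 ~]$ exported=$(grep " <root-node-key-info " appliance01_key_info.xml | wc -l)
    [tideway@appliance01 ~]$ imported=$(tw_query --no-headings "search RootNodeKeyInfo" | wc -l)
    [tideway@appliance01 ~]$ echo "exported=$exported imported=$imported"
    [tideway@appliance01 ~]$ # The two numbers should agree to within 2; a larger mismatch warrants a case with BMC Support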
