Learning about reconciliation

When you have multiple service providers that bring in data about your company assets or configuration items (CIs), the same asset might be stored incorrectly as multiple items, each carrying different information. Use reconciliation to eliminate such duplicated data, retain only relevant information, and store asset data that is correct and complete.

The reconciliation process of BMC CMDB compares data from multiple data providers to create a single, complete, and accurate production dataset. This production dataset with reliable data is also called the golden dataset. The golden dataset becomes a source of reference for other applications, such as ITSM, for various ITIL processes and activities. Use the golden dataset to make sure that applications that consume this data always work from a single, complete record of each configuration item (CI), which results in accurate monitoring and reporting of the company infrastructure.

You can start a reconciliation job in several ways: manually, on a schedule, as a continuous job, through the API, or from a Run Process workflow.
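
For example, a job started through the API might look like the following Python sketch. This is a hypothetical REST call: the endpoint path, job name, and response field are illustrative assumptions, not the documented BMC CMDB API, so check the API reference for your version before relying on it.

    import requests

    # Hypothetical values -- substitute the real server, job name, and
    # authentication details for your BMC CMDB installation.
    CMDB_URL = "https://cmdb.example.com/api"
    JOB_NAME = "NightlyIdentifyAndMerge"

    def start_reconciliation_job(token: str) -> str:
        """Ask the server to run a named reconciliation job; return its run ID."""
        response = requests.post(
            f"{CMDB_URL}/reconciliation/jobs/{JOB_NAME}/run",  # assumed path
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["runId"]  # assumed response field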

The reconciliation engine performs the following important reconciliation activities:

Identify: Identifies CIs that are the same entity in two or more datasets.
Merge: Merges CI attributes from a source dataset to a production dataset to create the most comprehensive information in a single configuration item (CI).

Reconciliation is also used for the following activities:

Compare: Compares instances in two datasets and produces a report.
Copy: Copies instances from one dataset to another.
Delete: Deletes instances from one or more datasets. This activity does not delete the dataset itself.
Purge: Deletes instances that have been marked as deleted from one or more datasets.
Execute: Executes multiple reconciliation jobs in a sequence.

[Image: high-level overview of a reconciliation job]

Structure of a reconciliation job

The reconciliation job is a container for reconciliation activities, and each activity consists of different components. The primary activities are identification and merging. A reconciliation job can have one or more activities, each of which defines one or more datasets and rules for that activity. In addition, you can use a qualification set to restrict the instances participating in a reconciliation activity.

Jobs can use standard or customized rules. Standard rules use defaults for identify and merge activities and automate the creation of reconciliation jobs. You can also create custom jobs that include different activities.
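
One way to picture this structure is as nested data: a job holds activities, and each activity names its datasets, rules, and an optional qualification set. The sketch below is a mental model only; the class names, dataset names, and rule names are illustrative, not BMC CMDB types.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Activity:
        kind: str                            # "Identify" or "Merge"
        datasets: List[str]                  # datasets participating in the activity
        rules: List[str]                     # rules applied by the activity
        qualification: Optional[str] = None  # restricts participating instances

    @dataclass
    class ReconciliationJob:
        name: str                            # the job is a container for activities
        activities: List[Activity] = field(default_factory=list)

    job = ReconciliationJob(
        name="Discovery to Production",
        activities=[
            Activity("Identify", ["Discovery", "Production"], ["Match on SerialNumber"]),
            Activity("Merge", ["Discovery", "Production"], ["Precedence by dataset"]),
        ],
    )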


Identification activities to match instances



For example, you can set an identification rule specifying that CIs from two different datasets match when their names are equal, such as when both are named Computer_1. When the rule finds a match, it tags those instances with the same reconciliation ID, and the reconciliation ID from the target dataset is copied into the source dataset.

Two CIs that share a reconciliation ID are considered different instances of the same item. After CIs are identified in this way, they are ready for merging, based on which dataset is considered to have the most reliable information.

In another example, a rule intended to identify computer system instances might specify that their IP addresses be equal. When the rule finds a match, it tags the matching instances with the same reconciliation ID.
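
Conceptually, the tagging works like the following sketch, which assumes CIs are plain dictionaries matched on a single attribute; the record shapes and the ID value are illustrative, not the engine's actual data model.

    # Illustrative CI records; in BMC CMDB these are class instances
    # such as BMC_ComputerSystem, not plain dictionaries.
    source = [{"Name": "Computer_1", "ReconciliationId": None}]
    target = [{"Name": "Computer_1", "ReconciliationId": "RE0042"}]

    def identify(source_cis, target_cis, attribute="Name"):
        """Tag each source CI that matches a target CI
        with the target's reconciliation ID."""
        for src in source_cis:
            for tgt in target_cis:
                if src[attribute] == tgt[attribute]:
                    # The ID from the target dataset is copied into the source dataset.
                    src["ReconciliationId"] = tgt["ReconciliationId"]
                    break

    identify(source, target)
    print(source[0]["ReconciliationId"])  # RE0042 -- both CIs now share one ID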

You can also manually identify instances in an Identify activity.

Note

An instance must be identified before it can be compared or merged.


Merge activities to merge datasets into a reconciled target dataset

Consider a merge operation involving data from three different datasets. Each attribute from each dataset is given a precedence value based on how reliable that dataset is considered: the higher the value, the higher the priority that attribute from that dataset has over the others. The result added to the production dataset is the most reliable data merged from all sources. This is the production data that other applications access for various ITIL processes and activities.
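
The following sketch shows how attribute-level precedence merging can work, under simplified assumptions: one precedence value per dataset, CIs as plain dictionaries, and None standing for a missing attribute. The dataset names and precedence values are made up for illustration.

    # Assumed precedence per dataset: a higher value wins for each attribute.
    PRECEDENCE = {"Discovery": 400, "Asset": 600, "Import": 200}

    cis = {
        "Discovery": {"Name": "Computer_1", "SerialNumber": "SN-123", "Owner": None},
        "Asset":     {"Name": "Computer_1", "SerialNumber": None, "Owner": "Alice"},
        "Import":    {"Name": "computer-1", "SerialNumber": "SN-123", "Owner": "Bob"},
    }

    def merge(instances, precedence):
        """Build one production CI, taking each attribute from the
        highest-precedence dataset that supplies a value."""
        merged = {}
        attributes = {attr for ci in instances.values() for attr in ci}
        for attr in attributes:
            candidates = [
                (precedence[ds], ci[attr])
                for ds, ci in instances.items()
                if ci.get(attr) is not None
            ]
            if candidates:
                merged[attr] = max(candidates)[1]  # value from highest precedence
        return merged

    print(merge(cis, PRECEDENCE))
    # {'Name': 'Computer_1', 'SerialNumber': 'SN-123', 'Owner': 'Alice'}

Here the Asset dataset wins Name and Owner because its precedence is highest, while SerialNumber comes from Discovery because the Asset dataset has no value for it.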

