Cleaning up normalized CI relationships
Perform the steps provided in this topic if you are upgrading from versions older than 8.1.2.
If you enabled the Impact Normalization Feature option from the Normalization Engine console before you upgraded BMC Atrium Core, relationship names for all CIs are normalized following the old best-practice relationship rules and impact relationships are generated following the old best-practice impact normalization rules.
The datamig utility cleans up the CIs in the staging datasets for which relationship names were normalized following the old best-practice relationship rules and impact relationships were generated following the old best-practice impact normalization rules.
Note
After upgrading BMC Atrium Core, if you enable the Impact Normalization Feature option from the Normalization Engine console, relationship names for all CIs are normalized following the 131 new best-practice relationship rules and impact relationships are generated following the 21 new best-practice impact normalization rules.
Changing the datamig file to match your environment
Before you run the datamig utility, you must change the datamig file to match your environment.
- From the bin folder under cmdb\sdk in the BMC Atrium Core home directory, open the datamig.cmd file (Windows) or the datamig.sh file (UNIX) in a text editor.
- Ensure that the datamig file (.cmd or .sh, depending on your operating system) contains the correct jar file name. For example, if you are updating the datamig.cmd file in version 19.02, change the jar file name from arapi91_build001.jar to arapi91_build006.jar. The jar file is present in the cmdb\sdk\bin folder, in the format arapi91_build<number>.jar, so you can verify the build number before updating the datamig file.
- Set the ATRIUM_CORE, AR_SERVER_NAME, AR_SERVER_PORT, and JAVA_HOME environment variables to reflect the path to the BMC Atrium Core installation directory, the AR System server name, the AR System port, and the Java home path, respectively.
- Save your changes.
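For illustration, the environment-variable section of datamig.cmd might look like the following after editing. Every path and server value shown here is an assumption for a hypothetical installation, not a default; substitute the values for your own environment:

```shell
:: Illustrative values only -- replace each one with the values for your environment
set ATRIUM_CORE=C:\Program Files\BMC Software\AtriumCore
set AR_SERVER_NAME=ar-server.example.com
set AR_SERVER_PORT=0
set JAVA_HOME=C:\Program Files\Java\jre
```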
Running the datamig utility
When you run the datamig utility, the Normalization Engine creates a job called BMC_ResetAiData_<DataSetName>_stage0. This job runs in the background until all the CIs are cleaned up. After the data is reset and the CIs are cleaned up, you can delete the job.
If you are upgrading BMC Atrium Core in an AR System server group environment, run the datamig utility on the primary server only. When you are prompted to run the datamig utility on the secondary and subsequent members of the server group, enter N.
To run the datamig utility on Windows
- For each dataset for which you had enabled the Impact Normalization Feature option, perform the following steps:
- Run the following command:
datamig.cmd -dataset <DatasetName>
For example, if the name of your staging dataset is BMC.ADDM, run the following command:
datamig.cmd -dataset BMC.ADDM
- When prompted, enter valid values for the AR Server username and password.
- Wait for the job to finish.
- Check the results, as explained in Taking action on the results of the datamig utility.
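Taken together, a Windows run for one staging dataset might look like the following sketch. The installation path and the BMC.ADDM dataset name are assumptions for illustration, not defaults:

```shell
:: Hypothetical session -- adjust the path and dataset name for your environment
cd "C:\Program Files\BMC Software\AtriumCore\cmdb\sdk\bin"
datamig.cmd -dataset BMC.ADDM
:: Enter the AR Server username and password when prompted,
:: then wait for the BMC_ResetAiData_BMC.ADDM_stage0 job to finish.
```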
To run the datamig utility on UNIX
- Run the following command:
chmod 555 datamig.sh
- For each dataset for which you had enabled the Impact Normalization Feature option, perform the following steps:
- Run the following command:
datamig.sh -dataset <DatasetName>
For example, if the name of your staging dataset is BMC.ADDM, run the following command:
datamig.sh -dataset BMC.ADDM
- When prompted, enter valid values for the AR Server username and password.
- Wait for the job to finish.
- Check the results, as explained in Taking action on the results of the datamig utility.
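The UNIX steps above can be sketched as a single session. The installation path and the BMC.ADDM dataset name are assumptions for illustration, not defaults:

```shell
# Hypothetical session -- adjust the path and dataset name for your environment
cd /opt/bmc/AtriumCore/cmdb/sdk/bin
chmod 555 datamig.sh
./datamig.sh -dataset BMC.ADDM
# Enter the AR Server username and password when prompted,
# then wait for the BMC_ResetAiData_BMC.ADDM_stage0 job to finish.
```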
Taking action on the results of the datamig utility
After you run the datamig utility, three files are created at the following location:
- (Windows) C:\<CurrentUserName>\AppData\Local\Temp
- (UNIX) /tmp
These files contain the results of the datamig utility, as explained in the following table.
File name | Description | Possible action |
---|---|---|
fixedImpacts_MM_DD_YYYY_HH_MM.csv | Contains the impact relationships that are reset. | No action is needed on the results in this file. |
discrepancy_MM_DD_YYYY_HH_MM.csv | Contains rules that either do not match the qualification, or have modified impact attribute values. | Either leave the impact relationships as is, or reset the impact relationships for selected CIs. |
NoRules_MM_DD_YYYY_HH_MM.csv | Contains the impact relationships for which no rules are defined. | Either create new rules to define the impact relationships, which allows the Normalization Engine to set impact relationships on similar CIs, or create no new rules. |
Note
MM_DD_YYYY_HH_MM represents the date and the time, in hours and minutes, when the files are created. For example, after you run the datamig utility, the file that is created might be named fixedImpacts_07_25_2011_03_43.csv.
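The naming convention can be sketched in shell; the date command below simply reproduces the MM_DD_YYYY_HH_MM pattern described in the note:

```shell
# Build a result-file name using the MM_DD_YYYY_HH_MM convention
STAMP=$(date +%m_%d_%Y_%H_%M)
FILE="fixedImpacts_${STAMP}.csv"
echo "$FILE"
```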
Resetting the impact relationships
You can reset the impact relationships that were not reset by the datamig utility. These impact relationships are contained in the discrepancy_MM_DD_YYYY_HH_MM.csv file.
- From the discrepancy_MM_DD_YYYY_HH_MM.csv file, find the impact relationships that you want to reset, and change the corresponding Overwrite column entry to Y.
- Save the file in CSV format.
- Run the following command:
- (Windows)
datamig.cmd -resetI <filename>
- (UNIX)
datamig.sh -resetI <filename>
For example, if you are using a UNIX computer and you want to reset the impact relationships from the discrepancy_MM_DD_YYYY_HH_MM.csv file, run the following command:
datamig.sh -resetI discrepancy_MM_DD_YYYY_HH_MM.csv
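Step 1 above is a manual edit, but for a large file you could script it. The sketch below uses awk on an illustrative three-column layout; the real discrepancy file has more columns, so treat the column names and field numbers here as assumptions:

```shell
# Create a small sample in the shape of a discrepancy file (columns are illustrative)
cat > discrepancy_sample.csv <<'EOF'
CIName,RuleName,Overwrite
host01,RuleA,N
host02,RuleB,N
EOF
# Flip the Overwrite column to Y for the CI whose relationships you want reset (host02)
awk -F',' 'BEGIN{OFS=","} $1=="host02"{$3="Y"} {print}' \
    discrepancy_sample.csv > discrepancy_edited.csv
cat discrepancy_edited.csv
```

After saving the edited file in CSV format, pass its name to datamig.cmd -resetI or datamig.sh -resetI as shown above.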
Cleaning up the old best-practice impact normalization rules
After you run the datamig utility for all staging datasets, clean up the old best-practice impact normalization rules.
Note
If you run the datamig utility for all the staging datasets at one time, expect increased CPU consumption. A safer approach is to run the datamig utility for one staging dataset at a time. However, you must run this utility for each staging dataset before you delete the old best-practice impact normalization rules.
To clean up the old best-practice impact normalization rules, run the following command:
- (Windows)
datamig.cmd -clean
- (UNIX)
datamig.sh -clean
Writing data to the production dataset
After you clean up data in the staging datasets, you must write the cleaned-up data to the production dataset (such as BMC.ASSET). In BMC Atrium Core, only the Reconciliation Engine can write to the production dataset. To write the data to the production dataset, create a standard Identification and Merge job or run an existing Identification and Merge job. For more information, see Creating a standard reconciliation job in the BMC Atrium Core documentation.