Cleaning up normalized CI relationships

If you enabled the Impact Normalization Feature option from the Normalization Engine console before you upgraded to BMC Atrium Core 8.1, relationship names for all CIs were normalized following the old best-practice relationship rules, and impact relationships were generated following the old best-practice impact normalization rules.

The datamig utility cleans up the CIs in the staging datasets for which relationship names were normalized following the old best-practice relationship rules and impact relationships were generated following the old best-practice impact normalization rules.

Note

After upgrading to BMC Atrium Core 8.1, if you enable the Impact Normalization Feature option from the Normalization Engine console, relationship names for all CIs are normalized following the 131 new best-practice relationship rules, and impact relationships are generated following the 21 new best-practice impact normalization rules.

Changing the datamig file to match your environment

Before you run the datamig utility, you must change the datamig file to match your environment.

  1. From the bin folder that resides under the cmdb\sdk folder under the BMC Atrium Core home directory, open the datamig.bat file (Windows) or the datamig.sh file (UNIX) in a text editor.
  2. Set the ATRIUM_CORE, AR_SERVER_NAME, AR_SERVER_PORT, and JAVA_HOME environment variables to the BMC Atrium Core installation directory path, the AR System server name, the AR System port, and the Java home path, respectively. (An example of these assignments follows this procedure.)
  3. Save your changes.
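
For reference, after editing, the environment-variable section of datamig.sh might look similar to the following sketch. The values shown are placeholders only; substitute the paths, server name, and port for your own environment, and keep whatever assignment syntax the shipped file already uses.

    ATRIUM_CORE=/opt/bmc/AtriumCore            # BMC Atrium Core installation directory (example path)
    AR_SERVER_NAME=arserver01.example.com      # AR System server name (example value)
    AR_SERVER_PORT=0                           # AR System port (example value)
    JAVA_HOME=/usr/java/default                # Java home path (example path)
    export ATRIUM_CORE AR_SERVER_NAME AR_SERVER_PORT JAVA_HOME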

Running the datamig utility

When you run the datamig utility, the Normalization Engine creates a job called BMC_ResetAiData_<DataSetName>_stage0. This job runs until all the CIs are cleaned up. You can ignore this job, or you can delete it after the cleanup is complete.

If you choose to upgrade to BMC Atrium Core in an AR System server group environment, run the datamig utility on the primary server only. When you are prompted to run the datamig utility on the secondary and subsequent members of the server group, enter N.

To run the datamig utility on Windows

  1. For each dataset for which you had enabled the Impact Normalization Feature option, do the following:
    • Run the following command: datamig.bat -dataset <DatasetName>
      For example, if the name of your staging dataset is BMC.ADDM, run the following command: datamig.bat -dataset BMC.ADDM
    • When prompted, enter valid values for AR Server username and password.
    • Wait for the job to finish.
  2. Check the results, as explained in Taking action on the results of the datamig utility.

To run the datamig utility on UNIX

  1. Run the following command to make the script executable:
    chmod 555 datamig.sh
  2. For each dataset for which you had enabled the Impact Normalization Feature option, do the following (a scripted sketch for running several datasets in sequence follows this procedure):
    • Run the following command: datamig.sh -dataset <DatasetName>
      For example, if the name of your staging dataset is BMC.ADDM, run the following command: datamig.sh -dataset BMC.ADDM
    • When prompted, enter valid values for AR Server username and password.
    • Wait for the job to finish.
  3. Check the results, as explained in Taking action on the results of the datamig utility.
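
If you prefer to script the UNIX runs, a minimal sketch such as the following, run from the bin folder, processes one staging dataset at a time. The dataset names in the list are examples only; replace them with the staging datasets for which you had enabled the Impact Normalization Feature option. The utility still prompts for the AR Server user name and password on each run.

    #!/bin/sh
    # Example staging dataset names; replace with the datasets in your environment.
    for DATASET in BMC.ADDM BMC.IMPORT.CONFIG
    do
        echo "Cleaning up staging dataset: $DATASET"
        ./datamig.sh -dataset "$DATASET"
    done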

Taking action on the results of the datamig utility

After you run the datamig utility, three files are created at the following location:

  • (Windows) C:\Windows\temp
  • (UNIX) /tmp

These files contain the results of the datamig utility, as described below. (A command for locating the files on a UNIX computer follows the file descriptions.)

    File name: fixedImpacts_MM_DD_YYYY_HH_MM.csv
    Description: Contains the impact relationships that are reset.
    Possible action: No action is needed on the results in this file.

    File name: discrepancy_MM_DD_YYYY_HH_MM.csv
    Description: Contains rules that either do not match the qualification, or have modified impact attribute values.
    Possible actions:
      • Leave the impact relationships as is.
      • Reset the impact relationships for selected CIs.

    File name: NoRules_MM_DD_YYYY_HH_MM.csv
    Description: Contains the impact relationships for which no rules are defined.
    Possible actions:
      • Create new rules to define the impact relationships. This allows the Normalization Engine to set impact relationships on similar CIs.
      • Create no new rules.

    Note

    MM_DD_YYYY_HH_MM represents the date and the time in hours and minutes when the files are created. For example, after you run the datamig utility, the fixedImpacts_MM_DD_YYYY_HH_MM.csv file that is created might look like this: fixedImpacts_07_25_2011_03_43.csv.
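
On a UNIX computer, you can locate the most recent result files by listing them by modification time, assuming the default /tmp location described above:

    ls -lt /tmp/fixedImpacts_*.csv /tmp/discrepancy_*.csv /tmp/NoRules_*.csv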

Resetting the impact relationships

You can reset the impact relationships that have not been reset by the datamig utility. These impact relationships are contained in the discrepancy_MM_DD_YYYY_HH_MM.csv file.

  1. From the discrepancy_MM_DD_YYYY_HH_MM.csv file, find the impact relationships that you want to reset, and change the corresponding Overwrite column entry to Y.
  2. Save the file in CSV format.
  3. Run the following command:
    • (Windows) datamig.bat -resetI <filename>
    • (UNIX) datamig.sh -resetI <filename>
      For example, if you are using a UNIX computer and you want to reset the impact relationships listed in the discrepancy_MM_DD_YYYY_HH_MM.csv file, run the following command (a scripted variant of this step follows the procedure):
      datamig.sh -resetI discrepancy_MM_DD_YYYY_HH_MM.csv
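
The following UNIX sketch automates the same step by picking up the most recently created discrepancy file. It assumes the edited file is still in the default /tmp location and that the utility accepts a full path to the file; if it does not, copy the file to the bin folder and pass only the file name, as shown above.

    # Select the newest discrepancy file from the default /tmp location (assumption).
    DISCREPANCY_FILE=$(ls -t /tmp/discrepancy_*.csv | head -1)
    ./datamig.sh -resetI "$DISCREPANCY_FILE"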

Cleaning up the old best-practice impact normalization rules

After you run the datamig utility for all staging datasets, clean up the old best-practice impact normalization rules.

Note

If you run the datamig utility for all the staging datasets at the same time, you can expect high CPU consumption. A safer approach is to run the datamig utility for one staging dataset at a time. However, you must run this utility for each staging dataset before you delete the old best-practice impact normalization rules.

  • To clean up the old best-practice impact normalization rules, run the following command (a sketch of the overall sequence follows this list):
    • (Windows) datamig.bat -clean
    • (UNIX) datamig.sh -clean
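
A minimal sketch of the overall order of operations on a UNIX computer, run from the bin folder and assuming two example staging datasets, is shown below. The -clean option is run only once, after every staging dataset has been processed.

    ./datamig.sh -dataset BMC.ADDM
    ./datamig.sh -dataset BMC.IMPORT.CONFIG
    # Run the cleanup only after the utility has been run for every staging dataset.
    ./datamig.sh -clean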

Writing data to the production dataset

After you clean up data in the staging datasets, you must write the cleaned-up data to the production dataset (such as BMC.ASSET). BMC Atrium Core allows only the Reconciliation Engine to write to the production dataset. To write the data to the production dataset, create a standard Identification and Merge job or run an existing Identification and Merge job. For more information, see Creating a standard reconciliation job in the BMC Atrium Core documentation.
