Reconciliation - Best Practices
The CMDB reconciliation engine processes, merges, and manages CIs from different sources into a single production (golden) dataset that accurately represents the organization's environment. This data is then consumed by other ITSM applications.
Watch the following webinar recording (20:00) for an overview of reconciliation:
Configuring Reconciliation Engine
To configure your reconciliation engine correctly, see System Configurations - Best Practices.
Creating a Reconciliation Job
As shown in the following image, when creating a reconciliation job, make sure to set the following options:
Reconciliation job settings
The following table gives information about the settings available in the Job Details section in a reconciliation job:
Setting | Value | Additional Information |
Job Name | Provide any meaningful name. Usually, this is described as the Source Dataset's job. | |
Description | Provide any meaningful description. It is important to describe any details possible here so that in the future the details can be easily referenced. | |
Active | ✅️ | This toggle determines if the schedule will execute or not. Turn this off for Ad-hoc jobs. |
Process Normalized CIs Only | ✅️ | If this option is not set, the data integrity of your Production Dataset is immediately put at risk, and all other ITSM applications may suffer as a result. |
Delete Files on Exit | ✅️ | The Delete Files on Exit option will delete 0 byte log files for recon jobs. |
Exclude Orphaned Weak CIs | ✅️ | The Exclude Orphaned Weak CIs option causes the Actions of the Recon job to ignore CIs that don't have relationships. |
Use Standard Rules for Dataset | ✅️ | The Use Standard Rules for Dataset option will enforce the default ruleset for the actions. If your CMDB is mature, then you might have custom Identification Rules. In this case only, unselect this option. |
Disable Progress Bar on UI During Job Execution | ✅️ | Enabling the progress bar impacts job performance because it consumes more resources to display progress on the UI. By default, this option is selected for better job performance. |
Retain All Job Runs in History | | Almost always this option should be cleared and the retained count set to a number lower than 5. There are no scenarios where a reconciliation history record from the distant past will be relevant. |
Reconciliation job schedules
Each reconciliation job has a schedule. Schedules recur weekly: you can choose which days of the week the job runs and at what time. If you need to run a job multiple times a day, create multiple schedules for that reconciliation job. This is advantageous for predictable datasets such as BMC Discovery.
The schedule will be different for all environments and considerations should be made based on your specific use case. The following table gives information about setting schedules according to the data that is being reconciled:
Schedule | Use Case |
---|---|
No Schedule, ad-hoc Only | Bulk data loads - Large number of records such as company expansion or major system overhaul (see One-Time Bulk Data Loads below) |
Recurrence Only | Incremental data loads - Small number of records such as monthly shipment of new workstations |
Recurrence Only | Regular data loads - Data that comes from automated tools such as BMC Discovery. |
Continuous | CMDB Explorer modifications - Individual modifications made by users to update the CIs |
Use default Sandbox Job | Asset Management modifications - Individual Modifications made by users to update the Assets |
Continuous jobs run at an interval of a set number of seconds. Set this interval to 120 seconds or higher; the default is 1800 seconds (30 minutes). The interval isn't set per job, but rather globally as a system value, and it applies to all datasets.
All other settings, such as threads, memory allocation, and hardware specifications, assume the default value of 1800 seconds for continuous jobs. Continuous jobs are expected to be limited in their use in a BMC Helix CMDB environment.
The following points are some key considerations for scheduling reconciliation jobs:
- Never allow two reconciliation jobs to overlap with one another in the same source or target dataset. Overlapping jobs can overwrite data, cause identification errors, and create duplicate CIs.
- Never change the name of an out-of-the-box reconciliation job, because other applications refer to these default names to complete some of their tasks or workflows.
- Do not run reconciliation jobs during prime (heavy use) hours.
- Always test new jobs in a non-production environment with similar hardware, configurations, and data.
One-Time Bulk Data Loads
There is one exception to how datasets are handled: large, one-time imports of records. This might occur when you are performing the initial data load for a new BMC Helix CMDB, or after a company merger or acquisition, where a large dataset must be consumed only once. For such scenarios, perform the following steps:
- Create a single Initial Dataset specifically for this load. Do not load to this dataset again.
- Load your data source into this Initial Dataset.
- Create a second Active Dataset to be used as the live dataset going forward. This dataset can be used to load additional data if additions are made later.
- Create a single reconciliation job with a single Copy activity. No schedule is needed. Run this job to copy all records from the Initial Dataset to the Active Dataset.
- Treat the Active Dataset like any other dataset, and create normalization and reconciliation jobs per company policy.
In such scenarios, since the source is available only once, this method gives you a clean dataset of the original data that can be used later to compare against and diagnose data integrity issues.
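The steps above can be sketched as a minimal Python model, with datasets represented as plain dictionaries keyed by instance ID. The dataset names and record fields are illustrative assumptions, not actual BMC identifiers.

```python
# Sketch of the one-time bulk load flow, modeling datasets as dicts.
def copy_activity(source, target):
    """Copy every record from the source dataset into the target dataset."""
    for instance_id, record in source.items():
        # A Copy activity duplicates the record; the source stays untouched.
        target[instance_id] = dict(record)

initial_dataset = {  # loaded once, then never written to again
    "CI-1": {"Name": "web-server-01", "SerialNumber": "SN-1001"},
    "CI-2": {"Name": "db-server-01", "SerialNumber": "SN-1002"},
}
active_dataset = {}  # the live dataset used from now on

copy_activity(initial_dataset, active_dataset)
print(len(active_dataset))  # 2 - the Active Dataset now holds the original records
```

Because the Initial Dataset is never written to again, it remains a pristine copy of the original load for later comparison.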
To learn more about creating reconciliation jobs, see Creating a Reconciliation Job.
Activities in reconciliation jobs
Depending on your requirement, you can add the following seven activities to a reconciliation job. Identify, Merge, and Purge are critical activities and are part of almost every reconciliation job:
- Identify
- Merge
- Purge
- Execute
- Copy
- Compare
- Delete
Understanding the Identify activity
Identification is a process where two datasets are evaluated to find CIs that already exist in both. This is almost always the first activity in any reconciliation job. To understand the Identify process, consider a scenario where an Operating System record is found in three different data sources:
This record was imported by using a spreadsheet collected by Atrium Integrator. Since it has not been reconciled yet, it does not have a ReconciliationId. In this example (see the image below), only the Operating System was imported. It has no parent relationship to another CI, which would usually be a Computer System. It also does not have a TokenId (which is used by BMC Discovery) and doesn't have a truly unique name or serial number.
When an Identify activity is run against this dataset, the following steps occur:
- The CI is evaluated for its Normalization Status.
- Since this CI was never normalized and approved, the record is skipped.
This record was created by an automated discovery product during a routine scan. Since it has not been reconciled yet, it does not have a ReconciliationId. In this example (see the image below), the Operating System CI was imported along with its parent Computer System CI. The Operating System does not have a TokenId (which is used by BMC Discovery) and doesn't have a truly unique name or serial number, although the Computer System does have these values.
When an Identify activity is run against this dataset, the following steps occur:
- The CI is evaluated for its Normalization Status. Since it was approved, the system proceeds to the next step.
- The CI is evaluated for a parent relationship. In this case, it has a parent Computer System, which makes the Operating System CI a Weak CI.
- The Identification Rules for the parent Computer System CI class are now evaluated. Because the parent already has a ReconciliationId, no further evaluation of the parent CI is needed.
- The reconciliation engine now moves on to reconciling the children of the parent. In this scenario, it looks at the Operating System record.
- The reconciliation engine then checks whether any Operating System records in the production dataset have the same TokenId and are also related to the Computer System. No records are returned, because TokenId is 0 and no existing relationship to the Computer System record exists.
- The reconciliation engine then checks whether any Operating System records have the same Name and Serial Number. This also fails, since there are no existing records.
- After all rules have been evaluated, a ReconciliationId is generated and written to the Operating System CI. The parent and the relationship are unaffected and are ready for the rest of the reconciliation process.
This record was created manually by Asset Management users. Since it has not been reconciled yet, it does not have a ReconciliationId. In this example, as shown in the following image, the Operating System CI was imported along with its parent Computer System CI. The Operating System does not have a TokenId (which is used by BMC Discovery) and doesn't have a truly unique name or serial number. However, the Computer System does have these values.
When an Identify activity is run against this dataset, the following steps occur:
- The CI is evaluated for its Normalization Status. Since it was approved, the system proceeds to the next step.
- The CI is evaluated for a parent relationship. In this case, it has a parent Computer System, which makes the Operating System CI a Weak CI.
- The Identification Rules for the parent Computer System CI class are now evaluated. The first Computer System Identification Rule is based on TokenId.
- This results in a search on the Production Dataset. The existing Computer System record is found, because TokenId produces a match.
- The BMC.ASSET.SANDBOX dataset's Computer System ReconciliationId value is now updated with the Production Dataset's Computer System ReconciliationId value.
- The reconciliation engine now moves on to reconciling the children of the parent. In this scenario, it looks at the Operating System record.
- The reconciliation engine then checks whether any Operating System records in the production dataset have the same TokenId and are also related to the Computer System. No records are returned, because TokenId is 0 and no existing relationship to the Computer System record exists.
- The reconciliation engine then checks whether any Operating System records have the same Name and Serial Number. This also fails, since there are no existing records.
- After all rules have been evaluated, a ReconciliationId is generated and written to the Operating System CI. The parent and the relationship are unaffected and are ready for the rest of the reconciliation process.
This set of scenarios highlights a key reason why multiple reconciliation jobs should not be run at the same time.
Based on the examples, the following image represents the three scenarios:
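The identification flow in these three scenarios can be sketched in Python. This is a simplified model, assuming dict-based records, an ordered list of identification rules, and illustrative attribute names; it is not the reconciliation engine's actual implementation.

```python
import itertools

_id_counter = itertools.count(1)  # stand-in for the engine's ID generator

def identify(ci, production, rules):
    """Sketch of the Identify activity for a single source-dataset CI.

    `rules` is an ordered list of attribute-name tuples, evaluated with the
    lowest execution order first; the first matching rule wins.
    """
    # Step 1: skip CIs that were never normalized and approved.
    if ci.get("NormalizationStatus") != "Approved":
        return None
    # Step 2: evaluate identification rules in execution order.
    for attrs in rules:
        for prod in production:
            # Empty or zero attribute values (like TokenId = 0) cannot match.
            if all(ci.get(a) and ci.get(a) == prod.get(a) for a in attrs):
                ci["ReconciliationId"] = prod["ReconciliationId"]  # reuse match
                return ci["ReconciliationId"]
    # Step 3: no rule matched - generate a new ReconciliationId.
    ci["ReconciliationId"] = "RE-%d" % next(_id_counter)
    return ci["ReconciliationId"]

# Illustrative rules: TokenId first (most restrictive), then Name + SerialNumber.
rules = [("TokenId",), ("Name", "SerialNumber")]
production = [{"TokenId": "TK-9", "ReconciliationId": "RE-PROD-1"}]
```

An unapproved CI is skipped (scenario 1), a CI matching a production TokenId reuses the production ReconciliationId (scenario 3's parent), and a CI matching no rule receives a newly generated ID (the Operating System in scenarios 2 and 3).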
To add an Identify activity to a reconciliation job
- In the reconciliation job creation screen, click the Add Activity button.
- From the list, select Identify.
When creating an identification activity, choose from the following settings:
Setting | Value | Additional Information |
Name | Provide any meaningful name. Usually, this is described as the Source Dataset's job. | |
Namespace | ALL | |
Sequence | Set the order in which this activity will run. This sequence number can range from 0 to 1000. The lowest numbered activities are performed first. Tip: Identification activities should always be run before Merge or Purge activities. | |
Status | ✅️ | Enable or disable running this activity. Status indicates if this activity will be run as part of the reconciliation job. This setting is useful when you do not want to run an activity for a short period. For example, if an audit is being conducted you might not want a Purge job to run for the duration of the audit or as per requirement. You can enable it after the audit. |
Continue on Error | ✅️ | Select this option to continue running the job even with certain errors. This option should almost always be selected unless you are debugging a specific job. |
Type | IDENTIFY | Use this to verify the activity that you are configuring. This is a static field that reflects the type of activity selected in the Add Activity button. |
Source Dataset | Select the source dataset that will be identified against the production dataset. | |
Production Dataset | BMC.ASSET | Select the target dataset to identify CIs against the CIs in the source dataset. This should always point to either your production (golden) dataset or a staging dataset. |
Selected Dataset | View and confirm the source dataset and the identification rules of that dataset. | |
Generate IDs | ✅️ | Select this option to generate ReconciliationIds for any records that do not yet have a ReconciliationId. |
Exclude Sub-Classes | ||
Qualification | ✅️ |
Identification Rules
To learn how to configure the identification rules, see Configuring-reconciliation-identification-rules.
Some key things to keep in mind when configuring identification rules:
- Add database indexes to attributes used in Identify rules. Consult your database administrator to determine which indexes would help.
- Regularly review your identification rules to make sure that they are still appropriate for your environment, and spot-check instances to confirm that CIs are being identified correctly.
- Create multiple sets of rules for the same class.
- Use a different ruleset for non-BMC Discovery applications, including manual data entry.
- Many rules can be added per ruleset. Rules are evaluated in sequential order, with the lowest-ranked rule run first, so the most restrictive rule should be ranked first. Once an identification is made, the remaining rules are not evaluated.
- When adding many rules per ruleset, ensure the execution order is unique for every rule.
Updating the identification rules
Use the Failed Identification Report as a tool to improve your identification rules and build your CMDB around your data structures and processes.
For example, if a data entry clerk repeatedly forgets to enter a serial number even though company policy requires it, the CMDB or ITSM administrator should create a workflow that makes the SerialNumber field required for new or existing records.
This workflow uses the power of BMC Helix CMDB and BMC Helix ITSM to enforce company policies.
Understanding the Merge activity
In the Merge activity in a reconciliation process, the data from a source dataset is merged into the production dataset where it either updates an existing CI or creates a new CI in the production dataset. Usually, an Identify activity will precede a Merge activity in a reconciliation job.
To understand the Merge process, here is a continuation of the same scenario where an Operating System record is found in three different data sources:
This CI record is imported by using a spreadsheet collected by Atrium Integrator. It failed to identify with any other record, so it doesn't have a ReconciliationId. In this example, as shown in the following image, only the Operating System record is imported.
When a Merge activity is run against this dataset, the following steps occur:
- The reconciliation engine looks for all records with a ReconciliationId.
- Since this record doesn't have one, nothing is done with it.
This CI is created by an automated discovery product during a routine scan of the CI. It is identified and the reconciliation ID is populated from the value in the golden dataset. In this example, as shown in the following image, only the Operating System is imported.
When a Merge activity is run against this dataset, the following steps occur:
- The reconciliation engine looks for all records with a ReconciliationId.
- Because this CI contains a ReconciliationId, it is pulled into the batch to be processed.
- The production dataset is evaluated to see if a CI record with that ReconciliationId exists. If so, the record is updated. If not, a new record is created.
- The relationships for the CI are then evaluated. All parent and child CIs are collected in the source dataset.
- Each parent and child CI is evaluated to determine if it exists in the database. If it does, the relationship is created; otherwise it is not.
This CI record is manually created by Asset Management users. It is Identified and the ReconciliationId is populated from the value in the golden dataset. In this example, as shown in the following image, only the Operating System is imported.
When a Merge activity is run against this dataset, the following steps occur:
- The reconciliation engine looks for all records with a ReconciliationId. Since this CI contains a ReconciliationId, it is pulled into the batch to be processed.
- The production dataset is evaluated to see if a record with that ReconciliationId exists. If so, the record is updated. If not, a new record is created.
- The relationships for the CI are then evaluated. All parent and child CIs are collected in the source dataset.
- Each parent and child CI is evaluated to determine if it exists in the database. If it does, the relationship is created; otherwise it is not.
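The update-or-create behavior described in these scenarios can be sketched as follows. This is a simplified model assuming dict-based records, with the production dataset keyed by ReconciliationId for illustration; relationship handling is omitted for brevity.

```python
def merge_activity(source, production):
    """Sketch of the Merge activity: records without a ReconciliationId are
    skipped; identified records update an existing production CI or create
    a new one."""
    for record in source:
        recon_id = record.get("ReconciliationId")
        if not recon_id:
            continue  # unidentified records are ignored by Merge
        if recon_id in production:
            production[recon_id].update(record)  # update the existing CI
        else:
            production[recon_id] = dict(record)  # create a new CI

production = {"RE-1": {"ReconciliationId": "RE-1", "Name": "old-name"}}
source = [
    {"ReconciliationId": "RE-1", "Name": "new-name"},  # identified: updates
    {"Name": "orphan-os"},                             # no id: skipped
    {"ReconciliationId": "RE-2", "Name": "new-ci"},    # identified: creates
]
merge_activity(source, production)
```

After the run, RE-1 carries the merged name, RE-2 is newly created, and the unidentified record never reaches the production dataset, mirroring the three scenarios above.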
To add a Merge activity to a reconciliation job
- In a reconciliation job creation screen, click the Add Activity button.
- From the list, select Merge.
When creating a Merge activity, choose from the following settings:
Setting | Value | Additional Information |
Name | Provide any meaningful name. Usually, this is described as the Source Dataset's job. | |
Namespace | ALL | |
Sequence | Set the order in which this activity will run. This sequence number can range from 0 to 1000. The lowest numbered activities are performed first. Tip: Merge activities should always be run after Identify activities. | |
Status | ✅️ | Enable or disable running this activity. Status indicates if this activity will be run as part of the reconciliation job. This setting is useful when you do not want to run an activity for a short period. For example, if an audit is being conducted you might not want a Purge job to run for the duration of the audit or as per requirement. You can enable it after the audit. |
Continue on Error | ✅️ | Select this option to continue running the job even with certain errors. This option should almost always be selected unless you are debugging a specific job. |
Type | MERGE | Use this to verify the activity that you are configuring. This is a static field that reflects the type of activity selected in the Add Activity button. |
Source Dataset | Select the source dataset that will be merged into the production dataset. | |
Production Dataset | BMC.ASSET | Select the target dataset into which the CIs from the source dataset are merged. This should always point to either your production (golden) dataset or a staging dataset. |
Selected Dataset | View and confirm the source dataset and the precedence values of that dataset. | |
Qualification: Use all Classes and Instances | ✅️ | |
Precedence Association Set | View and manage the precedence rules for the selected datasets. These precedence rules are applied for this Merge activity. | |
Merge Order | By Class in separate transactions | Set the order in which the CIs are merged. Each option has its own benefits, but for most use cases, selecting By Class in separate transactions will yield the best performance. |
Include Unchanged CIs | | Specify whether unchanged CIs are also merged into the target dataset. Selecting this option forces the Merge activity to evaluate and merge all CIs regardless of when they were last modified. This makes the job take much longer, but in specific scenarios it might correct some data issues. Usually this option should be cleared. |
Understanding Precedence Rules
Precedence rules apply when you select the Precedence Association Set in a Merge activity. This is a set of rules that determines the ultimate source of authority for datasets, CIs, and attributes. The default qualifications are usually good enough for most customers that use BMC products and have comprehensive, well-followed data entry policies.
Just like the other reconciliation activities, the rules put in place here make a huge difference to the outcome. The Merge activity honors a set of precedence rules for each dataset, applied at the dataset, class, and attribute hierarchy levels. If a precedence value of 100 is set for the dataset, that value applies globally in all jobs for the dataset unless a precedence for a class or an attribute in that dataset is explicitly defined.
If no value is defined, the default value of 100 is used. A higher value indicates a higher source of truth. Acceptable values are between 0 and 1000.
The starting point should be to rank your datasets in order from most authority to least authority. For example, the following list shows a sample ranking order:
- BMC Discovery
- Asset Management Sandbox
- Bulk Data Sandbox (DMT)
- Production Dataset
In this example, you would start by leaving the Production Dataset at a default value of 100. Then, set the Bulk Data Sandbox to 300, Asset Management Sandbox to 600, and Discovery to 900. This order allows room to make specific modifications as desired.
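The most-specific-wins resolution (attribute over class over dataset, with a default of 100) can be sketched like this. The dataset and class names below are illustrative labels for the sample ranking above, not fixed BMC identifiers.

```python
def effective_precedence(dataset, ci_class=None, attribute=None, rules=None):
    """Resolve the precedence used for a value during Merge, checking the
    most specific level first: attribute, then class, then dataset."""
    rules = rules or {}
    for key in ((dataset, ci_class, attribute),   # attribute-level override
                (dataset, ci_class, None),        # class-level override
                (dataset, None, None)):           # dataset-level value
        if key in rules:
            return rules[key]
    return 100  # default when nothing is explicitly defined

sample_rules = {
    ("BMC Discovery", None, None): 900,
    ("Asset Management Sandbox", None, None): 600,
    ("Bulk Data Sandbox", None, None): 300,
    # Hypothetical override: discovery never wins the Owner attribute.
    ("BMC Discovery", "BMC_ComputerSystem", "Owner"): 0,
}
```

With these rules, discovery data wins most attributes at 900, but the Owner attribute override drops its authority to 0 for that one attribute, which is exactly the "room to make specific modifications" the spaced-out ranking leaves.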
For most organizations, the out-of-the-box rules are a good place to start, but they will not be sufficient for a mature CMDB. Over time, these rules should grow slowly to incorporate the data found from all of your data sources and your organization's level of trust in each of them.
Understanding the Purge activity
Purge activities delete CIs based on the Mark As Delete attribute value in a dataset. This attribute is used to automatically delete CIs from lower datasets and ensure those changes are propagated correctly throughout the system.
To understand the Purge process, here is a continuation of the same scenario where an Operating System record is found in three different data sources:
This CI record is imported by using a spreadsheet collected by Atrium Integrator. It failed to identify with any other record, so it doesn't have a ReconciliationId. In this example, as shown in the following image, only the Operating System record is imported. The source of the data indicates that this record should be set to Mark As Delete = "Yes".
When a Purge activity is run against this dataset, the following steps occur:
- The reconciliation engine evaluates all records to look for instances where the attribute value of Mark as Delete is Yes.
- Since this CI record does not meet the job criteria, the reconciliation engine skips purging this CI.
The source of the data indicates that this should be set to 'Mark as Delete' = "Yes". When a Purge activity is run against this dataset, the following steps occur:
- The reconciliation engine evaluates all records to look for instances where the attribute value of Mark as Delete is Yes.
- Since this CI record does not meet the job criteria, the reconciliation engine skips purging this CI.
The source of the data indicates that this should be set to 'Mark as Delete' = "Yes". When a Purge activity is run against this dataset, the following steps occur:
- The reconciliation engine evaluates all records to look for instances where the attribute value of Mark as Delete is Yes.
- Since this record meets the purge activity criteria, the reconciliation engine deletes this CI.
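The purge evaluation in these scenarios reduces to a simple filter, sketched below with dict-based records and an illustrative attribute name for Mark As Delete.

```python
def purge_activity(dataset):
    """Sketch of the Purge activity: remove every CI whose Mark As Delete
    value is "Yes" and keep the rest."""
    return [ci for ci in dataset if ci.get("MarkAsDelete") != "Yes"]

dataset = [
    {"Name": "os-record-1", "MarkAsDelete": "Yes"},  # qualifies: deleted
    {"Name": "os-record-2", "MarkAsDelete": "No"},   # kept
    {"Name": "os-record-3"},                         # attribute unset: kept
]
remaining = purge_activity(dataset)
```

Only the record explicitly marked "Yes" is removed; records with the attribute cleared or unset survive, which is why correct precedence on Mark As Delete matters so much.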
To add a Purge Activity to a Reconciliation Job
- In a reconciliation job creation screen, click the Add Activity button.
- From the list, select Purge.
When creating a Purge activity, choose from the following settings:
Setting | Value | Additional Information |
Name | Provide any meaningful name. Usually, this is described as the Source Dataset's job. | |
Namespace | ALL | |
Sequence | Set the order in which this activity will run. This sequence number can range from 0 to 1000. The lowest numbered activities are performed first. Tip: Purge activities should always be run after Identify and Merge activities. | |
Status | ✅️ | Enable or disable running this activity. Status indicates if this activity will be run as part of the reconciliation job. This setting is useful when you do not want to run an activity for a short period. For example, if an audit is being conducted you might not want a Purge job to run for the duration of the audit or as per requirement. You can enable it after the audit. |
Continue on Error | ✅️ | Select this option to continue running the job even with certain errors. This option should almost always be selected unless you are debugging a specific job. |
Type | PURGE | Use this to verify the activity that you are configuring. This is a static field that reflects the type of activity selected in the Add Activity button. |
Source Dataset | Select the source dataset that will be evaluated for CI deletions. | |
Selected Dataset | View and confirm the dataset that you selected. | |
Qualification: Use all Classes and Instances | ✅️ | Define the CIs that must be deleted by specifying qualification criteria. |
Purge Instances | Identified and Unidentified | Select the CIs that are eligible for being deleted. Almost always it will be set to Identified and Unidentified. This will ensure the data intended to be deleted is actually removed regardless of its Identify status. |
Verify Soft Deleted in Target Dataset | | If this option is selected, the source dataset record is not deleted until the Mark As Delete flag is set in the production dataset. This usually is not selected unless you have a Purge activity running out of sync with the Merge and Identify activities. |
Usually, the Purge activity is the last activity performed.
For organizations with data retention requirements, this step can be concerning. Here are three scenarios where data retention should be considered before running a Purge activity:
Scenario | Description |
---|---|
Requirement to retain all data for X amount of years | In such situations, only run purge jobs against the source datasets and leave all data in the production dataset. Use archiving to manage this data with your retention policy. In addition, ensure that the Mark as Delete attribute for any source dataset is set at a lower precedence value than the production dataset. This ensures that no data source is able to delete the required data. |
Requirement to retain some data for X amount of years | If you want to retain only certain types of data, create your own qualification ruleset and use it to purge data from the production dataset. This ensures that you keep the data that you need, and only the data that matches the qualification criteria is purged. Additionally, you might want some data to be deleted from one data source but not from another. For example, you might not want BMC Discovery to automatically mark anything for deletion, but would allow Asset Management users to change the status and mark records for deletion. This can be achieved by ensuring the specific datasets have the appropriate precedence rules to correctly rank their authority. In the example in the following image, we keep the Computer System CIs but delete the IP Endpoint CIs: |
No requirement to retain data for any duration | If your organization has no requirements to keep data, ensure that the Mark As Delete attribute is configured correctly by using the precedence rules. Create a Purge activity for each source dataset after a Merge activity has completed. Then, create a new reconciliation job with only a Purge activity and run it against the production dataset. |
Reviewing Data Integrity
The last part of reconciliation isn't an ordered step of the reconciliation engine at all. Instead, it is about reviewing and auditing the data in the CMDB.
Even the most mature CMDB with the most competent administrators needs regular review to ensure its rules, jobs, and systems are working as required. Usually, after BMC Helix CMDB is configured and matures over time, these review reports become uneventful. However, if they're ignored for a period of time, the data will drift to a point where it can no longer provide your ITSM applications with accurate data.
Here are some key reports that should be generated to validate reconciliation is being performed correctly:
Duplicate CIs Report
A key component of CMDB Administration is ensuring that your CMDB doesn't contain duplicate records. As a rule of thumb, the following qualification should be a starting point for all organizations:
This type of qualification searches only the golden dataset and evaluates the Serial Number and Manufacturer attribute values against all other CIs. If the search returns any results, you can assume they are duplicate CI records.
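The same check can be sketched in Python: group golden-dataset CIs by (SerialNumber, Manufacturer) and report any group with more than one member. Record fields are illustrative assumptions.

```python
from collections import defaultdict

def find_duplicates(golden_dataset):
    """Group CIs by (SerialNumber, Manufacturer); any group with more than
    one member is a candidate duplicate needing manual review."""
    groups = defaultdict(list)
    for ci in golden_dataset:
        key = (ci.get("SerialNumber"), ci.get("Manufacturer"))
        if all(key):  # skip CIs missing either attribute
            groups[key].append(ci["Name"])
    return {k: names for k, names in groups.items() if len(names) > 1}

golden = [
    {"Name": "ws-0001", "SerialNumber": "SN-1", "Manufacturer": "Acme"},
    {"Name": "ws-0001-dup", "SerialNumber": "SN-1", "Manufacturer": "Acme"},
    {"Name": "ws-0002", "SerialNumber": "SN-2", "Manufacturer": "Acme"},
]
dupes = find_duplicates(golden)
```

Any group the sketch returns is a candidate only; as the checks below note, seemingly duplicate CIs can turn out to be unique on closer inspection.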
Perform the following checks to verify if the CIs are duplicates:
- Sometimes seemingly duplicate CIs turn out to be unique CI records after closely reviewing more information about them.
- Review the CI history to see when it first appeared as a duplicate. It might be a Computer System that was moved from one datacenter to another and was discovered by a different appliance, causing one of the identification rules to not match as intended. It is also possible that someone manually entered bad or incomplete data.
- Evaluate the identification rules and update them to identify such scenarios. There might be no improvement needed if this was a mistake or a one-time scenario; it might also turn out to be a repeated issue that needs to be corrected.
- Determine which record is the one true record. As a rule, it should always be the oldest record, and everything created after that date is a duplicate of the original.
- Mark the duplicate record for deletion.
After these steps, clean up the BMC Helix ITSM history.
Most of the time, when there are duplicates, both CI records will have other relationships. For example, if a help desk agent couldn't find someone's workstation, they create another one to relate to their Incident ticket, which creates a duplicate. The history of the CI is then saved on the original record that the agent wasn't able to find, but the new Incident is related to the duplicate, creating a break in the history. Correct this issue by moving the relationships. The problem can also manifest in updated network configurations, installed software, and other relationships, so be sure to check all the relationships.
These steps must be taken for each instance of a duplicate CI that appears on a report. This report should be reviewed at least weekly; otherwise, it might return a large number of records to remediate, and the remediation might take longer than is usually acceptable, causing a snowball effect that is hard to overcome.
Failed Identification Report
The most important part of the reconciliation process, identification, needs to be monitored regularly. This can be done by running a query like the following against your CMDB:
This query returns any record that has not been reconciled or has failed identification, for example, duplicate CIs that match the identification qualification.
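As a rough Python sketch of that check: filter for records that never received a ReconciliationId. The attribute name is illustrative; the assumption here is that an empty or zero reconciliation identity marks an unidentified record.

```python
def failed_identification(dataset):
    """Return records with no ReconciliationId set - either not yet
    reconciled or failed identification (zero/empty treated as unset)."""
    return [ci for ci in dataset if not ci.get("ReconciliationId")]

dataset = [
    {"Name": "os-ok", "ReconciliationId": "RE-7"},
    {"Name": "os-unidentified", "ReconciliationId": 0},
    {"Name": "os-new"},
]
failures = failed_identification(dataset)
```

Each record the filter returns should then be investigated individually, as described below.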
Such instances must be investigated individually for each occurrence, usually by reviewing the reconciliation engine logs and looking for the specific failure. Each solution and situation might be different.
Data Sizing Report
Another key performance metric is the size of the database used by the CMDB, which needs to be reviewed regularly. To do so, either consult a DBA or review the following BMC Communities post for a tool built to monitor sizing inside AR System Server: Helix ITSM Support: AR System Schema Overview Console.
Monitor the size of the CMDB forms as needed to determine how your data usage is tracking.
Some key scenarios to watch out for are:
- Unusual or unexplained growth or decline in a specific form, or a group of forms, used in the CMDB. For example, it is usual to have 10,000 new Base Element records created each month, but you notice 100,000 were created last month. This needs to be investigated and reviewed.
- Any unusual difference in proportion across a group of forms. For example, there are 10,000 Computer System records but 1,000,000 IP_Endpoint records. That would mean 100 IP addresses for each Computer System, which could be unusual.
- Unexplained stagnation on a form. For example, the Computer System form has had 10,000 new records each month, but the last month had 0 new records. This would indicate issues creating new records or an unexplained deletion of records.
The above scenarios are key indicators of issues with marking a record for deletion, purging records, or archiving records.
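A simple month-over-month check like the following can surface the first and third scenarios automatically. The factor of 5 is an illustrative threshold, not a BMC recommendation.

```python
def flag_growth_anomalies(monthly_new_records, factor=5):
    """Flag month-over-month anomalies in new-record counts: sharp growth
    or decline (beyond `factor`x) and sudden stagnation."""
    flags = []
    for prev, cur in zip(monthly_new_records, monthly_new_records[1:]):
        if prev > 0 and cur == 0:
            flags.append((prev, cur, "stagnation"))
        elif prev > 0 and (cur / prev >= factor or cur / prev <= 1 / factor):
            flags.append((prev, cur, "unusual growth or decline"))
    return flags

# 10,000 new records a month, then a spike to 100,000, then none at all.
flags = flag_growth_anomalies([10_000, 10_000, 100_000, 0])
```

Both the spike and the stagnation months are flagged for investigation, while the steady months pass silently.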
This needs to be reviewed and remediated immediately. Not only can it cause the CMDB to perform poorly, it can cause additional issues with the database and, by extension, the other ITSM applications, as well as common functional issues such as relating CIs to Change Requests. You might have to sift through a long list of logs, deprecated CIs, or lost data. Each of these should be fully investigated and tracked to ensure the health of your CMDB data.
This should be reviewed no less than monthly. The review should probably extend to the entire Helix ITSM system, because data issues in Incident Management can cause issues in the CMDB, just as CMDB issues can cause problems in Incident Management. The remediation for each issue will be unique to the scenario.
Performance Report
The last report reviews the performance of the system, a key component of any modern application. If the application doesn't perform well, users will either refuse to use it or find shortcuts in the process to avoid the pain of waiting.
One of the best ways to review and evaluate the performance of your system is to review the built-in RLS Autodiscover form. To learn more, see Improving Performance by Using RLS Algorithms.
When reviewing this form you can evaluate all of the longest running SQL statements in the entire AR System. This can be used for not just CMDB forms, but all forms in the system.
Start by reviewing the longest-running SQL statements in the system and work your way down to a pre-agreed threshold. By default, the recording threshold is 5 seconds. As a rule, in an active and busy system, anything longer than 15 seconds should be remediated immediately, and anything between 5 and 14.99 seconds can be resolved at the discretion of the system admin.
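Those thresholds amount to a three-way triage, sketched here for clarity; the bucket labels are illustrative, not BMC terminology.

```python
def triage_sql(duration_seconds):
    """Bucket a recorded SQL run time using the thresholds above:
    15 s or longer is remediated immediately, 5 to 14.99 s is left to the
    admin's discretion, and anything under the 5 s recording threshold
    needs no action."""
    if duration_seconds >= 15:
        return "remediate immediately"
    if duration_seconds >= 5:
        return "admin discretion"
    return "no action"
```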
The remediation for each issue will be unique for each scenario.
Some solutions might include adding indexes to key forms, changing RLS settings, better data management, and implementing new processes. Other times you might realize it was just a bad query from an end user, such as 'Status' = "Closed".
This should be reviewed no less than monthly and can be extended to include the entire ITSM system, because such performance issues can impact other applications and consumers of CMDB data as well.