Best practices to convert nonstandard integrations to standard integrations
Use this topic to identify and remediate the integrations that do not meet the SaaS standards or cause performance issues within the SaaS environment.
A nonstandard integration refers to a custom-built connection between BMC Helix applications and external systems or applications for which BMC does not provide out-of-the-box connectors or preconfigured integration options.
Unlike standard integrations, which use predefined connectors to integrate with popular third-party tools or BMC applications, nonstandard integrations require custom development efforts, advanced API configurations, or specialized middleware to meet unique business requirements.
Some examples of nonstandard integrations include reading or writing data from a file system, executing custom API programs, executing non-BMC-provided binaries, and so on.
For information about best practices to convert nonstandard customizations to standard customizations, see Best-practices-to-convert-nonstandard-customizations-to-standard-customizations.
Custom integrations within the system can come from various sources, including:
- Atrium Integrator jobs
- Direct REST or SOAP Web service calls (inbound and outbound)
- Webhook-based integrations
- Java API-based integrations
- Direct database connections via ODBC
Best practices to identify and convert nonstandard integrations by using the UDM Job Log
The UDM Job Log provides critical details about jobs executed through the Atrium Integrator.
Use this log to identify jobs with the following characteristics:
- Jobs that run longer than 5 minutes and update many records during each execution. These jobs can overload the system.
- Jobs that take an extended time to complete. Assess their performance and make sure they are scheduled during off-peak hours.
- Jobs that run for a long duration while updating a large volume of data. Determine whether they can be executed during non-peak hours.
Use this information to assess whether specific queries require index optimization:
- To determine job durations, use the End Date to identify when a job started and the Log Date to identify when it completed.
Run the following queries to monitor jobs that exceed 5 minutes in duration and those updating more than a specified number of rows:
SQL query for job runtime (last 90 days):
SELECT Job_Name, Job_ID, Log_Date, End_Date
FROM UDM_JobLog
WHERE Log_Date - End_Date > 300 AND Log_Date > CURRENT_DATE - INTERVAL '90 DAY'
ORDER BY Job_Name;
SQL query for jobs updating more than X rows:
SELECT Job_Name, Job_ID, Lines_Updated
FROM UDM_JobLog
WHERE Lines_Updated > 10000;
- Use the job names from the preceding output to review their configurations in Atrium Integrator. This review should focus on how data is queried to identify any necessary index reviews.
Additionally, make sure that the jobs do not perform updates via ODBC and that all updates use the BMC Helix ITSM API (a combined query sketch follows).
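To combine both signals in a single pass, you can query for jobs that are simultaneously long-running and high-volume. This is a minimal sketch using the UDM_JobLog columns shown above, with the same illustrative thresholds:
SQL query for jobs that are both long-running and high-volume (last 90 days):
-- Flags jobs exceeding 5 minutes of runtime and 10000 updated rows.
SELECT Job_Name, Job_ID, (Log_Date - End_Date) AS duration_seconds, Lines_Updated
FROM UDM_JobLog
WHERE Log_Date - End_Date > 300
AND Lines_Updated > 10000
AND Log_Date > CURRENT_DATE - INTERVAL '90 DAY'
ORDER BY Lines_Updated DESC;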
Best practices for long-running APIs and SQLs
To identify long-running jobs originating from REST and SOAP calls, consult the system forms displaying long execution times for API calls and SQL queries.
- Use the Server Statistics: Longest APIs form to check server API calls that exceed 2 seconds in runtime.
Perform the following steps to identify long-running APIs:
- Log in to Mid Tier.
- Open the Server Statistics: Longest APIs form in search mode.
- In the Start Date field, enter the date from one month ago, prefixed with >.
For example: >11/12/2024
- Perform the search.
- Export the search results by using a report, or if the result set is small, sort the results directly in the UI.
- Sort the data to find patterns by:
- Parameters (this field contains the form name), as shown in the example
- User name (there can be specific integration user names)
- Client type (there can be a specific integration client type, such as UDM)

- (Alternative method) Use SQL queries against the Server Statistics: Longest APIs records to check server API calls that exceed 2 seconds in runtime.
Run the following query to filter the results to Unidentified or Web Service client types with durations over 10 seconds, excluding AR_ESCALATOR and Remedy Application Service calls:
SQL query for long-running API calls (last 90 days):
SELECT *
FROM Server_Statistics__Longest_API
WHERE (Client_Type = 'Webservice' OR Client_Type = 'Unidentified Client')
AND Elapsed_Time > 10000
AND Timestamp > CURRENT_DATE - INTERVAL '90 DAY'
ORDER BY Elapsed_Time;
Run the following query to analyze the Server Statistics: Longest SQLs using the same criteria and filtering processes to pinpoint problematic SQL queries related to integrations:
SQL query for long-running SQL calls (last 90 days):
SELECT *
FROM Server_Statistics__Longest_SQL
WHERE (Client_Type = 'Webservice' OR Client_Type = 'Unidentified Client')
AND Elapsed_Time > 10000
AND Timestamp > CURRENT_DATE - INTERVAL '90 DAY'
ORDER BY Elapsed_Time;
- Run the following queries to get insights into the login IDs associated with the long-running calls. The naming conventions of integration-related login users can help you identify which integrations are involved:
SQL query for user-grouped long-running SQLs:
SELECT COUNT(*), User_x
FROM Server_Statistics__Longest_SQL
WHERE (Client_Type = 'Webservice' OR Client_Type = 'Unidentified Client')
AND Elapsed_Time > 10000
GROUP BY User_x;
SQL query for user-grouped long-running APIs:
SELECT COUNT(*), User_x
FROM Server_Statistics__Longest_API
WHERE (Client_Type = 'Webservice' OR Client_Type = 'Unidentified Client')
AND Elapsed_Time > 10000
GROUP BY User_x;
- Review the heavy queries on both standard and custom forms to determine whether new indexes are needed. Flag queries that take longer than 10 seconds and evaluate whether existing indexes are used effectively and whether new indexes should be introduced (a grouping sketch follows this list).
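To separate recurring offenders from one-off spikes, group the long-running SQL records by statement text. This is a minimal sketch; SQL_x is an assumed name for the column that stores the statement text, so verify the actual column name on your system:
SQL query to group long-running SQLs by statement text (assumed column name):
-- Groups long-running SQL records to find recurring statements;
-- SQL_x is an assumed column name for the captured statement text.
SELECT SQL_x, COUNT(*) AS occurrences, MAX(Elapsed_Time) AS max_elapsed
FROM Server_Statistics__Longest_SQL
WHERE Elapsed_Time > 10000
GROUP BY SQL_x
ORDER BY occurrences DESC;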
Best practices for service interface forms in BMC Helix ITSM Applications
BMC Helix ITSM applications utilize specific forms to facilitate integration processes for ticket creation and updates.
The primary forms involved are:
- HPD:ServiceInterface: This form is mainly utilized for integrating with event management systems.
- HPD:IncidentInterface_Create: This form is employed for various integration purposes.
These forms capture crucial data and provide insights into the login accounts used for integration activities. Additionally, the HPD:ServiceInterface form can help identify integrations related to BMC Helix Operations Management (BHOM) and TrueSight Operations Management (TSOM). You can derive performance metrics from these forms, such as the number of records created over specific time periods, to analyze integration loads.
Query examples:
To gather relevant data, run the following SQL queries:
1. Retrieve distinct submitters from the HPD:ServiceInterface:
SELECT DISTINCT(Submitter) FROM HPD_ServiceInterface;
2. Count records created by each submitter within the last 30 days (2592000 seconds) for the HPD:ServiceInterface:
SELECT COUNT(*), Submitter
FROM HPD_ServiceInterface
WHERE Create_Date > (SYSDATE - TO_DATE('01-01-1970', 'DD-MM-YYYY')) * 86400 - 2592000
GROUP BY Submitter;
3. Count records created within the last 30 days from the HPD:IncidentInterface_Create:
SELECT COUNT(*), Submitter
FROM HPD_IncidentInterface_Create
WHERE Create_Date > (SYSDATE - TO_DATE('01-01-1970', 'DD-MM-YYYY')) * 86400 - 2592000
GROUP BY Submitter;
4. Count records created within the last 30 days from the CHG:ChangeInterface_Create:
SELECT COUNT(*), Submitter
FROM CHG_ChangeInterface_Create
WHERE Create_Date > (SYSDATE - TO_DATE('01-01-1970', 'DD-MM-YYYY')) * 86400 - 2592000
GROUP BY Submitter;
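AR System stores Create_Date as an integer count of seconds since January 1, 1970, which is why these queries convert SYSDATE to epoch seconds before comparing. When reviewing results, the reverse conversion can be sketched as follows (Oracle syntax, matching the SYSDATE usage above; timezone handling is simplified):
SQL query to render the epoch Create_Date as a readable date:
-- Converts epoch-seconds Create_Date back to an Oracle DATE value.
-- Timezone handling is simplified; adjust for your server's timezone.
SELECT Submitter,
TO_DATE('01-01-1970', 'DD-MM-YYYY') + (Create_Date / 86400) AS created_on
FROM HPD_ServiceInterface;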
Use these statistics to determine which accounts have been leveraged for integrations over the last 30 days and identify high-load integrations that require further testing and performance assessment in the SaaS environment. Document these results in a tabular format for easy review.
Best practices for outbound REST and SOAP calls via workflow
For workflows, it’s critical to identify filters that execute REST calls and ascertain their operational contexts, especially if they are linked to high-transaction forms. Executing these calls asynchronously might be advisable to prevent delays during ticket creation or update processes.
Identifying REST call filters
To identify relevant filters and the corresponding forms, execute the following query:
SELECT filter.name
FROM filter
WHERE (SELECT COUNT(*)
FROM filter_set
WHERE filter.filterid = filter_set.filterid
AND filter_set.assignLong LIKE '%WEBSERVICE%') > 0;
- Review the output of this query for filters that invoke REST calls on significant ITSM transaction forms, including HPD:Help Desk, CHG:Infrastructure Change, WOI:WorkOrder, and TMS:Task.
- If any identified filters are linked to high-transaction forms, investigate further to assess the potential for asynchronous execution.
REST calls involved in creating or updating an incident, change, or problem increase the duration of the entire transaction. This delay affects user experience, as end users must wait longer for their ticket creation and updates to be saved. Additionally, it puts more processing load on the server, as each transaction remains open for an extended period.
Implementing asynchronous execution
To transition from synchronous to asynchronous processing, consider the following approach:
- Create an outbound integration form: This form should contain the fields necessary for the outbound call, including a status field to monitor the processing state. Possible values include:
- Unprocessed
- Processed
- Errored
- Modify existing filters: Instead of directly calling the REST or SOAP service, update the filter to push the necessary fields to the outbound integration form while setting the status to Unprocessed.
- Establish a new filter for outbound calls: Create a filter that triggers outbound REST or SOAP calls when a record with the status Unprocessed is modified and `z1DAction` is set to RUN.
- Set up an escalation mechanism: Implement an escalation that, for records with the status Unprocessed, sets `z1DAction` to RUN.
- Error handling: Include an error handler within the form to capture any failures from outbound calls. Log the error details and update the status to Errored. If the call succeeds, change the status to Processed. (A backlog-monitoring sketch follows this list.)
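Once such a form is in place, a simple query makes the processing backlog visible. This is a minimal sketch, assuming a hypothetical outbound integration form backed by a table named Outbound_Integration with a character Status field; substitute your actual form and field names:
SQL query to monitor the outbound integration backlog (hypothetical names):
-- Counts records in each processing state so that stuck Unprocessed or
-- Errored records are easy to spot; Outbound_Integration is hypothetical.
SELECT Status, COUNT(*) AS record_count
FROM Outbound_Integration
GROUP BY Status
ORDER BY record_count DESC;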
By following these steps, you can effectively manage outbound calls while minimizing the impact on end-user experiences and ensuring efficient integration processes.
Best practices for outbound integrations using webhooks
Webhooks serve as a robust method for implementing outbound integrations by making REST calls based on configurations in the AR System Webhook form.
The objective of this analysis is to identify forms with an excessive number of webhooks. By using this method, you can determine if all existing webhooks are necessary or if any can be consolidated.
Data retrieval queries
To evaluate configurations, use the following SQL queries:
1. Retrieve webhook details:
SELECT *
FROM AR_System_Webhook
WHERE Status = 0;
2. Count webhooks per form:
SELECT Form_Name, COUNT(*)
FROM AR_System_Webhook
GROUP BY Form_Name;
3. Organize the results from these queries in a tabular format. These results provide insights into which forms are associated with webhooks and how many webhooks are defined per form (a consolidation sketch follows).
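To surface likely consolidation candidates directly, you can extend the count query with a HAVING clause. This is a minimal sketch; the threshold of 5 webhooks per form is an assumed starting point rather than a BMC-defined limit:
SQL query for forms with a high webhook count (assumed threshold):
-- Lists active-webhook counts per form above an assumed threshold of 5.
SELECT Form_Name, COUNT(*) AS webhook_count
FROM AR_System_Webhook
WHERE Status = 0
GROUP BY Form_Name
HAVING COUNT(*) > 5
ORDER BY webhook_count DESC;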
Best practices for ODBC-based integrations
Some integrations interface directly with the database by using methodologies such as ETL (Extract, Transform, Load) processes, reporting tools, or Atrium Integrator. While no specific logs are available for automatically identifying these queries, it is crucial to engage in discussions with BMC to pinpoint all instances of direct database access.
For Atrium Integrator jobs, assess whether the jobs can be modified to retrieve data by using API connectors. Furthermore, when transitioning to a Software as a Service (SaaS) model, direct ODBC queries should target the reporting database.
ODBC connections are for querying data. Do not use ODBC connections via Atrium Integrator or direct database calls to modify or create any data in the system, as that will bypass the business logic in the application and can result in data corruption.
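As an illustration of the query-only pattern, a read-only call against the reporting database might look like the following. This is a minimal sketch that assumes the reporting database exposes the standard HPD_Help_Desk view and stores Submit_Date as epoch seconds, as in the interface-form queries earlier; verify both assumptions in your environment:
SQL query for read-only incident reporting over ODBC (assumed names):
-- Read-only SELECT; never modify or create data over ODBC.
-- Retrieves incidents submitted in the last 30 days (2592000 seconds).
SELECT Incident_Number, Submitter, Submit_Date
FROM HPD_Help_Desk
WHERE Submit_Date > (SYSDATE - TO_DATE('01-01-1970', 'DD-MM-YYYY')) * 86400 - 2592000;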
Best practices for integrator job performance
For optimal performance of Atrium Integrator, refer to the comprehensive best practices outlined in the topic Atrium-Integrator-and-Data-Management-jobs-best-practices.
Several common use cases warrant attention, especially for new SaaS customers who need to synchronize data between their SaaS instance of IT Service Management (ITSM) and on-premise systems. This synchronization is often important for:
- Ensuring local access for on-premise integrations and reporting.
- Capturing data in an on-premise big data repository to execute proprietary AI algorithms.
- Facilitating integrations that require restricted access to local data, shielded from public internet exposure.
You may use various tools for this database synchronization, including your own ETL solutions or Pentaho, to construct the necessary transformations. Regardless of the tool employed, we recommend that you adhere to established best practices throughout the synchronization process.
Best practices for syncing data
You might need to synchronize some of your data from your BMC Helix ITSM environment in the cloud to your local system on premises. For more information, see Best Practices when syncing data from BMC Helix ITSM SaaS to on premises.
Best practices for integrations that use external database tables
In specific scenarios, an integration may require access to data stored in tables not managed as AR Form objects. In such cases, direct SQL operations may be necessary to retrieve this data as part of the integration process.
To optimize the handling of these tables, we advise you to manage them within the AR System, allowing integrations and integration-based workflows to interact with them as primary AR System objects. The preferred method is to create the external tables as AR forms and update the integration and workflows to interact with these forms. This approach integrates the tables as part of the application framework.
If maintaining the separation of the table from the AR System is essential, an alternative solution is to create an AR System view form that overlays the custom table. This method retains the existing data storage while providing advantages similar to the previous solution, enabling interactions through the AR System.
In both scenarios, you must modify the code and workflows to engage with the AR System form instead of directly accessing the underlying database table. Implementing either approach grants the capability to apply permissions at various levels, including AR Object, field, or data row levels.
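For the view form alternative, the database side is typically just a view over the external table, which the AR System view form then overlays. This is a minimal sketch, assuming a hypothetical external table named VENDOR_ASSETS; the view form needs a column with unique values (here assumed to be ASSET_ID) to map to the Request ID field:
SQL sketch of a database view for an AR System view form (hypothetical names):
-- VENDOR_ASSETS and its columns are hypothetical; ASSET_ID is assumed to be
-- unique so the AR System view form can map it to the Request ID field.
CREATE VIEW vendor_assets_v AS
SELECT ASSET_ID, ASSET_NAME, OWNER_GROUP, LAST_AUDIT_DATE
FROM VENDOR_ASSETS;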