Best practices to convert nonstandard customizations to standard customizations
A nonstandard customization is a change made in the system without using a BMC-provided or recommended mechanism or API. Having nonstandard customizations in the system can cause functional or performance issues during upgrades or regular system use.
Use this topic to review the best practices to convert nonstandard customizations to standard customizations. This topic also includes best practices to improve the performance of workflows, escalations, filters, and applications.
For example, the following nonstandard customizations and integrations are not supported on BMC Helix environments:
- Providing direct database access to update forms or tables
- Processing files and scripts through workflows
- Creating database views on forms or tables
- Running direct SQL updates through workflows
Also, the following customizations affect the performance of applications and workflows:
- Escalations that update huge amounts of data in a single transaction
- Queries that do not have valid qualifications or do not use appropriate indexes
Best practices for active links and menus
Active links are client-side workflows that communicate with the server to execute various business logic. However, several factors can impact performance and security, requiring careful review.
Review the number of queries and updates (Set Fields and Push Fields actions) that active links perform, because multiple calls from the client to the server can cause performance issues. Each round trip travels over the internet, so any network latency has a cumulative effect that makes the client seem slow. A more efficient approach for SaaS is to move all queries to the server and use the Service construct in AR System to handle these queries and return the results efficiently.
Additionally, direct SQL calls from active links and menus are never supported in SaaS because of security concerns; these calls can be intercepted and used to perform actions that breach application security. If your application requires direct SQL functionality that other mechanisms cannot support, you must redesign the SQL in active links to use Service actions running on the server.
You must:
- Obtain the necessary approvals from BMC.
- Change the direct SQL statement to filter Set Fields or Push Fields actions.
- Avoid using active links to make multiple calls to the server to retrieve data. Instead, perform queries by using filters on the server side, and make sure that the active link makes a single service call to a form containing these filters and then returns the results (see the sketch after this list).
- To improve active link performance, simplify the qualification for active links and combine active links that use the same qualification. This method is more efficient than designing two active links that are identical except for their Execute On selection. For example, if you want your users to either click a button or press Return to open a selection menu list, design both Execute On actions in the same active link. The same principle applies to filters: simplify the qualification and combine filters that use the same qualification.
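For illustration, the following minimal Python sketch contrasts the two patterns by using the AR System REST API. The server URL, token, form name, and field names are hypothetical placeholders; the point is that one server-side call returning only the needed fields replaces several client round trips.

```python
import requests

AR_SERVER = "https://example-ar-server:8443"   # hypothetical server URL
TOKEN = "<AR-JWT token from /api/jwt/login>"   # obtained via the REST login endpoint
HEADERS = {"Authorization": "AR-JWT " + TOKEN}

# Inefficient pattern: one round trip per lookup, which is what chained
# client-side Set Fields actions effectively do.
def lookup_each_field(incident_id):
    values = {}
    for field in ("Status", "Assigned Group", "Priority"):    # hypothetical fields
        resp = requests.get(
            f"{AR_SERVER}/api/arsys/v1/entry/HPD:Help Desk",  # hypothetical form
            params={"q": f"'Incident Number' = \"{incident_id}\"",
                    "fields": f"values({field})"},
            headers=HEADERS,
        )
        resp.raise_for_status()
        entries = resp.json()["entries"]
        if entries:
            values[field] = entries[0]["values"][field]
    return values

# Efficient pattern: one call that asks the server for all needed fields at
# once, which is what a server-side service call backed by filters achieves.
def lookup_once(incident_id):
    resp = requests.get(
        f"{AR_SERVER}/api/arsys/v1/entry/HPD:Help Desk",
        params={"q": f"'Incident Number' = \"{incident_id}\"",
                "fields": "values(Status,Assigned Group,Priority)"},
        headers=HEADERS,
    )
    resp.raise_for_status()
    entries = resp.json()["entries"]
    return entries[0]["values"] if entries else {}
```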
Best practices for workflows
Best practices for direct SQL commands in a filter workflow
Do not use direct SQL commands in filter workflows. Instead, consider rewriting the direct SQL so that it does not run during an update.
Incorporating direct SQL commands into workflows poses a risk because these commands might try to modify records that a transaction is already updating. To avoid this risk, use a workflow pattern that prevents the workflow from running during an update.
If direct SQL is necessary for a particular action, you must get it approved as an exception and write it carefully by following these guidelines:
- When you run the SQL statement on a PostgreSQL database, include the keyword PARENT TRANSACTION before your SQL statement. This keyword makes sure that your statement executes within the current transaction (see the sketch after these guidelines).
- Review what data is updated and how often. Avoid direct SQL statements in filter workflow where both the direct SQL statement and the workflow update the same record repeatedly, because this leads to record locking. Be careful about how the workflow is written and how the updates are made.
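As a minimal illustration, the following snippet shows the shape of such a statement as you might define it in a Direct SQL filter action. The table and column names are hypothetical placeholders; T-table and C-column names follow the AR System convention of T<schema ID> and C<field ID>.

```python
# Hypothetical Direct SQL filter action text for a PostgreSQL back end.
# T1234 and C536870913 are placeholders for your own table and column;
# $1$ is the AR System reference to the current Request ID (field 1).
DIRECT_SQL_ACTION = (
    "PARENT TRANSACTION "    # runs the statement within the current transaction
    "UPDATE T1234 SET C536870913 = 'Processed' WHERE C1 = '$1$'"
)
```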
Best practices for filter workflows running external scripts
BMC Helix SaaS does not support filter workflows that run scripts or processes on the server or that save data on the server, and it does not allow nonstandard customizations that call external scripts such as bat, shell, or Perl scripts.
First, try to rewrite the functionality by using standard workflow constructs. If existing workflow constructs cannot meet the requirement, or if converting direct SQL to workflow is not possible, write a custom plugin or coded bundle that performs the actions in the context of the server, and create a custom plugin container to host your plugins.
Best practices for workflows making external calls
When a workflow creates transaction data and also performs external, internal, or resource-intensive calls, run those calls asynchronously to avoid delays in submitting or updating the ticket.
It is easy to overlook that certain processing tasks are resource-intensive or involve external calls, which leads to workflow processing delays. Examples of such tasks include Set Fields actions that perform REST or web service calls, and Push Fields or service calls that require significant processing (see the sketch below).
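As a minimal sketch of the asynchronous pattern, the following standard-library Python example records the external call in an in-memory queue and lets a background worker perform it, so the submit path returns immediately. In AR System itself you would achieve the same effect with, for example, a staging form processed by an escalation rather than an inline Set Fields call; the endpoint URL here is hypothetical.

```python
import queue
import threading
import urllib.request

# Work queue standing in for a staging form that an escalation would process.
pending_calls = queue.Queue()

def submit_ticket(ticket_id):
    """Fast path: create the ticket, then enqueue the external call
    instead of performing it inline during the transaction."""
    # ... create the ticket record here ...
    pending_calls.put(ticket_id)          # returns immediately
    return ticket_id

def worker():
    """Background path: drain the queue and make the slow external calls."""
    while True:
        ticket_id = pending_calls.get()
        try:
            # Hypothetical external endpoint notified about the ticket.
            urllib.request.urlopen(
                f"https://integration.example.com/notify?ticket={ticket_id}",
                timeout=30,
            )
        except OSError:
            pending_calls.put(ticket_id)  # simple retry: requeue on failure
        finally:
            pending_calls.task_done()

threading.Thread(target=worker, daemon=True).start()
```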
Best practices for escalations
Follow these guidelines to help you design efficient escalations:
- Use the minimum number of escalations required for your workflow.
- Run escalations with qualifications that use indexed fields when possible. For more information about indexed fields, see Table field indexing considerations.
- Streamline your escalations by including all available criteria in the qualification. Unqualified escalations run against every record for the form and might process some records unnecessarily.
- Avoid running escalations during peak user load times.
- Stagger long-running escalations in different pools to avoid overloading the system.
- Avoid running long-running escalations on the default pool, which runs many small out-of-the-box (OOTB) escalations that might otherwise get delayed.
- Avoid running conflicting escalations (operating on the same data set) simultaneously in different pools.
- Allow escalations the time they need to complete before the next escalation activates.
An example is an escalation that searches the database for 30,000 requests but is set to execute every minute. Escalations are processed in sequence, and an escalation does not run until the escalation scheduled immediately before it has completed.
- Use the escalation log to identify the times escalations run, how long they take to complete, and the types of actions your escalations perform. Remember that an escalation can modify a request.
- Minimize the impact of blocking operations to help maintain system performance. A blocking operation is an action performed during filter processing that waits for a DBMS or an external process to return the requested information. Blocking operations are caused by Set Fields filter actions, Push Fields filter actions, and $PROCESS$ actions that retrieve information from a DBMS or an external process.
- Allow escalations to run against each change form individually, ensuring more manageable database transactions. The following guidelines help you keep database transactions manageable:
  - Move the escalation layer down a level to mitigate large transactions and deep filter execution stacks. Updating a single record during escalations can lead to numerous updates across different levels, which often results in large transactions and the execution of deep filter stacks that are inefficient and problematic.
  - Carefully consider data volume when implementing asynchronous processes; ignoring data volume can lead to issues. For example, a process created for updating all planning status changes might trigger integrations through workflows that sometimes update other change requests. Typically, the system updates a single record, such as a SYS:Action record, activating a Push Fields action that updates all necessary changes. However, this approach has drawbacks: because all updates occur within one transaction, a single error rolls back the entire transaction, and the approach can overload the server with too many filters and lead to long database transactions, affecting performance. The sketch after this list shows a chunked alternative.
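As a minimal sketch of the chunked alternative, the following Python example uses the AR System REST API to update matching records in small batches, committing each update independently so that one failure does not roll back the rest. The server URL, token, form name, and qualification are hypothetical.

```python
import requests

AR_SERVER = "https://example-ar-server:8443"     # hypothetical server URL
HEADERS = {"Authorization": "AR-JWT <token>"}    # token from /api/jwt/login
FORM = "CHG:Infrastructure Change"               # hypothetical target form
BATCH = 200                                      # small pages keep transactions short

def update_in_chunks(qualification, new_values):
    """Update matching entries a page at a time instead of with one huge
    Push Fields transaction. Assumes new_values makes updated entries stop
    matching the qualification, so the loop terminates."""
    while True:
        resp = requests.get(
            f"{AR_SERVER}/api/arsys/v1/entry/{FORM}",
            params={"q": qualification, "limit": BATCH,
                    "fields": "values(Request ID)"},
            headers=HEADERS,
        )
        resp.raise_for_status()
        entries = resp.json()["entries"]
        if not entries:
            break
        for entry in entries:
            entry_id = entry["values"]["Request ID"]
            # Each PUT commits on its own, so one bad record does not
            # roll back the whole batch.
            r = requests.put(
                f"{AR_SERVER}/api/arsys/v1/entry/{FORM}/{entry_id}",
                json={"values": new_values},
                headers=HEADERS,
            )
            if r.status_code >= 400:
                print(f"Entry {entry_id} failed: {r.text}")  # log and continue
```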
Best practices for using custom database views
In BMC Helix SaaS, custom database views are not supported out of the box. To use custom database views, you must rewrite the custom database views by using a standard workflow. If you still want to use custom database views, you must get the necessary approvals from BMC for using a plugin to perform this action.
Custom SQL database views might operate differently depending on the database they originate from. Additionally, you might have used specialized stored procedures, which can lead to issues. Avoid utilizing custom database views unless there are no other methods to retrieve the necessary data.
Stored procedures and triggers are not allowed, and you must convert these procedures and triggers into workflows to occur within the platform context.
Common reasons to use custom database (DB) views include:
- When you want to join ITSM tables with external tables.
- When you need to create a complex join in a performant way.
Issues that can arise with custom database views are:
- The version or database vendor used for the on-premises system might differ from the SaaS system, causing the view to operate differently or less efficiently.
- Application-defined permissions to the data might be bypassed.
- Data might not be indexed correctly, leading to performance issues.
Assess each custom view to determine whether there is a way to implement the same functionality within the platform itself. Some best practices include:
- Incorporating external tables as forms instead of as tables that are not managed in the platform.
- Converting stored procedures and triggers into workflows that run within the platform context.
Best practices for custom join forms and custom fields
Take note of the following practices when you create join forms:
- Do not create multiple layers of join forms. Multiple layers of join forms can degrade database and overall system performance.
- Make sure that you create joins on indexed fields. Joins on non-indexed fields slow system performance.
- Maintain a minimum number of diary fields. Performance decreases when a character field size exceeds 255 bytes (4000 bytes for the Oracle database), and the impact on a form's performance increases with the number of diary fields. You can design most AR System applications effectively by using one or two diary fields.
- If you maintain multiple form views with trim or control fields, do not duplicate screen objects unnecessarily; when possible, share screen objects between views. The more screen objects you create (data fields, control fields, and trim), the larger your forms become and the longer they take to load, display, or switch to another view.
- Avoid using many toolbar buttons with different bitmaps in multiple views; this also increases the form size.
- If you need to include an image, use a JPEG instead of a bitmap. The file size is generally smaller for JPEG files, and the form takes less time to load.
- Build custom joins and custom fields in a way that optimizes performance. You might create custom joins and fields for use in workflows, reports, and integrations. Common issues include custom joins that are built without considering depth and indexing, and a lack of indexing on custom fields, which results in slow queries.
- Identify potential bottlenecks by using the platform's server statistics, specifically the Longest SQLs and Longest APIs sections, and strategize on improving their performance. The platform captures any SQL or API calls that take longer than 5 seconds in these sections, allowing you to pinpoint the longest-running queries. For PostgreSQL and Oracle, obtain the query plans to help identify where you can add indexes to speed up queries (see the sketch after this list).
- Avoid using workflows that interact with the file system.
- Do not use process commands to run scripts.
- While interacting with FTP sites for data transfer is generally permitted, do not use the file system to write files within the application's business logic. Additionally, filters cannot run scripts on the file system.
- Review custom joins and forms to ensure that they are not unnecessarily FTS indexed. You might accidentally copy and paste fields from forms that have FTS properties set, leading to fields being unnecessarily indexed. Review and confirm whether a field needs to be FTS indexed, because unnecessary indexing adds extra load to the server.
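As a minimal sketch of obtaining a query plan, the following Python example runs EXPLAIN through psycopg2 against a PostgreSQL database. This assumes an environment where you have direct database access, such as an on-premises or development copy; in BMC Helix SaaS you would typically work with BMC to obtain the plan. The connection settings, table, and column are hypothetical.

```python
import psycopg2

# Hypothetical connection to a development copy of the AR database.
conn = psycopg2.connect(
    host="localhost", dbname="ARSystem", user="aruser", password="secret"
)

# A slow query lifted from the Longest SQLs statistics (placeholder names).
slow_sql = "SELECT C1 FROM T1234 WHERE C536870913 = 'Pending'"

with conn.cursor() as cur:
    # EXPLAIN shows the plan without running the statement; a sequential scan
    # on a large table suggests a missing index on the filtered column.
    cur.execute("EXPLAIN " + slow_sql)
    for (line,) in cur.fetchall():
        print(line)

conn.close()
```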
Best practices for Set Fields and Push Fields actions
Avoid blocking operations when possible because they can affect all users, and blocking operations typically are not scalable. However, you might need to use blocking actions for some processes.
The following practices help you minimize performance issues when you use these action types.
- Use filters instead of active links to perform Set Fields and Push Fields actions, especially if the active link Execute On condition is Submit or Modify.
The advantage is that the server (filter) performs the Set Fields action faster than the client (active link). For example, an active link that performs a Set Fields action on submit pulls information from the server only to push that information back; system performance improves if a filter performs the Set Fields action on the server.
- Use only efficient searches in these actions, especially if the workflow executes the search frequently. Efficient searches define where the system looks for the data (usually by using an index). You can improve performance by designing actions to retrieve only the necessary columns, especially when the excluded columns are diary fields or attachment fields. The biggest performance improvement, however, still depends on how well the search is defined.
- Do not perform Set Fields actions in a filter if the user must see the data retrieved by the Set Fields action before the Submit or Modify operation, or if the data retrieved by the Set Fields action depends on client-based workflow.
- Limit the use of Set Fields and Push Fields actions that include database searches or other external blocking actions.
- To improve $PROCESS$ action performance, have one $PROCESS$ action execute one resource-demanding command and return the results to a temporary field.
The remaining actions can retrieve and parse data from the temporary field. For example, if you set five fields, write all the data to a temporary field with the first $PROCESS$ operation and have the remaining actions retrieve the data from that local field (see the sketch below). A better solution is to redesign the process to use the Filter API, which uses one long-running process and is therefore more efficient and significantly faster. For more information, see Developing an API program.
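As a minimal sketch of the single-command pattern, the following Python script stands in for the one resource-demanding command: it gathers all five values and prints them as one delimited line, which the first $PROCESS$ action stores in a temporary field. The remaining Set Fields actions then parse that field instead of spawning four more processes. The value names and delimiter are hypothetical.

```python
#!/usr/bin/env python3
# Hypothetical helper run by a single $PROCESS$ action. It performs the
# expensive work once and emits every needed value on one line, so the
# workflow spawns one process instead of five.
import sys

def gather_values(ticket_id):
    # Placeholder for the resource-demanding work (lookups, calculations, ...).
    return {
        "status": "Approved",
        "owner": "jsmith",
        "region": "EMEA",
        "priority": "High",
        "sla": "24h",
    }

if __name__ == "__main__":
    ticket_id = sys.argv[1] if len(sys.argv) > 1 else ""
    values = gather_values(ticket_id)
    # One delimited line lands in the temporary field; subsequent Set Fields
    # actions parse it with string functions instead of new $PROCESS$ calls.
    print("|".join(values.values()))
```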