Differences in behavior of applications when using PostgreSQL and other databases
When you move the database engine from Microsoft SQL Server to PostgreSQL, you might observe some changes in the behavior of your applications, primarily due to database collation differences.
The following sections describe some of the known changes in behavior in BMC Helix IT Service Management applications after you upgrade from a version earlier than 21.x to version 21.x or later, due to the change in the database engine layer. Each section lists the behavior when using PostgreSQL, the behavior when using Microsoft SQL Server, the limitation or customer experience, and the resolution or workaround.
Sorting data containing special characters in ticket summary
- Using PostgreSQL: If the data being sorted contains special characters, such as hyphens or brackets, it is sorted differently than in other databases. With default settings, sorting on the Summary field in ascending order orders ticket summaries according to the PostgreSQL collation.
- Using Microsoft SQL Server: With default settings, sorting on the Summary field in ascending order orders the same data according to the SQL Server collation, which can produce a different order.
- Limitation: Data might be sorted differently in the UI or in workflows after upgrading to version 21.x or later.
- Resolution: If the changed sort order is not acceptable for business reasons, use one of the available options to resolve the issue.
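The effect of collation rules on special characters can be sketched client-side. The following hypothetical Python snippet (not BMC code; the sample summaries are made up) contrasts plain codepoint ordering, where a hyphen sorts before any letter, with an ordering that ignores hyphens, the way some dictionary-style collations do:

```python
# Illustration only: two collation rules can order the same summaries differently.
summaries = ["co-op", "coop", "cook"]

# Codepoint order: '-' (0x2D) sorts before any letter.
binary_order = sorted(summaries)

# "Dictionary" order: ignore hyphens when comparing.
word_order = sorted(summaries, key=lambda s: s.replace("-", ""))

print(binary_order)  # ['co-op', 'cook', 'coop']
print(word_order)    # ['cook', 'co-op', 'coop']
```

The same data, two valid orders; this is the kind of difference you can see after the database engine changes.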
Accent-sensitive searches
- Using PostgreSQL: PostgreSQL does not support accent-insensitive collation searches, so similar-sounding words, such as "èvan" and "evan", are treated as different strings. Best practice: use FTS indexes so that searches do not depend on the accent-insensitive setting of the database.
- Using Microsoft SQL Server: Microsoft SQL Server supports accent-insensitive collation in searches, so similar-sounding words, such as "èvan" and "evan", are treated as the same string.
- Limitation: If searches use qualifications (the WHERE clause in the database) that rely on accented characters, the results might differ between Microsoft SQL Server and PostgreSQL. Accent-sensitive searches do not produce the expected results, as explained in the PostgreSQL example.
- Resolution: If it is important to treat accented strings in the same manner as non-accented strings in searches, enable FTS indexes on those fields. FTS indexes have a configuration option to enable accent-insensitive searches.
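As a client-side illustration of why "èvan" and "evan" compare as different strings, the following hypothetical Python sketch (not part of the product) strips combining accent marks via Unicode NFD normalization; this is conceptually what an accent-insensitive collation or an FTS index with accent-insensitive settings does for you:

```python
import unicodedata

def strip_accents(text: str) -> str:
    """Remove combining accent marks so 'èvan' compares equal to 'evan'."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

# Plain string comparison is accent sensitive, as in PostgreSQL:
print("èvan" == "evan")                                # False
# After stripping accents, the two words match:
print(strip_accents("èvan") == strip_accents("evan"))  # True
```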
Handling ASCII NUL characters
- Scenario: You run a UDM job to sync LDAP user data into the Helix system. One of the fields in the LDAP user records is binary and contains an ASCII NUL character. This LDAP binary field is mapped to one of the character fields on the CTM:LoadPeople form.
- Using PostgreSQL: ASCII NUL characters are represented as \0. PostgreSQL does not allow the null byte ('\0') in a string in char/text/varchar fields. If you try to store a string that contains null bytes, you receive an error.
- Using Microsoft SQL Server: Microsoft SQL Server and other databases allow the null byte ('\0') in a string in char/text/varchar fields.
- Limitation: ASCII NUL characters are not loaded from external data entry. Any existing data that contains such characters is lost during migration.
- Resolution: Because this character cannot be loaded into the target PostgreSQL database, modify the calling program or workflow to stop sending it. If ASCII NUL characters existed before the upgrade, remove them in the source and then perform the migration.
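One way to stop sending the character is to sanitize values before they reach the database. A minimal Python sketch (a hypothetical helper, not part of UDM) that removes ASCII NUL bytes from a decoded attribute value:

```python
def remove_nul(value: str) -> str:
    """Strip ASCII NUL ('\x00') characters, which PostgreSQL rejects
    in char/text/varchar columns."""
    return value.replace("\x00", "")

# An LDAP binary attribute decoded into a string may carry a NUL byte:
raw = "jsmith\x00"
print(repr(remove_nul(raw)))  # 'jsmith'
```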
Attachment size difference
- Scenario: When you upload files to attachment fields on a server, the maximum file size limit differs between Microsoft SQL Server and PostgreSQL databases.
- Using PostgreSQL: The limit for storage in a row is 1 GB. Typically, the file size should be less than 1 GB.
- Using Microsoft SQL Server: The limit for storage of files is 2 GB.
- Limitation: Attachments or data with a cumulative size of 1 GB or more are not accepted in the system. Attachments that are larger than 1 GB at the source before migration are lost during migration.
- Resolution: Do not upload attachments that are larger than 1 GB. BMC provides a list of entries that might contain attachments larger than 1 GB. Download such attachments from the source system before migration, upload them elsewhere, such as an internal FTP site or OneDrive, and provide links in the ticket for end users.
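A client-side guard can enforce the 1 GB limit before an upload is attempted. This is a hypothetical Python sketch (the function and constant names are illustrative, not a product API):

```python
import os

MAX_ATTACHMENT_BYTES = 1 << 30  # 1 GB, the PostgreSQL in-row storage limit

def attachment_fits(path: str) -> bool:
    """Return True if the file is small enough to attach safely."""
    return os.path.getsize(path) < MAX_ATTACHMENT_BYTES
```

A caller would check `attachment_fits(path)` and, for oversized files, store the file elsewhere and attach a link instead, as described above.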
Search for the backslash (\) character
- Using PostgreSQL: Searching for the backslash (\) character does not return correct results unless the character is escaped, because \ is the default escape character in PostgreSQL. If the data contains a single backslash, for example, onbmc\user, and it must be used in the WHERE clause of a query, the backslash has to be escaped.
- Using Microsoft SQL Server: If the data contains a single backslash, for example, onbmc\user, and it is used in the WHERE clause of a query, it can be used as is.
- Limitation: Custom workflows that use direct SQL with the backslash character do not return the expected results after migration.
- Resolution: Fix the custom workflows to use the correct qualification based on the database type. Use the $DATABASE$ keyword in the workflow qualification to ensure behavior that is independent of the database.
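The database-dependent escaping can be sketched in Python; this is a hypothetical helper mirroring what a $DATABASE$-aware qualification would do (the function name and parameter are assumptions, and the escaping rule follows the description above): double the backslash only when the target database is PostgreSQL.

```python
def qualification_literal(value: str, database: str) -> str:
    """Escape a literal for use in a WHERE clause, per database type.

    Per the behavior described above, PostgreSQL treats '\\' as an
    escape character, so it must be doubled; Microsoft SQL Server
    accepts the value as is.
    """
    if database == "PostgreSQL":
        return value.replace("\\", "\\\\")
    return value

print(qualification_literal("onbmc\\user", "PostgreSQL"))  # onbmc\\user
print(qualification_literal("onbmc\\user", "SQL Server"))  # onbmc\user
```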
Creating or modifying data on a View form fails in PostgreSQL
Creating or modifying data on a View form fails if the View form fetches data from a database view that is not automatically updatable. This issue occurs in the PostgreSQL database. See the information on updatable views in the PostgreSQL documentation.
The following list describes the difference in behavior of BMC Helix IT Service Management applications when using PostgreSQL and Microsoft SQL Server, the limitation, and the workaround:
- Using PostgreSQL: Most database views are not updatable by default. Views with joins or views that use aggregate functions are not updatable in PostgreSQL.
- Using Microsoft SQL Server: Many views are updatable by default because the SQL Server Database Engine automatically finds and maps the update to the underlying tables.
- Limitation: Direct SQL from workflows that directly updates database views might fail because of the PostgreSQL limitation.
- Resolution: Change the logic in the workflow to use better constructs, such as the Set Fields or Push Fields action, wherever possible.
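The failure mode can be reproduced with SQLite, used here only because it ships with Python and its plain views are likewise not directly updatable; PostgreSQL raises a similar error when direct SQL updates a view that is not automatically updatable, such as a view with joins or aggregates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, city TEXT)")
conn.execute("INSERT INTO people VALUES ('Ann', 'Pune')")
# A database view over the base table:
conn.execute("CREATE VIEW people_view AS SELECT name, city FROM people")

try:
    # Direct SQL that updates the view fails, as it does in PostgreSQL
    # when the view is not automatically updatable.
    conn.execute("UPDATE people_view SET city = 'Austin'")
except sqlite3.OperationalError as err:
    print("update failed:", err)
```

Updating the base table directly (the equivalent of a Set Fields or Push Fields action targeting the underlying form) succeeds where the update through the view does not.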
Best practices to improve overall database performance
Best practices for creating or updating workflows and indexes
We recommend the following best practices while working with databases:
- AR Administrators who create and update workflows must use ANSI SQL in Direct SQL actions.
- For database-specific SQL, make sure that the workflow checks the $DATABASE$ keyword value, and write the workflow accordingly so that it works with all databases.
- While creating indexes on forms, use generic rules that work with all database types, such as limiting the number of fields and the total length of the index. Not following these rules might result in indexes not being created, and performance might not be as expected.