BMC Helix Innovation Suite Applications benchmark summary
The objective of this benchmark is to characterize the performance and scalability of BMC Helix Innovation Suite applications under realistic field conditions. The benchmark workload combines online user workloads with batch (CMDB) processes running in continuous mode. For details about the workload, see the Definition of Performance Benchmark Use Cases.
A supporting lab environment was built, and its IT infrastructure was used in this benchmark to depict a typical deployment architecture, including the high-availability requirement for all of the applications.
The workload simulations include the following:
- A mixed workload simulating ITSM, Service Request Management (SRM), Helix Dashboard, Smart IT, Progressive Web Application (PWA), LiveChat, Digital Workplace Catalog, Business Workflow, HelixGPT, and ChatBot users, along with the CMDB processes running in continuous mode (see the load-generation sketch after this list)
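As an illustration of how such a mixed online workload can be driven, the following is a minimal Locust sketch. The endpoint paths, task weights, and think times are assumptions for illustration only, not the benchmark's actual test scripts.

```python
# Minimal Locust sketch of a mixed online workload.
# All endpoint paths, weights, and think times below are hypothetical.
from locust import HttpUser, task, between

class MixedItsmUser(HttpUser):
    wait_time = between(5, 15)  # think time between user actions (assumed)

    @task(3)  # browsing assumed more frequent than submitting requests
    def view_recent_incidents(self):
        self.client.get("/api/incidents/recent")  # hypothetical endpoint

    @task(1)
    def submit_service_request(self):
        # hypothetical SRM endpoint and payload
        self.client.post("/api/srm/requests", json={"summary": "benchmark demo"})
```

Such a script would be run with, for example, `locust -f mixed_workload.py --host <target URL> -u 3600` to ramp up to a comparable concurrent-user count.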
The supporting lab environment includes the following:
- An F5 load balancer
- Three pods (HA) of RSSO
- Two pods (HA) each of the MidTier, Smart IT, DWP, Virtual Chat Plugin, Virtual Chat Server, Openfire, Atriumplugin, Catalog-itsm-plugin, Emailengine, and Reportplugin applications
- One pod each of the clamav, cmdbdsmsyncplugin, helix-gpt-assistant, helix-gpt-data-connection, itsmplugin, normplugin, platformplugin, reconciliation-engine, reconciliation-idservice, reconciliation-mergeservice, and rkmplugin applications
- A total of ten Platform (AR System server) pods (HA) configured as an AR Server Group: two pods each of the platform FTS, INT, and SR servers, and four user-facing server pods (platform-usr); see the verification sketch after this list
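To confirm that a cluster matches the pod topology above, the Kubernetes API can be queried for replica counts. The following is a minimal sketch using the official Python kubernetes client; the namespace and deployment names are assumptions, and some components may run as StatefulSets rather than Deployments in an actual install.

```python
# Minimal sketch: compare ready replica counts against the benchmark topology.
# Namespace and deployment names are assumptions; adjust to your cluster.
from kubernetes import client, config

EXPECTED = {
    "rsso": 3,          # three HA pods of RSSO
    "midtier": 2,       # two HA pods each of the listed applications
    "smartit": 2,
    "platform-usr": 4,  # four user-facing AR System server pods
    # ...extend with the remaining deployments listed above
}

def check_replicas(namespace="helix"):
    config.load_kube_config()  # or load_incluster_config() inside the cluster
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        want = EXPECTED.get(dep.metadata.name)
        ready = dep.status.ready_replicas or 0
        if want is not None and ready != want:
            print(f"{dep.metadata.name}: ready={ready}, expected={want}")

if __name__ == "__main__":
    check_replicas()
```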
The following are the key takeaways of the benchmark results:
- For the benchmark with 3,600 concurrent users running the mixed (online and batch) workload, the average end-user response time per use case is within the SLA of 5 seconds, as illustrated by the check sketched below.
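The SLA check itself reduces to comparing per-use-case average response times against the 5-second threshold. The following minimal sketch shows that comparison; the use-case names and timing samples are illustrative placeholders, not the benchmark's measurements.

```python
# Minimal sketch of the per-use-case SLA check described above.
# Use-case names and response times (seconds) are illustrative only.
from statistics import mean

SLA_SECONDS = 5.0

timings = {
    "create_incident": [1.2, 1.8, 2.1],
    "search_knowledge": [0.9, 1.1, 1.4],
}

for use_case, samples in timings.items():
    avg = mean(samples)
    status = "OK" if avg <= SLA_SECONDS else "BREACH"
    print(f"{use_case}: avg={avg:.2f}s ({status})")
```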