How the performance tests were conducted
As BMC Helix Multi-Cloud Service Management provides out-of-the-box integrations with multiple applications, it is important to consider the interaction between the source and destination systems when designing the performance test scenarios and test automation scripts. In our testing, the environment included integrations between BMC Remedy with Smart IT, Salesforce, and Atlassian Jira.
Micro Focus Silk Performer was used to simulate the BMC Helix Multi-Cloud Service Management use cases. The BMC Helix Multi-Cloud Service Management performance workload used three types of end-user applications:
- Smart IT
- Atlassian JIRA
- Salesforce
As part of the simulation, the Silk Performer script logs in users on both the source and destination systems. The source system's user then performs an action that triggers a flow in BMC Helix Multi-Cloud Service Management, which was configured to poll the given flows at three-minute intervals. For verification, the script checks whether the integration worked as expected and the destination system returned the required data to the source system; this verification polling happens every 30 seconds. Error handling in the script ensures that the virtual user marks the transaction as failed and exits if the required data is not received within 6 minutes.
The Silk Performer script for DevOps TC1 - Create Change illustrates this approach. In this flow, the Jira application creates an issue and expects the change request to appear as part of a label; if the label is found, the integration is marked as successful. Custom logging statements are added to the script to track virtual user activity and help debug issues, and the simulation scripts use user-defined parameters to handle multi-tenancy.
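The actual script is written in Silk Performer's scripting language and is not reproduced here; the following minimal Python sketch illustrates the equivalent create-and-verify logic, assuming the standard Jira REST API. The URL, credentials, project key, and expected label text are placeholders.

```python
# Minimal sketch of the DevOps TC1 - Create Change verification logic.
# All configuration values below are placeholders for illustration only.
import time
import logging
import requests

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("tc1-create-change")

JIRA_URL = "https://jira.example.com"   # placeholder Jira base URL
AUTH = ("perf_user", "perf_password")   # placeholder credentials
PROJECT_KEY = "DEVOPS"                  # placeholder project key
EXPECTED_LABEL_TEXT = "ChangeRequest"   # placeholder marker expected on the issue label

POLL_INTERVAL_SECONDS = 30              # verification polling interval
TIMEOUT_SECONDS = 6 * 60                # mark the transaction as failed after 6 minutes


def create_issue():
    """Create a Jira issue that triggers the Multi-Cloud Service Management flow."""
    payload = {
        "fields": {
            "project": {"key": PROJECT_KEY},
            "summary": "Performance test - DevOps TC1 Create Change",
            "issuetype": {"name": "Task"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    issue_key = resp.json()["key"]
    log.info("Created Jira issue %s", issue_key)
    return issue_key


def verify_label(issue_key):
    """Poll the issue until the expected label appears, or fail after the timeout."""
    deadline = time.monotonic() + TIMEOUT_SECONDS
    while time.monotonic() < deadline:
        resp = requests.get(f"{JIRA_URL}/rest/api/2/issue/{issue_key}", auth=AUTH)
        resp.raise_for_status()
        labels = resp.json()["fields"].get("labels", [])
        if any(EXPECTED_LABEL_TEXT in label for label in labels):
            log.info("Integration succeeded for %s: expected label found", issue_key)
            return True
        log.info("Label not yet present on %s; retrying in %ss", issue_key, POLL_INTERVAL_SECONDS)
        time.sleep(POLL_INTERVAL_SECONDS)
    log.error("Integration failed for %s: label not found within %ss", issue_key, TIMEOUT_SECONDS)
    return False


if __name__ == "__main__":
    key = create_issue()
    if not verify_label(key):
        raise SystemExit(1)   # the virtual user marks the transaction as failed
```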
Workload Composition
The generated workload was controlled by the Silk Performer Controller and executed by the Silk Performer agent. In the current test bed, the Silk Performer Controller and Agent ran on the same machine. For information about the number of users of each type and the number of use case transactions, see Performance-workload-details.
The user ramp-up performed by the Silk Performer agent used the Increasing workload model, as shown in the following screen. This model is especially useful for finding the load level at which your system crashes or no longer responds within acceptable response-time or error thresholds.
The following screenshot shows 450 users configured with all use cases in the increasing scenario.
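Conceptually, the increasing model raises the load in steps until a threshold is breached. The following Python sketch shows only that control logic; the step sizes and thresholds are hypothetical, and run_step is a placeholder for the Silk Performer agent driving the virtual users through the recorded use cases.

```python
# Control-logic sketch of an "increasing" workload model (not Silk Performer itself).
import statistics

# Hypothetical ramp steps and acceptance thresholds for illustration only.
RAMP_STEPS = [50, 100, 150, 200, 250, 300, 350, 400, 450]   # concurrent virtual users
MAX_AVG_RESPONSE_SECONDS = 5.0
MAX_ERROR_RATE = 0.05


def run_step(user_count):
    """Run one load step and return (response_times_in_seconds, error_count).

    Placeholder: in the actual test, the Silk Performer agent drives
    `user_count` virtual users through the recorded use cases.
    """
    raise NotImplementedError("replace with a real load-generation step")


def find_breaking_point():
    """Increase the load step by step until the thresholds are breached."""
    for users in RAMP_STEPS:
        response_times, error_count = run_step(users)
        avg_response = statistics.mean(response_times)
        error_rate = error_count / max(len(response_times), 1)
        if avg_response > MAX_AVG_RESPONSE_SECONDS or error_rate > MAX_ERROR_RATE:
            return users      # first load level that no longer meets the thresholds
    return None               # every step stayed within acceptable limits
```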
Use case metrics
Silk Performer measured server response times from the client's perspective, that is, from the time an HTTP request is issued to the time the HTTP response is received from the server. The total time for a given use case therefore runs from the moment the first HTTP request is sent to the moment the last HTTP response of that use case's HTTP sequence is received. With this metric, JavaScript execution time and UI rendering time in the browser are not accounted for.
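As a rough illustration of this client-side metric, the following Python sketch times a use case as the span from the first request being issued to the last response arriving; the URLs are placeholders, and no browser-side JavaScript or rendering is involved.

```python
# Sketch of the client-side timing metric for one use case's HTTP sequence.
import time
import requests

# Placeholder URLs representing the HTTP sequence of one use case.
USE_CASE_SEQUENCE = [
    "https://smartit.example.com/app/",
    "https://smartit.example.com/rest/tickets",
]


def measure_use_case(urls):
    """Elapsed time from the first request being issued to the last response arriving."""
    start = time.monotonic()
    for url in urls:
        requests.get(url)   # only server round trips are timed; no JavaScript or rendering runs
    return time.monotonic() - start


if __name__ == "__main__":
    print(f"Use case time: {measure_use_case(USE_CASE_SEQUENCE):.2f} s")
```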
Monitoring during Performance Tests
System utilization was captured using the Perfmon and nmon utilities on Microsoft Windows and Linux systems, respectively. JVM monitoring was performed using Java VisualVM (jvisualvm), which indicates whether a memory leak is affecting performance over time.
Below is a sample JVM snapshot for the BMC Innovation Suite server.