How the performance tests were conducted


As BMC Helix Multi-Cloud Broker provides out-of-the-box integrations with multiple applications, it is important to consider the interaction between the source and destination systems while designing the performance test scenarios and test automation scripts. In our testing, the environment included integrations between BMC Helix ITSM, Salesforce, and Atlassian JIRA Software.

Micro Focus Silk Performer was used to simulate BMC Helix Multi-Cloud Broker use cases. The following end-user applications generated the BMC Helix Multi-Cloud Broker performance scenario workload:

  1. BMC Helix ITSM
  2. JIRA 
  3. Salesforce

As a part of the simulation, the Silk Performer script logs users in to both the source and destination systems. The source system user performs an action that ultimately triggers a flow in BMC Helix Multi-Cloud Broker, which was configured to poll the given flows at 3-minute intervals. As a part of the verification, the script checks whether the integration worked as expected and whether the destination system properly returned the required data to the source system. The simulation script polls for this verification every 30 seconds. If the required data is not received within 6 minutes, the script's error handling takes over: the virtual user marks the transaction as failed and exits.
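The verification logic described above reduces to a poll-with-timeout loop. The following is a minimal Python sketch of that logic, assuming the 30-second poll interval and 6-minute timeout from the test design; the function names and the check callback are illustrative placeholders, not the actual Silk Performer (BDL) script.

```python
import time

def verify_integration(check, poll_interval_secs=30, timeout_secs=6 * 60):
    """Poll the destination system every 30 seconds; give up after 6 minutes.

    `check` is a hypothetical callable that returns True once the destination
    system has returned the required data to the source system.
    """
    deadline = time.monotonic() + timeout_secs
    while time.monotonic() < deadline:
        if check():
            return True                   # integration verified
        time.sleep(poll_interval_secs)    # wait before the next poll
    return False                          # virtual user marks the transaction failed and exits

# Example usage with a placeholder check:
# success = verify_integration(lambda: destination_has_required_data())
```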

The following image shows a snippet of the Silk Performer script for DevOps TC1 - Create Change. In this flow, the JIRA application creates an issue and expects a Change Request to be included as part of the issue's label. If the label is found, the integration is marked as successful. To track virtual user activity for debugging purposes, custom logging statements are added to the script. The simulation scripts are prepared with user-defined parameters to handle multi-tenancy.

[Image: Silk Performer script snippet for DevOps TC1 - Create Change (image2018-7-17_16-53-10.png)]
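To make the TC1 verification concrete, the sketch below checks a JIRA issue's labels for the expected Change Request through the standard Jira REST API (GET /rest/api/2/issue/{issueKey}?fields=labels). The base URL, credentials, issue key, and change request ID are placeholder assumptions; the actual test performed this check from the Silk Performer script.

```python
import requests  # third-party HTTP client (pip install requests)

def change_request_in_labels(base_url, issue_key, change_request_id, auth):
    """Return True if the expected Change Request appears in the JIRA issue's labels."""
    resp = requests.get(
        f"{base_url}/rest/api/2/issue/{issue_key}",
        params={"fields": "labels"},
        auth=auth,       # (username, password) tuple; placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()
    labels = resp.json()["fields"]["labels"]
    return change_request_id in labels

# Example with placeholder values:
# found = change_request_in_labels("https://jira.example.com", "DEV-123",
#                                  "CRQ000000123", ("user", "password"))
```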

Workload composition

The generated workload was controlled by the Silk Performer Controller and executed by the Silk Performer agent; in the current testbed, both ran on the same machine. To view the number of users of each type and the number of use case transactions, see Performance-workload-details.

The user ramp-up performed by the Silk Performer agent used the Increasing workload model, as shown in the screenshot below. This model is especially useful for finding the load level at which the system crashes, or fails to respond within acceptable response times or error thresholds.

The following image shows the configuration for 1000 users running all use cases in the Increasing scenario.

[Image: Workload configuration for 1000 users across all use cases with the Increasing model (2008_WorkLoad_Configurations_1000Users.png)]
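As an illustration of how an Increasing ramp-up progresses, the sketch below computes a stepped schedule toward the 1000-user peak. The step size and interval are assumed values for illustration only; the actual ramp-up is defined in the Silk Performer workload configuration shown above.

```python
def increasing_ramp(peak_users, step_users, step_secs):
    """Yield (elapsed_seconds, active_users) pairs for a stepped ramp-up."""
    active, elapsed = 0, 0
    while active < peak_users:
        active = min(active + step_users, peak_users)
        yield elapsed, active
        elapsed += step_secs

# Example: ramp to 1000 users in steps of 50 every 60 seconds (assumed values).
for elapsed, users in increasing_ramp(peak_users=1000, step_users=50, step_secs=60):
    print(f"t={elapsed:>5}s  active users={users}")
```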

Use case metrics

The Silk Performer tool measured server response times from the client's perspective, that is, from the time an HTTP request is issued to the time the HTTP response is received from the server. The total time for a given use case therefore spans from the start of the first HTTP request to the receipt of the last HTTP response in that use case's HTTP sequence. With this metric, JavaScript execution time and UI rendering time in the browser are not accounted for.
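A minimal sketch of that measurement, assuming each HTTP exchange in the use case is recorded as a (request_start, response_end) timestamp pair:

```python
def use_case_time(http_timings):
    """Total use case time: from the first HTTP request start to the last
    HTTP response end. Browser-side JavaScript execution and UI rendering
    are not included in this metric."""
    first_request_start = min(start for start, _ in http_timings)
    last_response_end = max(end for _, end in http_timings)
    return last_response_end - first_request_start

# Example: three HTTP exchanges (timestamps in seconds, assumed values).
print(use_case_time([(0.00, 0.42), (0.45, 1.10), (1.12, 1.95)]))  # 1.95
```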

Monitoring during performance tests

System utilization was captured using the Perfmon and NMON utilities on Microsoft Windows and Linux systems, respectively. JVM monitoring was performed using jvisualvm, which provides data points for detecting any memory leak that affects performance over time.
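For the Linux side, NMON supports an unattended capture mode that writes snapshots to a file. The following sketch launches such a capture from Python using nmon's standard flags (-f for file output, -s for the snapshot interval in seconds, -c for the snapshot count); the 30-second interval and 240 snapshots are assumed values sized for a two-hour run.

```python
import subprocess

def start_nmon_capture(interval_secs=30, snapshots=240):
    """Start an NMON capture in file mode; 30s x 240 snapshots covers
    roughly a two-hour test run (assumed values)."""
    return subprocess.Popen(
        ["nmon", "-f", "-s", str(interval_secs), "-c", str(snapshots)]
    )

capture = start_nmon_capture()
# ... run the performance test, then collect the generated .nmon file.
```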

The following image is a sample JVM snapshot of the BMC Helix Multi-Cloud Broker and BMC Helix Innovation Studio server.

[Image: Sample JVM snapshot of the BMC Helix Multi-Cloud Broker and BMC Helix Innovation Studio server (JVM_Snapshot_2008_InnovationSuite1.png)]
