
How the performance tests were conducted


Because BMC Helix Multi-Cloud Broker provides out-of-the-box integrations with multiple applications, it is important to consider the interaction between the source and destination systems while designing the performance test scenarios and test automation scripts. In our testing, the environment included integrations between systems such as Smart IT, Salesforce, and Atlassian JIRA Software.

Micro Focus Silk Performer was used to simulate BMC Helix Multi-Cloud Broker use cases. The following end-user applications were used for the BMC Helix Multi-Cloud Broker performance scenario workload:

  1. Smart IT

  2. JIRA 

  3. Salesforce

As part of the simulation, the Silk Performer script first logs in the users on both the source and destination systems. The source system user then performs an action that triggers a flow in BMC Helix Multi-Cloud Broker, which was configured to poll the given flows at 3-minute intervals. To verify the integration, the script checks whether the destination system properly returned the required data to the source system, polling every 30 seconds. If the required data is not received within 6 minutes, the script's error handling marks the transaction as failed and the virtual user exits.
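
The verification step amounts to a poll-with-timeout loop. The following is a minimal Python sketch of that logic, not the actual Silk Performer (BDL) script; the check_destination callable is an assumption standing in for the script's data check.

    import time

    POLL_INTERVAL_SECONDS = 30   # verification polls every 30 seconds
    TIMEOUT_SECONDS = 6 * 60     # error handling kicks in after 6 minutes

    def verify_integration(check_destination):
        """Poll the destination system until the required data arrives
        or the 6-minute error-handling window expires.

        check_destination is a caller-supplied callable (an assumption
        for this sketch) that returns True once the destination system
        has returned the required data to the source system.
        """
        deadline = time.monotonic() + TIMEOUT_SECONDS
        while time.monotonic() < deadline:
            if check_destination():
                return True          # integration worked as expected
            time.sleep(POLL_INTERVAL_SECONDS)
        # Required data not received in time: the virtual user marks the
        # transaction as failed and exits.
        return False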

The following image provides a snippet of the Silk Performer script for DevOps TC1 - Create Change. In this flow, the JIRA application creates an issue and expects a Change Request to be included as part of a label. If the label is found, the integration is marked as successful. To track virtual user activity for debugging purposes, custom logging statements are added to the script. Simulation scripts are prepared with user-defined parameters to handle multi-tenancy.
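
Although the actual snippet is a Silk Performer (BDL) script, the label check it performs can be sketched in Python against the standard JIRA REST API. The JIRA base URL, credentials, and label format below are illustrative assumptions.

    import requests

    JIRA_BASE_URL = "https://jira.example.com"   # assumed JIRA instance
    AUTH = ("perf_user", "password")             # assumed test credentials

    def issue_has_change_request_label(issue_key):
        """Fetch the issue's labels and check for the Change Request
        label that the integration is expected to add."""
        resp = requests.get(
            f"{JIRA_BASE_URL}/rest/api/2/issue/{issue_key}",
            params={"fields": "labels"},
            auth=AUTH,
        )
        resp.raise_for_status()
        labels = resp.json()["fields"]["labels"]
        # Custom logging statement, mirroring the debugging output
        # added to the Silk Performer script.
        print(f"{issue_key}: labels={labels}")
        # Assumption: the label carries the change request number;
        # adjust to the actual label format.
        return any(label.startswith("CRQ") for label in labels)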

Workload composition

The generated workload was controlled by the Silk Performer Controller and executed by the Silk Performer agent. In the current test bed, the Controller and agent were on the same machine. To view the number of users of each type and the number of use case transactions, see Performance workload details.

The user ramp-up performed by the Silk Performer agent used the Increasing model, as shown in the screenshot below. This workload model is especially useful when you want to find out at which load level your system crashes or stops responding within acceptable response time or error thresholds.

The following image shows 1000 users configured with all use cases in the Increasing scenario.
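
A step-loading schedule of this kind can be described as a sequence of (elapsed time, active users) pairs. The following Python sketch generates such a schedule; the step size and step duration are illustrative assumptions, not the values configured in the tests.

    def increasing_ramp(max_users, step_users, step_seconds):
        """Yield (elapsed_seconds, active_users) pairs for a step-wise
        Increasing workload: users are added in fixed increments until
        the configured maximum is reached."""
        elapsed, active = 0, 0
        while active < max_users:
            active = min(active + step_users, max_users)
            yield elapsed, active
            elapsed += step_seconds

    # Example: ramp to 1000 users in steps of 50 every 2 minutes
    # (assumed step values).
    for t, users in increasing_ramp(1000, 50, 120):
        print(f"t={t:>5}s  active users={users}")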


Use case metrics

The Silk Performer tool measured server response times from the client's perspective, that is, from the time an HTTP request is issued to the time the HTTP response is received from the server. The total time of a given use case therefore runs from when the first HTTP request is sent to when the last HTTP response is received for that use case's HTTP sequence. With this metric, JavaScript execution time and UI rendering time in the browser are not accounted for.
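
The following Python sketch illustrates this measurement approach for an arbitrary HTTP sequence: the clock starts when the first request is issued and stops when the last response is received, so browser-side JavaScript execution and rendering are never included. The URLs are placeholder assumptions, not the actual use-case sequence.

    import time
    import requests

    def measure_use_case(urls):
        """Time a use case the way the metric is described: from the
        first HTTP request sent to the last HTTP response received."""
        session = requests.Session()
        start = time.monotonic()   # first HTTP request is about to be issued
        for url in urls:
            session.get(url).raise_for_status()
        return time.monotonic() - start   # last HTTP response received

    # Example with placeholder URLs (assumed for illustration).
    elapsed = measure_use_case([
        "https://smartit.example.com/app/login",
        "https://smartit.example.com/app/ticket/create",
    ])
    print(f"Use case server response time: {elapsed:.2f}s")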

Monitoring during performance tests

System utilization was captured using the Perfmon and NMON utilities on Microsoft Windows and Linux systems, respectively. JVM monitoring was performed using jvisualvm, which provides data points for detecting any memory leak that affects performance over time.
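
For reference, OS-level collection of this kind can be started from a small wrapper. The sketch below launches NMON on Linux with its standard flags (-f for file capture, -s for the snapshot interval in seconds, -c for the snapshot count); the interval and count values are assumptions, not the settings used in the tests.

    import subprocess

    def start_nmon(interval_seconds=30, snapshots=120):
        """Start NMON in file-capture mode: one snapshot every
        interval_seconds, for a total of snapshots samples."""
        return subprocess.Popen(
            ["nmon", "-f", "-s", str(interval_seconds), "-c", str(snapshots)]
        )

    # Capture at 30-second intervals for one hour (assumed window).
    proc = start_nmon(30, 120)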

The following image is a sample JVM snapshot of the BMC Helix Platform server that hosts BMC Helix Multi-Cloud Broker.

