BMC Remedy 19.08 solution benchmark methodology
Test environment setup
To conduct the Remedy solution benchmark, the BMC benchmarking team deployed the following applications in a customer-like environment in the BMC performance benchmarking lab:
- BMC Incident Management
- BMC Problem Management
- BMC Change Management
- BMC Asset Management
- BMC Knowledge Management
- Remedy Smart Reporting
- BMC Service Request Management
- BMC Service Level Management
- BMC CMDB
- Remedy Single Sign-On
The test environment setup in the BMC performance benchmarking lab comprised the following components:
- An F5 load balancer
- Three user-facing AR System servers
- One admin AR System server and one AR System server for full text search (FTS)
- A three-node clustered Remedy Mid-Tier
- One RSSO server
- One Reporting server
- One Remedy ECCS server
- One MS Exchange server
Solution benchmarking test scenarios and performance metrics
The BMC benchmarking team collected end user response times, server response times, and server resource utilization metrics in the following test scenarios:
| Test scenario | Online user workload (including ITSM, SRM, and Reporting) | Number of tenants | BMC CMDB processes running in continuous mode |
|---|---|---|---|
| A | 2,100 users | 1 | Yes |
The BMC benchmarking team collected end user response times for the following key actions:
- Log on to home page
- Open incident in New mode
- Modify incident to resolve
- Create incident (redisplay after submitting)
- Open change in New mode
- Create change
- Open Request Entry console
- Create service request with six questions with mapping
- Search Knowledge for articles
- Search Knowledge for large PDFs
- View Overview console for assigned to All My Groups
For details about how timings were measured, see the Measurement of response times section.
In addition to end user response times, the BMC benchmarking team captured server response times for 102 user actions, along with resource utilization data for all servers.
Workload
For the solution benchmark, a mixed workload was applied to the system to simulate various workload scenarios. The mixed workload comprised the following:
- Workload from Remedy ITSM users
- Workload from BMC Service Request Management users
- Workload from BMC Knowledge Management users
- Workload from Remedy Smart Reporting users
- Workload from BMC Email Engine (from Remedy ITSM and BMC Service Request Management workloads)
- Workload from BMC Service Level Management (from Remedy ITSM and BMC Service Request Management workloads)
- Workload from BMC CMDB batch jobs, Normalization Engine, and Reconciliation Engine
The nominal workload environment was defined by the distribution of concurrent users and transaction rates among the user scenarios. This nominal workload was used as the baseline for benchmarking the performance and scalability of the Remedy solutions consistently over time.
The workload from BMC Service Level Management and BMC Email Engine was based on the Remedy ITSM and BMC Service Request Management workloads. BMC Service Level Management targets and milestones were triggered by specific Remedy ITSM terms and conditions. Email messages were created based on Remedy ITSM, BMC Service Request Management, and BMC Service Level Management workloads.
The workload from BMC CMDB batch jobs was executed during a 1-hour simulation. The BMC CMDB batch jobs created configuration items (CIs) that were normalized, reconciled, and merged into a BMC.Asset data set.
Nominal Remedy online user workload distribution
The workload was split between the Remedy ITSM and BMC Service Request Management applications at 63% and 37% of the total workload, respectively. The tables that follow detail this split for Remedy ITSM and BMC Service Request Management.
The following table describes the workload split for Remedy ITSM:
The following table describes the workload split for BMC Service Request Management:
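As a minimal sketch of the split described above, the following derives per-application concurrent user counts from the nominal 63%/37% ratio. The 2,100-user figure comes from test scenario A; the helper name and rounding approach are illustrative assumptions, not part of the benchmark tooling.

```python
# Sketch: derive per-application user counts from the nominal
# 63% / 37% ITSM / SRM workload split. Illustrative only.

def split_workload(total_users, itsm_share=0.63):
    """Split a total concurrent-user count between ITSM and SRM."""
    itsm_users = round(total_users * itsm_share)
    srm_users = total_users - itsm_users  # remainder keeps the total exact
    return itsm_users, srm_users

itsm, srm = split_workload(2100)  # 2,100 users from test scenario A
print(itsm, srm)  # prints: 1323 777
```

Taking the SRM count as the remainder (rather than rounding both shares independently) guarantees the two parts always sum to the scenario's total user count.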
BMC Service Level Management and email notification workload distribution
Email notifications were sent when an incident was created or modified, when a service request was created, and when a BMC Service Level Management milestone was missed. BMC Service Level Management targets were also triggered under the same conditions. The following table lists the number of email notifications generated and BMC Service Level Management targets matched for each created incident, modified incident, and created service request. This workload was generated automatically on the Remedy AR System server.
BMC CMDB workload
BMC CMDB processes were scheduled to run in continuous mode to normalize and reconcile CIs.
During the 1-hour simulation, a total of 7,500 CIs were normalized and reconciled for each tenant. CI data changes followed this schedule:
- 10 minutes after test kickoff, creation of 750 new CIs began (10% of the total)
- 20 minutes after test kickoff, updates to 2,240 CIs began (about 30% of the total)
- 30 minutes after test kickoff, updates to another 4,510 CIs began (about 60% of the total)
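The schedule above can be captured as a small data structure, which also confirms that the three batches account for all 7,500 CIs per tenant. The tuple layout and names are illustrative assumptions; the offsets and counts come from the schedule itself.

```python
# Sketch: the CI data-change schedule, as (start offset in minutes,
# CI count, operation) tuples. Names are illustrative; the numbers
# come from the benchmark's 1-hour CMDB workload.

schedule = [
    (10, 750, "create"),   # 10% of 7,500
    (20, 2240, "update"),  # about 30% of 7,500
    (30, 4510, "update"),  # about 60% of 7,500
]

total = sum(count for _, count, _ in schedule)
print(total)  # prints: 7500
```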
Online user workload simulation
User session control
Remedy applications are user-session-based. A user session begins when a user logs on to the Remedy home page and ends when the user logs off. Different types of users have different logon behavior. User sessions were simulated in the solution benchmark testing as follows:
- Remedy ITSM users logged on once and performed a specified number of transactions for a business case before logging off. Their user sessions lasted for the entire duration of the simulation.
- Other users logged on, performed one business transaction, and then logged off. Their user sessions lasted for one business transaction only.
This setup closely simulates the way in which real-world users interact with the Remedy ITSM, BMC Service Request Management, and Remedy Smart Reporting applications.
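The two session patterns can be sketched as follows. `RemedyClient` and the `create_incident` transaction are hypothetical stand-ins for the HTTP calls a load script would issue against the Mid-Tier; they are not a BMC API.

```python
# Sketch of the two session patterns: one long-lived session for
# ITSM users vs. one session per transaction for other users.
# RemedyClient is a hypothetical stand-in, not a BMC API.

class RemedyClient:
    """Records logon/logoff/transaction events for illustration."""
    def __init__(self):
        self.events = []
    def log_on(self):
        self.events.append("log_on")
    def log_off(self):
        self.events.append("log_off")

def itsm_session(client, transactions):
    # ITSM pattern: log on once, run all transactions, log off.
    client.log_on()
    for tx in transactions:
        tx(client)
    client.log_off()

def single_tx_session(client, tx):
    # Other users: one short session per business transaction.
    client.log_on()
    tx(client)
    client.log_off()

create_incident = lambda c: c.events.append("create_incident")

long_lived = RemedyClient()
itsm_session(long_lived, [create_incident] * 3)
print(long_lived.events)
# prints: ['log_on', 'create_incident', 'create_incident', 'create_incident', 'log_off']
```

The difference matters for load modeling: the long-lived pattern holds sessions (and server-side session state) open for the whole run, while the per-transaction pattern exercises the logon and logoff paths far more heavily.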
The load driver and its deployment
To automate workload generation, Silk Performer, a load automation tool, was used to simulate multiple virtual users who concurrently interact with the system under test. Silk Performer simulates user activity by generating HTTP requests and processing HTTP responses. In addition, Silk Performer timers were used to measure server response times for the user actions executed in those user scenarios.
In the BMC performance benchmarking lab, Silk Performer was deployed on a dedicated physical machine with 20 CPU cores and 192 GB of RAM, connected over the LAN to the F5 load balancer in the test environment.
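The per-action timing approach can be illustrated with a standard-library sketch, analogous to wrapping each user action in a timer the way the Silk Performer timers described above do. The function and dictionary names are illustrative assumptions, and the `time.sleep` call stands in for a real HTTP round trip.

```python
# Sketch: measure per-action response times by timing each user
# action, analogous to Silk Performer timers. Standard library only.
import time

def timed_action(name, action, timings):
    """Run one user action and record its wall-clock duration."""
    start = time.perf_counter()
    result = action()
    timings.setdefault(name, []).append(time.perf_counter() - start)
    return result

timings = {}
# time.sleep stands in for an HTTP request/response round trip.
timed_action("open_incident", lambda: time.sleep(0.01), timings)

runs = timings["open_incident"]
print(f"open_incident: {sum(runs) / len(runs):.3f}s average over {len(runs)} run(s)")
```

Collecting a list of samples per action name, rather than a single value, makes it straightforward to report the averages and percentiles typically used in benchmark results.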