This documentation supports the 19.08 version of Remedy and applies only to the on-premises deployment model.



Determining workload

In addition to designing the test scenarios and deciding what data to use, you must determine the number of users for running the scenarios and how fast the scenarios are executed. Service level agreements help with planning. The workload drives the performance test, and the benchmarking goals drive the workload setup.

This topic provides guidelines for the following aspects of determining the workload: setting transaction pacing for throughput, setting the user percentage mix, determining caching requirements, generating throughput by using wait times, successively ramping up users, maintaining a steady state, and licensing applications and users.

Set transaction pacing for throughput

Transaction pacing is the number of transactions that an end user performs in an hour. This pacing produces the throughput for the entire performance test.

Throughput is the "how fast" aspect of the workload: the speed at which scenarios are run. To set the speed for one user performing a scenario, use one script for each scenario. Each scenario can have a different throughput.

For example, consider three scenarios: create, search, and view. Creating is the action that is performed most often. After gathering statistics, you find that approximately 100 entries are created every hour in your production environment, so the combined pacing of all users running the create scenario must produce 100 transactions per hour.
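Continuing the example, per-user pacing can be derived by dividing the target hourly throughput by the number of users assigned to the scenario. The following is a minimal sketch; the figure of 50 virtual users on the create scenario is an illustrative assumption, not a product value:

```python
def transactions_per_user_per_hour(target_per_hour, users):
    """Per-user pacing needed so that `users` users together
    produce `target_per_hour` transactions every hour."""
    return target_per_hour / users

# Assumed figures: 100 creates/hour observed in production,
# 50 virtual users assigned to the create scenario.
pacing = transactions_per_user_per_hour(100, 50)
print(pacing)  # 2.0 transactions per user per hour
```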

Set the user percentage mix

The percentage of users running scenarios is the "how many" aspect of the workload. To compare when users are scaled, use a percentage of the total users instead of a fixed number. For example, if 20 percent of the total users perform the Create action, that is 100 of 500 total users and 200 of 1,000 total users.

Also, ensure that users perform a realistic mix of View and Create transactions, such as 70 percent View and 30 percent Create.
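A percentage-based mix scales cleanly with the total user count. The following sketch converts the 70/30 View/Create mix from the text into user counts at two different scales (the helper name is illustrative):

```python
def users_per_scenario(total_users, mix):
    """Convert a percentage mix into user counts per scenario.

    mix: dict mapping scenario name to its percentage of total users.
    """
    return {name: round(total_users * pct / 100) for name, pct in mix.items()}

mix = {"View": 70, "Create": 30}  # example mix from the text
print(users_per_scenario(500, mix))   # {'View': 350, 'Create': 150}
print(users_per_scenario(1000, mix))  # {'View': 700, 'Create': 300}
```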

Ensure that test users have the same privileges as real-life users. For example, do not give administrator privileges to end users.

The following table is an example of a summary for planning transaction pacing and user percentage mix for the Service Request Management and Incident Management applications:

Transaction pacing and user percentage mix table

Transaction name               Percentage of users    Transaction pacing
                                                      (transactions/user/hour)
-----------------------------  ---------------------  ------------------------
Global search                           8                        5
Update change                           2                        2
Search change                           2                        5
Create change                           2                        2
View service request                   10                        2
Update incident                        10                        6
Create service request                 12                        3
Browse service categories               2                        5
Create incident                         9                        6
Search for incidents                   15                        6
Lookup service context                  1                        9
Update work order                       9                        6
Search service request                 15                        2
Knowledge search and view               3                        3
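A table like this one implies a total hourly throughput once a total user count is chosen. The following sketch checks that the percentages sum to 100 and computes the overall transactions per hour; the figure of 1,000 total users is an illustrative assumption:

```python
# (transaction, percent of users, transactions/user/hour), taken from the table
table = [
    ("Global search", 8, 5), ("Update change", 2, 2), ("Search change", 2, 5),
    ("Create change", 2, 2), ("View service request", 10, 2),
    ("Update incident", 10, 6), ("Create service request", 12, 3),
    ("Browse service categories", 2, 5), ("Create incident", 9, 6),
    ("Search for incidents", 15, 6), ("Lookup service context", 1, 9),
    ("Update work order", 9, 6), ("Search service request", 15, 2),
    ("Knowledge search and view", 3, 3),
]

assert sum(pct for _, pct, _ in table) == 100  # the mix must cover all users

total_users = 1000  # assumed scale; the table itself is scale-independent
hourly = {name: total_users * pct / 100 * pacing for name, pct, pacing in table}
print(sum(hourly.values()))  # 4300.0 transactions per hour across the mix
```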

Determine caching requirements

BMC Remedy AR System uses caching in the mid tier and the AR System server; the web browser also caches content. In a load testing environment, browser caching can be simulated. When a script repeats during a performance test, consider each end user who logs back on as a return user. In a real-life company, most employees log on to an application once a day and do not clear their browser cache at the end of the day. Browser caching reduces the number of HTTP requests and network traffic, because items such as images do not need to be requested each time a page is displayed.

In the mid tier, caching is done regardless of whether browser caching is enabled. Mid tier caching is done by AR System group permission, and the first access always takes the longest time. To fill the mid tier and AR System server caches, run a sanity check before a performance test and ramp up users.

Generate throughput by using wait times

You can also achieve workload pacing by adding user wait times within the test scenarios. Ensure that the wait times are designed to produce the assigned throughput.
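One common way to derive a wait time is to treat the pacing interval (3600 seconds divided by the per-user hourly pacing) as the full iteration length, and wait for whatever time the scenario itself does not consume. This is a sketch of that arithmetic; the 90-second scenario duration is an illustrative assumption:

```python
def wait_time_seconds(pacing_per_hour, scenario_duration_s):
    """Wait to insert after each iteration so that one user achieves
    the assigned pacing; the pacing interval is the full iteration."""
    iteration_s = 3600 / pacing_per_hour
    wait = iteration_s - scenario_duration_s
    if wait < 0:
        raise ValueError("Scenario runs longer than the pacing interval")
    return wait

# Assumed figures: 6 transactions/user/hour, scenario takes 90 s to execute
print(wait_time_seconds(6, 90))  # 510.0 seconds of wait per iteration
```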

Successively ramp up users

Ramping up users one after another, instead of all at once, paces their logons to your application at the beginning of a performance test. Use successive ramp-ups to prevent overloading your test environment, build up your cache, and avoid scenario lockstepping.

The following figure shows a 600-user ramp-up at a pace of one user every three seconds. The flat line is the steady-state period.

Ramping up users to a steady-state period
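The ramp-up pace determines how long it takes before every user is online and the steady-state period can begin. A minimal sketch of that calculation, using the figures from the example above:

```python
def rampup_duration_minutes(users, seconds_per_user):
    """Total time to bring all users online at a fixed ramp-up pace."""
    return users * seconds_per_user / 60

# The example above: 600 users, one new user every 3 seconds
print(rampup_duration_minutes(600, 3))  # 30.0 minutes before steady state
```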

Maintain a steady state

Ramping up leads to a steady-state period in which every user has a chance to log on once. During the steady-state period, make your measurements. Depending on your throughput, maintaining at least a one-hour steady-state period provides a good set of data points. The steady state stops before users start ramping down to end the test.

License applications and users

All BMC applications require a license, and various users require different types of licenses. Ensure that sufficient licenses are available for the workload.

The load generation tool also requires licenses and might restrict the number of virtual users. Verify that the load generation tool does not limit the workload.
