
Developing test scenarios

Test scenarios are hypothetical use cases on which scripts for performance benchmarking tests are based. When developing test scenarios, use the following guidelines:

Start small

Even though your organization might deploy several service support and delivery applications, focus your first performance benchmarking project on one application. Subsequent benchmarking projects can focus on application integration.

For example, the applications in the BMC Remedy IT Service Management (ITSM) Suite are designed to function together and with other BMC products, such as BMC Atrium CMDB, BMC Service Level Management, and BMC Service Request Management. Benchmark each application individually if possible. While designing your tests, remember that all of these applications will eventually be integrated into one test. The individual tests should share the same set of foundation data, such as people, companies, locations, regions, and product catalogs.

Select typical business transactions

Most applications support many actions, and you could write test scripts for all of them. Instead, focus on the common actions that best simulate how your organization uses an application.

For example, in BMC Service Request Management, users update their application preferences but not on a daily basis. Instead, users frequently use the Service Request Console as the starting point for most activities. Scenarios that include Service Request Console interaction have greater benchmarking value than scenarios that include preference changes. Use the "daily basis" rule to reduce the number of scenarios that you include in your tests.

Also, consider the type of users involved in each scenario. For example, scenarios can include administrators or support users. In Incident Management, support users can create tickets on behalf of end users and search for, modify, and resolve incidents. Support users are more active than administrators. Scripting support actions instead of administrative actions is a better way to benchmark typical use for this application.
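As a rough illustration of the "daily basis" rule, the following Python sketch records candidate scenarios with an assumed share of daily activity and keeps only those above a chosen threshold. The scenario names, percentages, and threshold are hypothetical, not values taken from any BMC application.

    # Hypothetical workload mix: each scenario with the share of daily activity
    # it represents. Scenarios below an agreed threshold are dropped from the test.
    CANDIDATE_SCENARIOS = {
        "open_service_request_console": 40.0,   # most activity starts here
        "create_service_request": 25.0,
        "search_service_requests": 20.0,
        "modify_open_request": 10.0,
        "update_preferences": 0.5,               # rare; fails the "daily basis" rule
    }

    THRESHOLD_PERCENT = 5.0

    def select_scenarios(candidates, threshold):
        """Keep only scenarios that occur often enough to be worth scripting."""
        return {name: share for name, share in candidates.items() if share >= threshold}

    if __name__ == "__main__":
        for name, share in select_scenarios(CANDIDATE_SCENARIOS, THRESHOLD_PERCENT).items():
            print(f"{name}: {share}% of daily workload")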

Imitate real-life usage

Effective test scenarios reflect how users actually use their applications.

To determine how applications are used in a production environment, look at the web server logs. For new applications, typical actions include creating, searching for, modifying, and viewing records.
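One lightweight way to mine the web server logs is to count the most frequently requested URLs. The following Python sketch assumes an access log in the common/combined log format; the log path, and how URLs map to application actions, depend on your web server and deployment.

    import re
    from collections import Counter

    # Minimal sketch: tally the most frequently requested URLs from a web server
    # access log. The log path and the URL pattern are assumptions; adapt them
    # to your environment.
    LOG_PATH = "access.log"
    REQUEST_RE = re.compile(r'"(?:GET|POST) (\S+)')

    counts = Counter()
    with open(LOG_PATH) as log:
        for line in log:
            match = REQUEST_RE.search(line)
            if match:
                counts[match.group(1)] += 1

    # The most common URLs point to the actions worth scripting.
    for url, hits in counts.most_common(10):
        print(f"{hits:8d}  {url}")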

For example, end users typically access the BMC Service Request Management application only through the Service Request Console. From there, they can create service requests, view and modify their open requests, search for service requests, modify their preferences, and provide feedback. In your test scenarios, mirror the steps that real-life users perform.

Also, match the application setup data and configuration with your production implementation. For example, in BMC Service Request Management, configure the catalogs, entitlements, and service questions.

Ensure that the test environment has production data volume. Test scenarios and environments that parallel actual implementations yield more accurate results. For more information, see Evaluating volume with production or synthetic data.

Write a script for each scenario

Ensure that each test scenario has its own script. This gives you better control over how many virtual users execute each script.

For example, in an ITSM performance benchmarking test, support users can modify an incident to Resolved status, search for records by using global search, create an incident, create a change, search for a change, modify a change to closure, update a work order, and search for and view Knowledge Base articles. Each of these actions can have its own script.
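How scripts are organized depends on your load-generation tool, but the idea is the same in any of them: one script per scenario, each with its own virtual-user allocation. The following Python sketch is a tool-neutral illustration; the scenario names and user counts are hypothetical.

    # Minimal sketch: one function per scenario so that each script can be
    # assigned its own number of virtual users. Names and counts are illustrative.
    def modify_incident_to_resolved():
        """Steps for a support user resolving an assigned incident."""

    def global_search_records():
        """Steps for searching records through global search."""

    def create_incident():
        """Steps for creating a new incident."""

    # Virtual users per script; tuning one scenario does not affect the others.
    SCRIPT_ALLOCATION = {
        modify_incident_to_resolved: 40,
        global_search_records: 25,
        create_incident: 20,
    }

    for script, virtual_users in SCRIPT_ALLOCATION.items():
        print(f"{script.__name__}: {virtual_users} virtual users")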

Prepopulate transaction data

Some actions, such as modifying a record, require preexisting data. For example, in Incident Management, support users can view and modify their assigned tickets. These scenarios require tickets to preexist or to be generated dynamically during performance tests. Consider the need for preexisting tickets when designing the test database. For more information, see Evaluating volume with production or synthetic data.

Generating tickets during a view or modify scenario would require adding a create action to the script. Preexisting data is preferable because it follows the autonomous scenario rule: one main action per script. If you use production data, determine which support users already have assigned tickets. For more information, see Evaluating volume with production or synthetic data.
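The following Python sketch shows one way to prepopulate data for view and modify scenarios: generate a pool of ticket IDs assigned to support users before the test starts and save them to a file that the scripts can read. The user names, ticket ID format, and counts are assumptions to adapt to your own data model.

    import csv

    # Minimal sketch: build a pool of preexisting tickets assigned to support
    # users before the test starts, so view/modify scripts never have to create
    # data themselves. User names, ticket ID format, and counts are assumptions.
    SUPPORT_USERS = [f"support_user_{n:03d}" for n in range(1, 51)]
    TICKETS_PER_USER = 20

    with open("preexisting_tickets.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["support_user", "ticket_id"])
        ticket_number = 1
        for user in SUPPORT_USERS:
            for _ in range(TICKETS_PER_USER):
                writer.writerow([user, f"INC{ticket_number:09d}"])
                ticket_number += 1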

Parameterize user input

Ensure that test scripts are portable. Portable scripts are not tied to specific users, servers, or input.

Note

  • Parameterize some of the data that end users supply.
  • Make the data used to query other information dynamic to ensure that cached data is not used.

For example, all ITSM applications require a web server host name and port number, an AR System server name, and a logon user name and password. Hard-coding these values into test scripts makes it difficult to change test environments. To keep scripts portable and easy to update, store these values in program variables. Also, store script input data in text files or program variables.
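The following Python sketch illustrates the idea in a tool-neutral way: connection details come from environment variables (or a properties file) and logon data comes from a text file, so nothing is hard-coded in the script. The variable names and the test_users.csv file are hypothetical.

    import csv
    import os

    # Minimal sketch: keep environment details in environment variables and
    # per-user logon data in a CSV file instead of hard-coding them.
    # Variable names, defaults, and the data file are assumptions.
    WEB_HOST = os.environ.get("BENCH_WEB_HOST", "midtier.example.com")
    WEB_PORT = os.environ.get("BENCH_WEB_PORT", "8080")
    AR_SERVER = os.environ.get("BENCH_AR_SERVER", "arserver01")

    def load_users(path="test_users.csv"):
        """Each row holds a login name and password for one virtual user."""
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    if __name__ == "__main__":
        users = load_users()
        print(f"Target: http://{WEB_HOST}:{WEB_PORT} (AR server {AR_SERVER})")
        print(f"Loaded {len(users)} test users")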

Control random user input

Although scripts typically include random input to simulate variability, control randomness to achieve a known output or a known output range. This control is especially critical for searches. To ensure that searches do not return unlimited results, design your baseline data appropriately. Controlled input data also makes script runs repeatable.

For example, when a support user logs in, the overview console lists all tickets assigned to the logged-on user. In a synthetic data environment, ensure that none of the support users in the test has thousands of assigned tickets. In a production environment, it is unlikely that a support user would have thousands of assigned tickets.
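A simple way to control randomness is to draw input from a fixed, bounded pool with a seeded random generator, as in the following Python sketch. The seed makes runs repeatable, and the pool contents (illustrative here) bound the size of any search result.

    import random

    # Minimal sketch: draw search input from a fixed, bounded pool with a seeded
    # random generator so each run is repeatable and no search can return an
    # unbounded result set. Pool contents and sizes are illustrative.
    rng = random.Random(2024)                      # fixed seed -> repeatable runs

    # Each support user in the baseline data has at most a few dozen tickets.
    SUPPORT_USERS = [f"support_user_{n:03d}" for n in range(1, 51)]
    SEARCH_KEYWORDS = ["printer", "vpn", "password reset", "email", "laptop"]

    def next_search_input():
        """Pick a user and keyword whose result set size is known in advance."""
        return rng.choice(SUPPORT_USERS), rng.choice(SEARCH_KEYWORDS)

    for _ in range(5):
        user, keyword = next_search_input()
        print(f"{user} searches for '{keyword}'")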

Determine transaction contents

A load-generation transaction is a set of actions. When developing test scenarios, determine which actions belong in each scripted load-generation transaction.

Although a script typically has only one main action, several steps — such as logging on — are usually required to get to that action. In cases like this, make the following decisions:

  • Whether all the steps belong in one transaction
  • Whether the transaction repeats throughout the performance test

For example, a support user in an Incident Management performance benchmarking test might not need to log on at the beginning of each iteration. In real life, a support user logs on once and stays on for the rest of the workday. An end user often logs on, performs a task, and then logs off. For support users, the repeated transaction consists only of the main task, not one-time actions such as logging on and off.
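The following Python sketch shows the resulting structure for a support-user script: log on once, repeat only the main task in the measured loop, and log off once at the end. The session object and step functions are placeholders, not BMC or load-tool APIs.

    # Minimal sketch of a support-user script: log on once, repeat only the main
    # transaction each iteration, and log off once at the end.
    def log_on():
        print("log on (one-time action)")
        return {"session": "token"}

    def log_off(session):
        print("log off (one-time action)")

    def resolve_incident(session):
        print("open, update, and resolve an assigned incident")

    def run_support_user(iterations=10):
        session = log_on()                   # outside the repeated transaction
        try:
            for _ in range(iterations):
                resolve_incident(session)    # the repeated transaction: main task only
        finally:
            log_off(session)

    run_support_user()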

Include wait times

Script a scenario exactly as end users manually perform it. Include all the steps leading up to the main action. Also, insert wait times — periods when users are idle while thinking or while waiting for applications to process their commands — throughout the script. Wait times can be a range of minimum and maximum values and correspond to the throughput rate. For more information, see Determining workload.
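Most load-generation tools provide a built-in think-time or pacing function; the following Python sketch shows the equivalent idea with a randomized wait drawn from a minimum/maximum range. The range values are assumptions and should be derived from your target throughput rate.

    import random
    import time

    # Minimal sketch: insert a think time drawn from a minimum/maximum range
    # between steps, the way a real user pauses. The range values are assumptions.
    rng = random.Random(7)

    def think_time(minimum_s=5, maximum_s=15):
        """Pause for a random interval within the configured range."""
        time.sleep(rng.uniform(minimum_s, maximum_s))

    def create_incident():
        print("open the Incident form")
        think_time()                      # user reads the form
        print("fill in the incident details")
        think_time(10, 30)                # longer pause while typing
        print("save the incident")

    create_incident()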

Test scripts thoroughly

Test all scripts before you use them in performance tests. Scripts typically run for multiple users and multiple iterations during a performance test. Validate each script in stages, as follows (a sketch of this sequence appears after the list):

  1. Run a single iteration for a single user.
  2. Check for errors.
  3. Run multiple iterations for a single user.
  4. Check for errors.
  5. Run multiple iterations for multiple users.
  6. Check for errors.
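The following Python sketch outlines this scale-up sequence in a tool-neutral way; the script under test and the user and iteration counts are placeholders.

    import threading

    # Minimal sketch of the scale-up sequence: 1 user x 1 iteration, then
    # 1 user x many iterations, then many users x many iterations, checking for
    # errors after each stage. The script under test is a placeholder.
    def script_under_test(user_id, iteration):
        """Replace with the scenario script being validated."""
        print(f"user {user_id}, iteration {iteration}")

    def run_stage(users, iterations):
        errors = []

        def run_user(user_id):
            for i in range(1, iterations + 1):
                try:
                    script_under_test(user_id, i)
                except Exception as exc:          # collect, do not hide, failures
                    errors.append((user_id, i, exc))

        threads = [threading.Thread(target=run_user, args=(u,)) for u in range(1, users + 1)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(f"stage {users} user(s) x {iterations} iteration(s): {len(errors)} error(s)")

    for users, iterations in [(1, 1), (1, 5), (5, 5)]:   # check errors after each stage
        run_stage(users, iterations)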