This documentation supports the 20.02 version of Remedy Deployment.

Developing a performance problem statement and collecting performance data

Performance tuning initially requires a clear problem statement, including the following common metrics:

  • Throughput — Frequency of an operation over time, such as the number of tickets created in an hour, or the number of configuration items (CIs) processed in an hour.
  • Response time — Seconds between a significant operation (for example, clicking Save to create an incident ticket) and the time that the operation is completed and system control is returned to the user.
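As an illustration of the two metrics above, the following sketch computes throughput and response time from recorded start and end times. The operation names and timestamps are hypothetical, purely for demonstration:

```python
from datetime import datetime

# Hypothetical samples: (start, end) timestamps for one "create incident
# ticket" operation each, collected over a one-hour observation window.
samples = [
    (datetime(2020, 3, 2, 10, 0, 0), datetime(2020, 3, 2, 10, 0, 4)),
    (datetime(2020, 3, 2, 10, 15, 0), datetime(2020, 3, 2, 10, 15, 6)),
    (datetime(2020, 3, 2, 10, 40, 0), datetime(2020, 3, 2, 10, 40, 5)),
]

# Throughput: operations completed per hour of observation.
window_hours = 1.0
throughput = len(samples) / window_hours

# Response time: seconds between the significant operation (clicking Save)
# and the moment control is returned to the user.
response_times = [(end - start).total_seconds() for start, end in samples]
avg_response = sum(response_times) / len(response_times)

print(f"Throughput: {throughput:.0f} operations/hour")
print(f"Average response time: {avg_response:.1f} s")
```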

A problem statement should also include a list of recent changes that might have led to the current state. In production-related issues, change control or other sources can often provide a log of system changes that took place before a problem began. Such a change history might not point to the root cause of a problem, but provides valuable background data.

Collecting response time data

Response time is often described subjectively, with a statement such as “the system seems slow.” However, to identify a problem effectively, multiple users with different client configurations should manually time the operations.

To measure response time, collect event times from multiple users who are experiencing the problem. To form a complete picture, capture various data points that impact performance. Record the following data:

  • Client configuration and browser information for each user — Record the browser type and version. For more information, see Optimum client configuration.
  • Client location — Record the latency that each user experiences at the application layer, for example by measuring http-ping and TCP ping latency.
  • Timestamp for each timing test — Because caching might cause the first timing test to take longer than subsequent tests, make a separate note of these times for each user.
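To measure application-layer latency in the style of http-ping, repeated HTTP requests can be timed from the client's location. The sketch below is an assumption-laden illustration (the `http_ping` helper is not a Remedy tool): it times HEAD requests against a throwaway local server; in practice you would point it at the URL the affected users actually load:

```python
import http.client
import statistics
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def http_ping(host, port, path="/", count=5):
    """Time repeated HTTP HEAD requests, similar in spirit to http-ping.
    Returns a list of round-trip latencies in milliseconds."""
    latencies = []
    for _ in range(count):
        conn = http.client.HTTPConnection(host, port, timeout=5)
        start = time.perf_counter()
        conn.request("HEAD", path)
        conn.getresponse().read()
        latencies.append((time.perf_counter() - start) * 1000.0)
        conn.close()
    return latencies

# Throwaway local endpoint so the demo is self-contained.
class _Handler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(200)
        self.end_headers()
    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

latencies = http_ping("127.0.0.1", server.server_address[1])
print(f"min/avg/max latency: {min(latencies):.1f}/"
      f"{statistics.mean(latencies):.1f}/{max(latencies):.1f} ms")
server.shutdown()
```

Reporting min/avg/max per location helps separate network latency from server-side processing time when comparing users in different offices.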

The following example shows a spreadsheet for capturing response time data. To instruct the testers exactly what to do and what to time, describe each test in detail. All testers should use an equivalent timing device, such as a stopwatch or similar tool.
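Once the timings are collected, they can be summarized per tester and per test, keeping the first (cold-cache) run separate because, as noted above, caching can make the first timing test take longer than subsequent tests. A minimal sketch, with hypothetical tester names and values:

```python
import statistics

# Hypothetical recorded timings in seconds, keyed by (tester, test name).
# The first value in each list is the first (cold-cache) run.
timings = {
    ("user_a", "Create incident"): [9.8, 4.1, 4.3, 4.0],
    ("user_b", "Create incident"): [11.2, 5.0, 4.8, 5.1],
}

for (tester, test), runs in timings.items():
    first, rest = runs[0], runs[1:]
    print(f"{tester} / {test}: first run {first:.1f} s, "
          f"subsequent avg {statistics.mean(rest):.1f} s")
```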

Example spreadsheet for capturing response time data

For more information about collecting response time data, see Benchmarking for a single user.
