
Checklists for benchmarking

This topic provides the following checklists for your benchmarking tests:

  • Designing tests
  • Setting up environments
  • Executing OLTP with batch tests
  • Executing CMDB batch processing tests
  • Analyzing results
  • Reporting results
  • Scoping project schedules

Designing tests

For each goal, complete the tasks listed under it.

Getting started

  • Set goals

Test scenarios

  • Use one application per scenario
  • Select typical business transactions
  • Script actual use cases exactly as executed by users
  • Write autonomous scripts for each scenario
  • Prepopulate transaction data
  • Parameterize user input (see the script sketch after this list)
  • Control random user input
  • Determine transaction contents
  • Include wait times
  • Test scripts thoroughly
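
The checklist does not mandate a specific load generator, but a short script makes several of these items concrete. Below is a minimal sketch assuming Locust as the tool; the endpoint paths, the users.csv file, and its field names are hypothetical stand-ins for your own mid-tier URLs and prepopulated data.

```python
import csv
import random

from locust import HttpUser, task, between


class IncidentUser(HttpUser):
    # Think time between transactions; tune to match real user pacing.
    wait_time = between(30, 60)

    def on_start(self):
        # Parameterized input: draw logins and ticket IDs from
        # prepopulated data instead of hard-coding them in the script.
        with open("users.csv", newline="") as f:
            self.pool = list(csv.DictReader(f))

    @task
    def search_and_update_incident(self):
        row = random.choice(self.pool)  # controlled random input
        # Hypothetical endpoints standing in for real mid-tier URLs.
        self.client.get(f"/incidents?assignee={row['login']}")
        self.client.post("/incidents/update",
                         json={"id": row["ticket_id"], "status": "In Progress"})
```

Drawing every virtual user's input from one prepopulated pool keeps the random input controlled, so runs remain comparable.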

Volume data

  • For production data:
    • Determine users who have assigned tickets
    • Determine users with proper permissions
    • Modify user passwords
    • Determine search criteria
    • Back up the database
  • For synthetic data:
    • Plan data creation to support test scenarios
    • Determine optimal database size
    • Determine optimal row entry size
    • Vary data generation order
    • Verify that data is realistically distributed (see the data-generation sketch after this list)
    • Determine data model and proper attributes for CIs
    • Back up the database
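
One way to meet the synthetic-data items is a small generator script. The sketch below is illustrative only: the categories, weights, and field names are assumptions, not values taken from the product.

```python
import random
import uuid

# Assumed category mix: skewed so a few categories dominate, as in
# most production systems. Adjust to match your own distribution.
CATEGORIES = ["network", "software", "hardware", "access"]
WEIGHTS = [0.45, 0.30, 0.15, 0.10]

def generate_tickets(count, users):
    """Build synthetic ticket rows with a realistic category skew."""
    rows = []
    for _ in range(count):
        rows.append({
            "ticket_id": str(uuid.uuid4()),
            "category": random.choices(CATEGORIES, weights=WEIGHTS)[0],
            "assignee": random.choice(users),   # spread tickets across users
            "summary": f"Synthetic incident {random.randint(1, 10**6)}",
        })
    random.shuffle(rows)  # vary insertion order so rows aren't clustered
    return rows
```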

Workload

  • Set the throughput for each scenario
  • Set a percentage of users per scenario
  • Determine caching needs
  • Use wait times to generate throughput (see the calculation sketch after this list)
  • Successively ramp up your users
  • Run at least one hour of steady state
  • License your applications and users
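
The interplay of users, wait (think) times, and throughput follows Little's law, so you can derive the think time needed to hit a target rate before you run anything. A worked sketch with illustrative numbers:

```python
def think_time(users, target_tph, avg_response_s):
    """Per-user think time needed to hit a target throughput.

    Little's law (L = lambda * W) gives the full cycle time W for a
    population of L users driving lambda transactions per second; the
    think time is whatever remains after the response time.
    """
    rate_per_s = target_tph / 3600.0   # lambda, transactions per second
    cycle_s = users / rate_per_s       # W, seconds per user cycle
    return cycle_s - avg_response_s

# Illustrative numbers: 100 users driving 3,600 transactions per hour
# with a 2-second average response leaves 98 seconds of think time.
print(think_time(users=100, target_tph=3600, avg_response_s=2.0))
```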

Metrics

  • Measure roundtrip response times for critical actions
  • Measure throughput
  • Gather system resource statistics (see the monitoring sketch after this list)
  • Gather database statistics
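
System resource statistics can come from native tools such as perfmon or sar; as one cross-platform alternative, the sketch below samples CPU and memory with the third-party psutil package. The file name, duration, and interval are arbitrary choices for illustration.

```python
import csv
import time

import psutil  # third-party package, assumed installed on each tier

def sample_resources(outfile, duration_s=3600, interval_s=10):
    """Append timestamped CPU and memory samples to a CSV so they can
    be correlated with the load generator's timeline afterward."""
    with open(outfile, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["epoch", "cpu_pct", "mem_pct"])
        end = time.time() + duration_s
        while time.time() < end:
            # cpu_percent(interval=...) blocks for the interval, which
            # also paces the sampling loop.
            writer.writerow([int(time.time()),
                             psutil.cpu_percent(interval=interval_s),
                             psutil.virtual_memory().percent])

sample_resources("app_tier_resources.csv")
```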

Setting up environments

  • Support your benchmark test goals
  • Isolate the test environment from other activities
  • Select the appropriate software, hardware, and network
  • Start with a basic configuration
  • Use one system per tier

Executing OLTP with batch tests

  • Write a checklist to ensure that your test can be repeated
  • Restore the database to its baseline data
  • Restore all tiers to their initial state, remove old logs, and then restart the application
  • Ensure that no other activities are on the system
  • Set basic system and software configurations, and capture them before you run a test
  • Run a sanity check on all scripts
  • Start normalization and reconciliation continuous jobs for processing incoming CIs
  • Simultaneously start performance testing and monitoring system resources on all tiers
  • Start onboarding or updating of CIs, or both
  • Measure single-user timings with back-end load
  • Record the start and end times of the steady-state period
  • Visually monitor the performance test for issues
  • When the test ends, stop system resource monitoring
  • Validate load generator report results with transaction counts from the AR System server (see the validation sketch after this list)
  • Save each tier's logs and results
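
How you obtain the server-side transaction count depends on your environment (for example, a report or a server-side query), so this sketch shows only the comparison step. The counts and the 2% tolerance are illustrative assumptions.

```python
def validate_counts(loadgen_count, server_count, tolerance_pct=2.0):
    """Flag a run as suspect when the load generator's transaction
    total and the server-side total diverge beyond a tolerance."""
    if server_count == 0:
        raise ValueError("server reported zero transactions")
    drift = abs(loadgen_count - server_count) / server_count * 100
    status = "OK" if drift <= tolerance_pct else "INVESTIGATE"
    print(f"loadgen={loadgen_count} server={server_count} "
          f"drift={drift:.2f}% -> {status}")
    return status == "OK"

# Example: totals taken from the tool's report and a server-side query.
validate_counts(loadgen_count=41890, server_count=42000)
```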

Executing CMDB batch processing tests

  • Load configuration items and relationships (CIRs) into the source dataset
  • Verify that all CIRs have been loaded
  • Optional: Back up the database
  • Start normalization batch job
  • Measure normalization throughput (see the throughput calculation after this list)
  • Optional: Back up the database
  • Start reconciliation batch job that includes identification and merge
  • Measure reconciliation throughput
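
Throughput for either batch job is simply the number of CIRs processed divided by the elapsed time. A minimal sketch with illustrative numbers:

```python
from datetime import datetime

def batch_throughput(cir_count, start, end):
    """CIRs processed per hour for a normalization or reconciliation job."""
    elapsed_h = (end - start).total_seconds() / 3600.0
    return cir_count / elapsed_h

# Illustrative: 500,000 CIRs reconciled in 2.5 hours = 200,000 CIRs/hour.
start = datetime(2018, 8, 1, 1, 0, 0)
end = datetime(2018, 8, 1, 3, 30, 0)
print(f"{batch_throughput(500_000, start, end):,.0f} CIRs/hour")
```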

Analyzing results

  • Analyze results after each test run
  • Check for errors in each tier's logs
  • Verify that CPU use for any system is less than or equal to 75%
  • Verify that Java memory use does not exceed allocation
  • Verify that other memory use is stable during the steady-state period
  • Use database and system I/O statistics to identify I/O bottlenecks
  • Check for network bottlenecks
  • Check load generator system
  • Check test scenario input
  • Use percentiles to determine steady-state response times
  • Use histograms to examine the response-time distribution (both are illustrated in the sketch after this list)
  • Make one change at a time
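
A short sketch of the percentile and histogram step, assuming you have response-time samples tagged with timestamps; the 0.5-second bin width and the percentile choices are illustrative.

```python
import statistics
from collections import Counter

def steady_state_stats(samples, start_epoch, end_epoch):
    """Percentiles and a coarse histogram for the steady-state window
    only, so ramp-up and ramp-down samples do not skew the result.

    samples: list of (epoch_seconds, response_time_seconds) tuples
    """
    window = [rt for ts, rt in samples if start_epoch <= ts <= end_epoch]
    # quantiles(n=100) returns the 1st-99th percentile cut points.
    pct = statistics.quantiles(window, n=100)
    # Bucket response times into 0.5-second bins for a quick histogram.
    hist = Counter(round(rt * 2) / 2 for rt in window)
    return ({"p50": statistics.median(window),
             "p90": pct[89], "p95": pct[94]},
            dict(sorted(hist.items())))
```

Restricting the window to the recorded steady-state start and end times keeps ramp-up and ramp-down noise out of the reported numbers.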

Reporting results

  • Keep it simple
  • Summarize initial database volume
  • Include environment setup diagram
  • Summarize product scenario flows and expected throughput
  • Summarize percentage of users per scenario
  • Summarize user ramp-up
  • Show results based on goals
  • Use graphs and charts to clarify results
  • Summarize actual throughput
  • Add appendix of environment specifications and software and hardware configurations

Scoping project schedules

  • Allow generous time for your benchmarking project