Testing Scenarios (MQ)

This section provides comprehensive testing scenarios to teach you how to use Performance Test for WebSphere MQ.

The scenarios are progressive examples based on testing the Customer Order and Credit Inquiry application detailed here.

To help you conceptualize these scenarios, the following samples have been generated from the example application:

  • Repositories containing activity captured with Global Recording. Find these repositories in your system-level repositories datasets.
  • Scripts generated from those repositories. Some of the scripts have been modified to support the given testing scenario. Locate the script files in the script library (SQQFSCRP).
  • Log files generated during script creation. Some of the log files have been modified to support the given testing scenario. Find the log files in your system-level playback control library.
  • External files generated from playing back modified scripts. These files are used as input for other testing scenarios. Find these files in the Performance Test samples library (SQQFSAMP).
  • A script analysis control member, discussed in Unattended Playback. Find the ANALYZE member in your system-level playback control library.
  • Report control and report members, discussed in Reporting-MQ. Find these assets in your system-level report control library.

Each scenario provides a list of supporting sample files. See Performance Test for WebSphere MQ Samples Index for a list of all sample files ordered alphabetically.

Use the samples, along with the appropriate instructional chapters, to replicate the tasks involved in each scenario. For example, use a sample capture repository, as described in Scripts and Subset Repositories, to create scripts with the filtering specifications from a given scenario. Then verify your work by comparing the scripts you create to the sample scripts, as described in Analyzing and Comparing Scripts.
