Test Run Dialog Details


Host Connection

Execution of the unit tests occurs on the mainframe, the Target Environment.

From the Target Environment drop-down list, select the host connection on which you want to execute the Test Suite/Scenario.
If the desired host connection isn't listed, click Configure to define a new Host Communication Interface (HCI) connection. Refer to the Host Explorer Connections help topic in the BMC AMI DevX Workbench Host Explorer User Guide for detailed instructions on adding a new connection.

JCL template

Select the JCL to use for the test execution from the Used JCL template view. The view lists all JCL files stored for the current project. See also Create JCL Template for more information.
Make sure your STEPLIB DD points to your site's Enterprise Common Components (ECC) Release 17.02 or newer load library (CPWR.MLCXnnn.SLCXLOAD), for example hlq.CPWR.MLCX170.SLCXLOAD. For details, refer to the Enterprise Common Components Installation and Configuration Guide for Release 17.02 or newer. The variable ${TOTALTEST_JOBCARD} in the JCL loads the Total Test default jobcard specified in Window > Preferences > BMC > Total Test.
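
The sketch below shows, for orientation only, how these pieces might fit together at the top of a member based on Runner.jcl for a test that does not use live Db2 or IMS. The step name TTTRUN is a placeholder and your generated template will contain additional statements; this is a sketch, not the actual template.

${TOTALTEST_JOBCARD}
//* Illustrative runner step (sketch only; not a complete job)
//TTTRUN   EXEC PGM=TTTRUNNR
//* STEPLIB must include the ECC Release 17.02 or newer load library
//STEPLIB  DD DISP=SHR,DSN=hlq.CPWR.MLCX170.SLCXLOAD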

Use JCL based on the Runner.jcl template for tests that do not use live Db2 or live IMS BMP.

  • If your test case includes live Db2, use JCL based on the RunnerDB2.jcl template.
  • If your test case includes live IMS BMP, use JCL based on the RunnerBmp.jcl template.
  • If your test case includes both live Db2 and live IMS BMP, use JCL based on the RunnerBmpDB2.jcl template.

Important

If the Logging level in Total Test Preferences is set to DEBUG, TRACE, or ALL, DEBUG(ON) is set automatically in the Runner JCL that is submitted for execution.

Execution Options

The defaults for the Execution Options have been set in Project-Properties.

Use Stubs

Test stubs allow a call to be simulated at runtime: instead of actually calling a program, the stub acts as if the called program processed the call and returns a predefined set of return data. This option controls whether the stubs defined in a test scenario are used.

Check this option to use (activate) all stubs defined in a test scenario during the test, which means that the stubbed interfaces are simulated as defined.

Uncheck this option to specify that stubbing will not be used during the test run, even if the scenario contains stubs. This enables you to switch off stubbing without having to remove all stubs from the test scenarios.

Stubbing is an optional feature. Refer to Stubs for more information on stubs.

Delete Temporary Files

Some temporary files are created during the execution of a test, both on the client and on the target platform. These files are needed to collect the scenario specification into an archive and to transfer the input and output data between the client and the target platform. Usually, these temporary files are no longer needed after the execution has completed. However, if you want to re-submit the test job for use with a debugger, the temporary files are required to resume the test execution. In this case, uncheck this option to keep the temporary files after execution.

Check this option to specify that all temporary files will be deleted once the execution has been completed.

Uncheck this option to specify that all temporary files will be kept.

Test Aids

The defaults for the Test Aids have been set in Project-Properties.

Initialization with byte value

The initialization of empty attributes with specific values is a test aid that helps identify problems caused by the test target using uninitialized data. This may apply to data not passed in a call as well as to missing return values. Such problems are often difficult to identify because, during testing, there may be no difference between uninitialized and initialized data. Problems therefore tend to surface later, possibly even in production, and are not easy to reproduce. With the help of this test aid, uninitialized data can be distinguished from initialized data.

To activate this option, enter a string in Initialization of empty fields to be used for initializing all fields with no defined value. The string must be in hexadecimal representation. Choose a unique value that differs significantly from common initialization values. A good and proven example is x'EE', as it is highly unlikely to appear in real data.

Check this option to initialize all fields without a defined value with the specified byte value.

Uncheck this option to specify that all fields without defined values will be initialized with a value of zero for numeric data types, and with a value of spaces for character data types. Note that this "clean" initialization renders it impossible to identify initialization problems.

Enable Code Coverage

Code Coverage must be installed at your site to use this feature. Refer to the BMC AMI DevX Workbench Code Coverage Eclipse User Guide.

Check this option to enable Code Coverage for this unit test execution.

If Code Coverage fails and indicates "No data matched the specified criteria", the following DD may have to be added to your Runner.jcl:

//XPSL0004 DD DISP=SHR,DSN=DSN.OF.DDIO.WITH.PROG.LISTING
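
One possible placement is shown below, assuming the plain runner step sketched earlier; the step name is a placeholder, and the data set name must be replaced with the DDIO data set that contains the program listing.

//TTTRUN   EXEC PGM=TTTRUNNR
//STEPLIB  DD DISP=SHR,DSN=hlq.CPWR.MLCX170.SLCXLOAD
//* DDIO data set containing the program listing used by Code Coverage
//XPSL0004 DD DISP=SHR,DSN=DSN.OF.DDIO.WITH.PROG.LISTING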

Uncheck this option to disable Code Coverage for this unit test execution. If unchecked, the following Code Coverage items are greyed out.

Repository DSN

Specify the data set where Code Debug will write coverage information. This data set is later fed into the Code Coverage reporting system to report on the program invocations that occurred during the test session. Members with the same system name, program name, and compile date and time are automatically merged. If the desired repository data set isn't listed in the drop-down list, click Browse to select a Code Coverage repository data set.

System name

Specify or select a system name for this test. The System name drop-down list contains recently used system names. Code Coverage keeps program statistics separated by system name; the name can be any user-specified value. If the desired system name isn't listed in the drop-down list, click Browse to select one from the available system names in the repository.

Test ID

Specify or select a test identification to be added to the Code Coverage test. This information can be useful when reviewing test reports from Code Coverage. If the desired Test ID isn't listed in the drop-down list, click Browse to select one from the available Test IDs in the repository.

Main Program Type

Select the main executable program for Code Coverage. This is the program specified on the 'EXEC PGM=' JCL statement in runner*.JCL (see the sketch after the following options):

Live DB2 - IKJEFT01: select this when the main program is IKJEFT01 or IKJEFT1B for live Db2 processing.

Live IMS - DFSRRC00: select this when the main program is DFSRRC00 for live IMS processing.

TOTALTEST - TTTRUNNR: select this when the main program is TTTRUNNR.
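
The fragments below illustrate what to look for in the runner JCL when choosing the Main Program Type. They are alternative sketches, not a single job, and the step name RUNTEST is a placeholder.

//* Runner JCL executes the Total Test runner: select TOTALTEST - TTTRUNNR
//RUNTEST  EXEC PGM=TTTRUNNR
//* Runner JCL executes TSO batch for live Db2: select Live DB2 - IKJEFT01
//RUNTEST  EXEC PGM=IKJEFT01
//* Runner JCL executes the IMS region controller: select Live IMS - DFSRRC00
//RUNTEST  EXEC PGM=DFSRRC00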

Clear existing statistics before running the test

The "Clear existing statistics before running the test" checkbox indicates if you want the repository statistics cleared before the test is run.

Check this option to clear the repository statistics cleared before the test is run.

Uncheck this option to keep any existing repository statistics.

Display a report after running the test

The "Display a report after running the test" will cause the Code Coverage report to be displayed after the test is run.

Check this option to display the Code Coverage report after the test is run.

    • After the test run is complete, a prompt is displayed asking whether you want to switch to the Code Coverage perspective.
    • Select Yes to display the Code Coverage report.

Uncheck this option if you do not want the Code Coverage report displayed after the test is run.

OK

Click OK to execute the test case.

The progress bar shows the test assets being transferred to the host that will execute the Test Suite/Scenario.

A notification in the Console indicates that the Test Suite/Scenario has completed, and the Result-Report opens in the Editor area.

Unsuccessful Execution

If the execution doesn't complete successfully, check the JES log for abend, error, and failure messages. Make the appropriate corrections and execute the test again. For example, allocation failures can be corrected with the Override Default Allocation Sizes settings in Project-Properties.

CANCEL

Click Cancel to cancel the test run request and close the Test Run dialog.

 
