
Process Flow - Topaz for Total Test - COBOL Batch Programs


Important

  1. If your test program executes in a CICS environment, see 09.
  2. If your COBOL Batch Program invokes a Db2 Stored Procedure (SP), Xpediter requires a special launch configuration. See Xpediter DB2 Stored Procedure (SP) Launch Configurations in the Topaz-Workbench-Xpediter-Eclipse-User-Guide for instructions.
  3. To support long names for target program names and entry points, the Xpediter launch configuration requires a scripting member containing a line with SET ABBREV ON. See Xpediter Batch Launch Configurations in the Topaz-Workbench-Xpediter-Eclipse-User-Guide for instructions.
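A scripting member for this purpose can be minimal. As a sketch (the member name and the library it lives in are site-specific and must match what the launch configuration references), it only needs the one required line:

```
SET ABBREV ON
```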

We recommend that you compile your program and provide a current source listing (see also Compile Programs) before you generate a test.

Starting the Record Test Case dialog

You can start the recording of a test case in two ways:

Test Cases from an Xpediter/Eclipse Debugging Session

When developers need to test COBOL program behavior changes, they initiate an Xpediter debug session to generate a test and validate their changes.

  1. Restart debugging the program in Xpediter, which stops at the main program entry.
  2. Find the sub-program for which to collect test data.
  3. Right-click the Procedure section of the program on which to collect data and select Record Test Case…. Refer to Context-Menus-from-Xpediter-Debug-Session for more details.
  4. In the Record Test Case dialog, select the program to be recorded and select an existing project or create a new one.
    [Image: Record Dialog - From Xpediter context menu.png]
  5. To record, select either the Record the program specified in launch configuration or the Record a called program option. If Record a called program is selected, enter the Program name, Load module, and Maximum program executions to record fields.
  6. Select the project from the Project drop-down or create a new one using the NEW button.
  7. Select a test folder from the Folder drop-down or enter a new folder name.
  8. In the Scenario drop-down, the last scenario specified for this program is pre-selected. Otherwise, a new scenario is displayed, using the name of the program with the suffix _Scenario (<program>_Scenario), for example, QSAMDYN_Scenario.
  9. Optionally, you may specify the project's test suite into which the generated test scenario will be copied.
  10. Select which test to generate, either Unit Test (Virtualized Test, default), Functional Test (Non-virtualized test), or both.
  11. If you selected Functional Test, select the Host connection from the Functional Test Environment drop-down list. Use the Configure button to add or modify a connection.
    [Image: New_dialog.PNG]
  12. Open the Unit Test Options tab to review and manage the Unit Test specific options.
    All options changed in the Unit Test Options tab are persisted: the next time the dialog opens, it remembers the settings from the last recording, even after Topaz has been restarted.
  13. By default, all Record stubs options are selected. Uncheck the checkboxes only if stubs are not wanted. For more information on stubs, refer to Stubs.
    1. Record sub-program stubs. Uncheck if sub-program stubs should NOT be created.
      1. Check (default) Create assert condition for data passed into sub-programs for Topaz for Total Test to generate sub-program assertions for stubbed-out sub-programs. Sub-program assertions validate all of the data passed into a sub-program call, just as a Write Assertion validates all the data passed into a Write I/O call.
        Uncheck if assert conditions should NOT be created.
        NOTE: You cannot define a Program Entry Assertion for a live sub-program.
      2. Update the Maximum number of calls to capture per program. Initial default is 1000.
        NOTE: Record sub-program stubs are not available for IMS live program type.
    2. Record I/O stubs. Uncheck only if no I/O stubs should be created.

      Check or uncheck the desired data types checkboxes. Update the Maximum number of I/O records to capture per file field. Initial default is 1000. 

      For a description of the context menu item, refer to Record IO stubs.

    3. Record DB2 stubs. Uncheck if Db2 Statement stubs should NOT be created.

      Update the Maximum number of SQL calls to capture per statement or cursor. Initial default is 1000.

    4. Record IMS stubs. Uncheck if IMS DL/I call stubs should NOT be created.

      Update the Maximum number of IMS calls to capture per PCB. Initial default is 1000.

  14. Check the Overwrite existing stubs checkbox if you want to replace the contents of previously created, same-named stubs with the data collected from this unit test data collection.

    If left unchecked, Topaz for Total Test will create a copy of the same-named stubs and identify each new set of Data Stubs with a 2-digit set identifier, for example:

    Existing stub: CWXTCOB_EMPFILE_READ.stub

    New stub: CWXTCOB_EMPFILE_01_READ.stub
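The naming pattern in the example can be sketched in Python. The helper below is purely illustrative (it is not part of Topaz for Total Test) and assumes the stub name always ends in an access-type segment such as _READ:

```python
def next_stub_name(existing: str, set_id: int) -> str:
    """Insert a two-digit set identifier before the final
    access-type segment (e.g. READ) of a stub file name."""
    base, ext = existing.rsplit(".", 1)   # "CWXTCOB_EMPFILE_READ", "stub"
    head, access = base.rsplit("_", 1)    # "CWXTCOB_EMPFILE", "READ"
    return f"{head}_{set_id:02d}_{access}.{ext}"

print(next_stub_name("CWXTCOB_EMPFILE_READ.stub", 1))
# CWXTCOB_EMPFILE_01_READ.stub
```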

    Important

    If you only want to create program stubs, right-click the Procedure section of the program on which to collect data and select Record Program Stub.... For more information, refer to Record Program stubs.
    Record Program Stub is not available for the IMS program type.

  15. Click OK to create the test case.
  16. If so desired, use the Xpediter session to change variable values at this point.
  17. Issue the appropriate Resume Run commands in your Xpediter debug session. Xpediter collects and saves the program entry data from the structure specified by the Using Statements.
  18. Xpediter continues executing the rest of the program. For a Data Stub to be created, at least one Read or one Write must be executed in Xpediter during data collection. When the program ends or is stopped, Xpediter transfers the collected data to Topaz for Total Test.
  19. Once the Test Case data has been imported, the new Test Case(s) appear in the Test Project previously specified. The Non-virtualized Test scenarios are only created if the Topaz for Total Test Repository exists for your installation and its URL has been set in Window > Preferences > Compuware > Topaz for Total Test > Repository Server. Select the new Test Case(s) in the Test Project Explorer View.

    Topaz for Total Test populates the following project folders with the transferred data:
    • Interfaces: program interface names using the program name.
    • Scenarios:
      • Non-virtualized Test test case (<program>.scenario)

        This is the recorded Non-virtualized Test that you can execute using the Execution Context Dialog.

      • Virtualized Test test case (<program>_Scenario.testscenario)

        This is the recorded Virtualized Test that you can execute using the Execution Context Dialog or the classic Virtualized Test Run Test Dialog.

      • Context file (<scenario_name>.context)

        This is the context file for either test scenario file that you can execute using the Execution Context Dialog.

    • Structures: COBOL layouts identified by their level 01 name.
    • Stubs:
      • Program
      • QSAM Read, QSAM Write
      • VSAM KSDS Read, VSAM KSDS Write
      • VSAM ESDS Read, VSAM ESDS Write
      • VSAM RRDS Read, VSAM RRDS Write
      • SQL, SQL CURSOR
    • Suites: test case

Capture specific sets of data from a program

There may be instances when you want to capture specific sets of data from a program when it is called for the nth time or with a specific value.

This use case can be supported by using the Xpediter INTERCEPT feature to stop at the nth iteration of a loop, or the WHEN command to stop when a specific value is assigned to a variable. To use this:

  • Set up the Intercept / WHEN command using the Xpediter/Eclipse interface after the Debug session has initially started.
  • Set these commands on a line prior to the call of the program. Once Xpediter has stopped, you can then set the program for test data collection.
  • This approach allows collecting a specific set of data even though it may take many call iterations to get to the set of data you want to collect for the test.
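As a sketch, a conditional stop set before the call might look like the following Xpediter command; the variable name and value here are hypothetical, so check the Xpediter documentation for the exact syntax:

```
WHEN WS-EMP-ID = '000123'
```

Once Xpediter stops on the condition, start Record Test Case… as described above.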

If needed, add some more sophisticated test assertions, for example:

  1. Open the Test Case in the Scenario folder with a double click and select the Check Conditions tab in the Editor window.
  2. Find the variable you want to base a range test on.
  3. Change the operator to less than and enter the high boundary value in the assertion.
  4. Add another assertion for the variable, change the operator to greater than, and enter the low boundary value in the assertion.
  5. Save your changes.
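Assuming numeric comparison, the pair of assertions in steps 3 and 4 is logically equivalent to a strict range check, as this illustrative Python sketch shows:

```python
def passes_range_check(value, low, high):
    """True only when both assertions hold: 'less than' the
    high boundary and 'greater than' the low boundary."""
    return value < high and value > low

print(passes_range_check(50, 10, 100))   # True (inside the range)
print(passes_range_check(100, 10, 100))  # False (fails 'less than' the high boundary)
```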

Run a Test Scenario/Test Suite

Starting with Release 20.04.01, Topaz for Total Test uses the Execution Context Dialog to execute both Virtualized Test scenarios and Non-virtualized Test scenarios. The following procedure executes a Unit Test *.testscenario.

Important

The classic Run Test Dialog for Unit Test scenarios has been deprecated. If you still want to use it, you must select the Use the classic Test Run dialog (deprecated) when running unit tests option in the Preferences.


  1. Select the specific Test Suite/Scenario you want to run.
  2. Right-click on the selected Test Suite/Scenario and select Topaz for Total Test > Run Unit Test Scenario...,
    or simply click on the Run test scenario button in the upper right area of the Scenario editor, or double-click the corresponding .context file.

    The Execution Context Dialog box opens.

    [Image: Unit_Test_Execute.PNG]

  3. Select the host where you want to execute the Test Suite/Scenario from the list of hosts in the Target Environment drop-down list. Use the Configure button to create or edit HCI connections.
  4. Select the desired Logging level. Only select ALL in special circumstances as it generates a lot of data.
  5. Open the Unit Test tab.
  6. Select the Execution JCL location.

    Topaz for Total Test gives you a choice of which Execution JCL to use for this test execution:

    Use the JCL skeleton configured for the selected host

    Select this option to use the dynamic JCL skeleton defined for the selected Environment Connection as defined in the Repository or the TotalTestConfiguration project when Use Repository server is not selected.

    Use JCL defined in the project

    (Default) Select the JCL to use for the test execution from the JCL file list. The list includes all JCL files stored for the current Unit Test project. See also Create JCL Template for more information.

    Make sure your STEPLIB DD points at your site's Enterprise Common Components (ECC) Release 17.02 or newer load library (CPWR.MLCXnnn.SLCXLOAD), for example hlq.CPWR.MLCX170.SLCXLOAD. For details, refer to the Enterprise Common Components Installation and Configuration Guide for Release 17.02 or newer. The variable ${TOTALTEST_JOBCARD} in the JCL will load the Topaz for Total Test default jobcard specified in Window > Preferences > Compuware > Topaz for Total Test.

    Use JCL based on the Runner.jcl for tests that do not use live Db2 or live IMS BMP.

    If your test case includes live DB2, use JCL based on the RunnerDB2.jcl template.

    If your test case includes live IMS BMP, use JCL based on the RunnerBmp.jcl template.

    If your test case includes both live DB2 and live IMS BMP, use JCL based on the RunnerBmpDB2.jcl template.
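Drawing these pieces together, a project-defined Execution JCL fragment might look like the following sketch; the high-level qualifier hlq and the ECC release level in the dataset name are site-specific assumptions:

```
${TOTALTEST_JOBCARD}
//* STEPLIB must point at the ECC 17.02-or-newer load library
//STEPLIB  DD DISP=SHR,DSN=hlq.CPWR.MLCX170.SLCXLOAD
```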

    Important

    When using the dynamic JCL skeleton, DEBUG(ON) is set automatically when the Logging level is set to DEBUG, TRACE, or ALL. When using hardcoded JCL, the setting you coded is used; to make it follow the dialog's Logging level, replace DEBUG(ON) or DEBUG(OFF) in your JCL with DEBUG(${runtime.usedebug}).

  7. Specify the TTTRunner load libraries for Virtualized Test elements. If the test cases were generated from an Xpediter debug session (starting with Release 20.04.01), this field is prefilled with the load libraries collected during test case generation.

    Important

    The Execution Options in the Project Properties (Use Stubs, Delete Temporary Files, and Test Aids) only apply when using the classic Test Run dialog. If you don't want to use stubs, remove selected or all stubs from the Selected column in the Stubs tab of the test scenario editor (see Stubs). Logging level ALL will not delete temporary files.

  8. Review the remaining tabs (Options, Code Coverage, Context Variable) in case you want to use any of their options. For details on these options, please refer to Execution Context Dialog.
  9. Click OK to execute the test case.
  10. The progress bar shows the test assets moving to the Host that will execute the Test Suite/Scenario.
  11. A notification in the Console informs you that the Test Suite/Scenario has completed, and the Result Report opens in the Editor area.

Review Test Results

  1. After a test run the Result Report opens in the Editor area. Otherwise, double-click the desired Result Report (.archive) in the Output folder.
  2. When the selected Result Report opens, look for failed tests.
  3. Should some of the test cases have failed, follow the link in the Result Report to open the Test Details Report.
  4. In the Test Details Report, identify the specific test cases that failed and the specific test assertions that failed. Since this test was just created and was successful during test data collection, an assertion was probably changed incorrectly.
  5. Change the assertion test.
  6. Rerun the test until all of the test cases complete successfully.

Summary Unit Test Creation Scenarios

Unit tests should be created to test any program behavior changes. There are three scenarios: the test case already exists, the test case is created PRIOR to changes, or the test case is created AFTER modifications.

  1. Unit test case already exists PRIOR to any modifications to the COBOL program.
    1. Check the unit test case assertions with the existing values.
    2. Run the unit test case to get a baseline for assertions.
    3. Make changes to the COBOL program, then rerun the unit test case created above.
    4. Validate that the changes and assertions for the new values are correct and transfer the new values to the test case.
    5. Rerun the unit test case and validate the assertions.
  2. Create a unit test case PRIOR to any modifications to the COBOL program.
    1. Check the unit test case assertions with the existing values.
    2. Run the unit test case to get a baseline for assertions.
    3. Make changes to the COBOL program, then rerun the unit test case created above.
    4. Validate that the changes and assertions for the new values are correct and transfer the new values to the test case.
    5. Rerun the unit test case and validate the assertions.
  3. Create a unit test case AFTER modifications to the COBOL program
    1. Check the unit test case assertion values.
    2. Run the unit test case and validate that the assertions for the unit test case are correct.

There should be no reason to create the unit test case after the code modification unless it does not exist prior to the code change (#3 above).

Related Topics

COBOL Memory Initialization

Limitations

 
