Process Flow - Topaz for Total Test - COBOL Batch Programs
We recommend that you compile your program and provide a current source listing (see also Compile Programs) before you generate a test.
Starting the Record Test Case dialog
You can start the recording of a test case in two ways:
- from the Topaz for Total Test perspective
This option allows you to start the dialog without having to switch perspectives and is described here.
- from an Xpediter/Eclipse debugging session
This option allows developers to record a test case while in an Xpediter debug session, as described in this topic.
Test Cases from an Xpediter/Eclipse Debugging Session
When developers need to test COBOL program behavior changes, they can initiate an Xpediter debug session to generate a test and validate their changes.
- Restart debugging the program in Xpediter, which stops at the main program entry.
- Find the sub-program for which to collect test data.
- In the Procedure section of the program on which to collect data, right-click and select Record Test Case…. Refer to Context-Menus-from-Xpediter-Debug-Session for more details.
- In the Record Test Case dialog window, select the program to be recorded, and select an existing project or create a new project.
- To record, select either the Record the program specified in launch configuration or the Record a called program option. If Record a called program is selected, enter the Program name, Load module, and Maximum program executions to record fields.
- Select the project from the Project drop-down or create a new one using the NEW button.
- Select a test folder from the Folder drop-down or enter a new folder name.
- In the Scenario drop-down, the last scenario specified for this program is pre-selected. Otherwise, a new scenario is displayed, using the name of the program with the suffix _Scenario (<program>_Scenario), for example, QSAMDYN_Scenario.
- Optionally, you may specify the project's test suite into which the generated test scenario will be copied.
- Select which test to generate, either Unit Test (Virtualized Test, default), Functional Test (Non-virtualized test), or both.
- If you selected Functional Test, select the Host connection from the Functional Test Environment drop-down list. Use the Configure button to add or modify a connection.
- Open the Unit Test Options tab to review and manage the Unit Test specific options.
All options changed in the Unit Test Options tab are persisted; the next time the dialog opens, it remembers the settings from the last recording, even after Topaz has been restarted.
- By default, all Record stubs options are selected. Only clear the checkboxes if stubs are not wanted. For more information on stubs, refer to Stubs.
- Record sub-program stubs. Uncheck if sub-program stubs should NOT be created.
Check (default) Create assert condition for data passed into sub-programs for Topaz for Total Test to generate sub-program assertions for stubbed-out sub-programs. Sub-program assertions validate all of the data passed into a sub-program call, just as a Write Assertion validates all of the data passed into a Write I/O call. Uncheck if assert conditions should NOT be created.
NOTE: You cannot define a Program Entry Assertion for a live sub-program.
Update the Maximum number of calls to capture per program. The initial default is 1000.
NOTE: Record sub-program stubs is not available for the IMS live program type.
- Record I/O stubs. Uncheck only if no I/O stubs should be created.
Check or uncheck the desired data type checkboxes. Update the Maximum number of I/O records to capture per file field. The initial default is 1000.
For a description of the context menu item, refer to Record IO stubs.
- Record DB2 stubs. Uncheck if Db2 statement stubs should NOT be created.
Update the Maximum number of SQL calls to capture per statement or cursor. The initial default is 1000.
- Record IMS stubs. Uncheck if IMS DL/I call stubs should NOT be created.
Update the Maximum number of IMS calls to capture per PCB. The initial default is 1000.
- Check the Overwrite existing stubs checkbox if you want to replace the contents of previously created, same-named stubs with the data collected during this recording. If left unchecked, Topaz for Total Test creates a copy of the same-named stubs and identifies each new set of data stubs with a two-digit set identifier, for example:
Existing stub: CWXTCOB_EMPFILE_READ.stub
New stub: CWXTCOB_EMPFILE_01_READ.stub
- Click OK to create the test case.
- If desired, use the Xpediter session to change variable values at this point.
- Issue the appropriate Resume Run commands in your Xpediter debug session. Xpediter collects and saves the program entry data from the structure specified by the Using Statements.
- Xpediter continues executing the rest of the program. For a Data Stub to be created, at least one Read or one Write must be executed in Xpediter during data collection. When the program ends or is stopped, Xpediter transfers the collected data to Topaz for Total Test.
- Once the Test Case data has been imported, the new Test Case(s) appear in the Test Project previously specified. The Non-virtualized Test scenarios are only created if the Topaz for Total Test Repository exists for your installation and its URL has been set in Window > Preferences > Compuware > Topaz for Total Test > Repository Server. Select the new Test Case(s) in the Test Project Explorer view. Topaz for Total Test populates the following project folders with the transferred data:
- Interfaces: program interface names using the program name.
- Scenarios:
Non-virtualized Test test case (<program>.scenario)
This is the recorded Non-virtualized Test that you can execute using the Execution Context Dialog.
Virtualized Test test case (<program>_Scenario.testscenario)
This is the recorded Virtualized Test that you can execute using the Execution Context Dialog or the classic Virtualized Test Run Test Dialog.
Context file (<scenario_name>.context)
This is the context file for either test scenario file that you can execute using the Execution Context Dialog.
- Structures: COBOL layouts identified by their level 01 name.
- Stubs:
- Program
- QSAM Read, QSAM Write
- VSAM KSDS Read, VSAM KSDS Write
- VSAM ESDS Read, VSAM ESDS Write
- VSAM RRDS Read, VSAM RRDS Write
- SQL, SQL CURSOR
- Suites: test case
Capture specific sets of data from a program
There may be instances when you want to capture specific sets of data from a program when it is called for the nth time or with a specific value.
This use case can be supported by using the Xpediter INTERCEPT feature to stop at the nth iteration of a loop, or the WHEN command to stop when a specific value is assigned to a variable. To use this:
- Set up the Intercept / WHEN command using the Xpediter/Eclipse interface after the Debug session has initially started.
- Set these commands on a line prior to the call of the program. Once Xpediter has stopped, you can then set up the program for test data collection.
- This approach allows collecting a specific set of data even though it may take many call iterations to get to the set of data you want to collect for the test.
If needed, add some more sophisticated test assertions, for example:
- Open the Test Case in the Scenario folder with a double click and select the Check Conditions tab in the Editor window.
- Find the variable you want to base a range test on.
- Change the operator to less than and enter the high boundary value in the assertion.
- Add another assertion for the variable, change the operator to greater than, and enter the low boundary value in the assertion.
- Save your changes.
Run a Test Scenario/Test Suite
Starting with Release 20.04.01,
Topaz for Total Test
uses the Execution Context Dialog to execute both Virtualized Test scenarios and Non-virtualized Test scenarios. The following procedure executes a Unit Test *.testscenario.
- Select the specific Test Suite/Scenario you want to run.
Right-click on the selected Test Suite/Scenario and select Topaz for Total Test > Run Unit Test Scenario...,
or simply click on the Run test scenario button in the upper right area of the Scenario editor, or double-click the corresponding .context file.
The Execution Context Dialog box opens.
- Select the host where you want to execute the Test Suite/Scenario from the list of hosts in the Target Environment drop-down list. Use the Configure button to create or edit HCI connections.
- Select the desired Logging level. Only select ALL in special circumstances as it generates a lot of data.
- Open the Unit Test tab.
Select the Execution JCL location.
Topaz for Total Test gives you a choice of which Execution JCL to use for this test execution:
Use the JCL skeleton configured for the selected host
Select this option to use the dynamic JCL skeleton defined for the selected Environment Connection, as defined in the Repository, or in the TotalTestConfiguration project when Use Repository server is not selected.
Use JCL defined in the project
(Default) Select the JCL to use for the test execution from the JCL file list. The list includes all JCL files stored for the current Unit Test project. See also Create JCL Template for more information.
Make sure your STEPLIB DD points at your site's Enterprise Common Components (ECC) Release 17.02 or newer load library (CPWR.MLCXnnn.SLCXLOAD), for example hlq.CPWR.MLCX170.SLCXLOAD. For details, refer to the Enterprise Common Components Installation and Configuration Guide for Release 17.02 or newer. The variable ${TOTALTEST_JOBCARD} in the JCL will load the Topaz for Total Test default jobcard specified in Window > Preferences > Compuware > Topaz for Total Test.
Use JCL based on the Runner.jcl template for tests that do not use live Db2 or live IMS BMP.
If your test case includes live DB2, use JCL based on the RunnerDB2.jcl template.
If your test case includes live IMS BMP, use JCL based on the RunnerBmp.jcl template.
If your test case includes both live DB2 and live IMS BMP, use JCL based on the RunnerBmpDB2.jcl template.
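To illustrate the jobcard and STEPLIB requirements above, a Runner-style execution JCL might begin as in the following sketch. This is only an assumption-laden fragment, not the actual template: the EXEC step must come from your site's Runner.jcl variant, and only the ${TOTALTEST_JOBCARD} variable and the STEPLIB data set pattern are taken from the description above.

```jcl
${TOTALTEST_JOBCARD}
//*
//* The EXEC step comes from your site's Runner.jcl template
//* (or RunnerDB2.jcl / RunnerBmp.jcl / RunnerBmpDB2.jcl when
//* live Db2 and/or live IMS BMP are involved).
//*
//* STEPLIB must point at the ECC 17.02 or newer load library
//* (CPWR.MLCXnnn.SLCXLOAD); adjust the high-level qualifier
//* to match your installation.
//STEPLIB  DD DISP=SHR,DSN=hlq.CPWR.MLCX170.SLCXLOAD
```

At submission time, Topaz for Total Test replaces ${TOTALTEST_JOBCARD} with the default jobcard specified in Window > Preferences > Compuware > Topaz for Total Test.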
Specify the TTTRunner load libraries for Virtualized Test elements. If the test cases were generated from an Xpediter debug session (starting with Release 20.04.01), this field is prefilled with the load libraries collected during test case generation.
- Review the remaining tabs (Options, Code Coverage, Context Variable) in case you want to use any of their options. For details on these options, please refer to Execution Context Dialog.
- Click OK to execute the test case.
- The progress bar shows the test assets moving to the Host that will execute the Test Suite/Scenario.
- A notification in the Console informs that the Test Suite/Scenario has completed and the Result Report opens in the Editor area.
Review Test Results
- After a test run the Result Report opens in the Editor area. Otherwise, double-click the desired Result Report (.archive) in the Output folder.
- When the selected Result Report opens, look for failed tests.
- Should some of the test cases have failed, follow the link in the Result Report to open the Test Details Report.
- In the Test Details Report, identify the specific test cases that failed and the specific test assertions that failed. Since this test was just created and was successful during test data collection, an assertion has probably been changed incorrectly.
- Change the assertion test.
- Rerun the test until all of the test cases complete successfully.
Summary Unit Test Creation Scenarios
Unit tests should be created to test any program behavior changes. There are three scenarios: the test case already exists, a test case is created PRIOR to changes, or a test case is created AFTER modifications.
- Unit test case already exists PRIOR to any modifications to the COBOL program.
- Check the unit test case assertions with the existing values.
- Run the unit test case to get a baseline for assertions.
- Make changes to the COBOL program, then rerun the unit test case created above.
- Validate that the changes and assertions for the new values are correct and transfer the new values to the test case.
- Rerun the unit test case and validate the assertions.
- Create a unit test case PRIOR to any modifications to the COBOL program.
- Check the unit test case assertions with the existing values.
- Run the unit test case to get a baseline for assertions.
- Make changes to the COBOL program, then rerun the unit test case created above.
- Validate that the changes and assertions for the new values are correct and transfer the new values to the test case.
- Rerun the unit test case and validate the assertions.
- Create a unit test case AFTER modifications to the COBOL program
- Check the unit test case assertion values.
- Run the unit test case and validate that its assertions are correct.
There should be no reason to have to create the unit test case after the code modification unless it does not exist prior to the code change (#3 above).
Related Topics