Parallel processing
Parallel Processing is the next logical step beyond restartability. Now that application systems have been made fully restartable, production problems no longer unnecessarily delay the production schedule. Unfortunately, the batch window is still shrinking under the sheer volume of jobs and information to be processed.
QUICKSTART Parallel Processing enables multiple applications, or multiple copies of a single application, to execute simultaneously in a single job step. This reduces a job's total run time by breaking the work into smaller pieces and exploiting multi-processor CPUs through a multiple-TCB structure. If DB2 is active, each program can use the same or a different DB2 plan name. If one copy of an application ABENDs, the other copies continue to run to completion, and the failing copy can be restarted later. During restart processing, only the failed copies of the application are restarted, making restart JCL as simple as in normal QUICKSTART procedures.
Parallel Processing supports DB2 COBOL II applications running in QUICKSTART API Mode; Working Storage in subprograms is not checkpointed.
Facilities are provided to specify the application(s) to be run, the desired ReAttach features (described later), and the sequential files each subtask in the job step is to use. When applications run in parallel, QUICKSTART keeps track of which pieces finish successfully and which fail, so that only the ABENDed pieces are restarted.
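The run-and-restart bookkeeping described above can be sketched in miniature. This is an illustration only, not QUICKSTART's implementation: Python threads stand in for the subtask TCBs, an exception stands in for an ABEND, and the function and parameter names (`run_copies`, `completed`) are hypothetical.

```python
import concurrent.futures

def run_copies(work_pieces, process_piece, completed=None):
    """Run one copy of the application per work piece in parallel.

    `completed` holds piece ids that finished in a prior run, so a
    restart re-executes only the pieces that failed -- analogous to
    restarting only the ABENDed copies of the application.
    """
    completed = set(completed or ())
    failed = {}
    pending = [p for p in work_pieces if p not in completed]
    with concurrent.futures.ThreadPoolExecutor(
            max_workers=len(pending) or 1) as pool:
        futures = {pool.submit(process_piece, p): p for p in pending}
        for fut in concurrent.futures.as_completed(futures):
            piece = futures[fut]
            try:
                fut.result()
                completed.add(piece)        # this copy ran to completion
            except Exception as exc:        # this copy "ABENDed";
                failed[piece] = exc         # the others keep running
    return completed, failed
```

On a rerun, passing the previously recorded `completed` set back in causes only the failed pieces to be resubmitted, which is the behavior that keeps the restart JCL unchanged from a normal run.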
If more than one copy of a single application runs in a Parallel Processing environment and the application uses sequential files, each copy issues I/O requests to the same DDName. QUICKSTART manages this by intercepting the I/O and rerouting it to task-number-specific DDNames, which ensures correct repositioning of the files should a restart be necessary. Parallel Processing also allows one disk input driver file to be shared among the tasks: the file is internally split into as many pieces as there are tasks, and each subtask is given one piece. Input driver files are split at track boundaries, so the data need not be read to perform the split. If the input file has fewer tracks than the requested number of splits, only one task is created.
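The track-boundary arithmetic behind the split can be sketched as follows. This is a sketch under stated assumptions, not QUICKSTART's actual algorithm: it assumes zero-based track numbers, contiguous per-task ranges, and an even distribution of any remainder; the function name `split_by_tracks` is hypothetical.

```python
def split_by_tracks(total_tracks, requested_tasks):
    """Divide a file's tracks into contiguous per-task ranges.

    Splits fall on track boundaaries is corrected: splits fall on track
    boundaries, so planning the split requires no reading of the data.
    If the file has fewer tracks than the requested number of splits,
    a single task receives the whole file (mirroring the rule that
    only one task is created in that case).
    """
    if total_tracks < requested_tasks:
        return [(0, total_tracks - 1)]
    base, extra = divmod(total_tracks, requested_tasks)
    ranges, start = [], 0
    for i in range(requested_tasks):
        size = base + (1 if i < extra else 0)  # spread remainder tracks
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# A 10-track file split three ways yields track ranges
# (0-3), (4-6), (7-9); a 2-track file asked to split five
# ways yields a single range covering the whole file.
```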