Overview of Job Optimizer Pipes implementation

Pipes provide a way to transfer data between applications without using external storage, allowing applications to execute in parallel.

Data is usually transferred between applications by using sequential data sets. By using this method, an application that reads the data cannot begin until the application that writes the data has finished execution. Pipes can be used whenever sequential data sets are used. By using pipes, applications can run in parallel and each data block can be read immediately after it is written.

An application that accesses a pipe is called a pipe participant. A pipe participant that writes to a pipe is called a writer; a pipe participant that reads from a pipe is called a reader. An application can read from more than one pipe and can write to more than one pipe at the same time.

The following types of pipes are supported:

  • Job-to-job pipes

    These pipes are established between batch jobs that are running in parallel by using Pipe Rules or the SUBSYS JCL parameter.
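    As an illustration only, a job-to-job pipe requested through the SUBSYS JCL parameter might look like the following sketch. The subsystem name (PIPE), data set name, and DCB attributes are assumptions for this example, not values taken from the product documentation; consult your installation's pipe subsystem name and Pipe Rules before coding this.

    ```
    //* Writer job: the DD routes output through the pipe subsystem
    //* instead of allocating a sequential data set on external storage.
    //WRITEJOB JOB ...
    //STEP1    EXEC PGM=EXTRACT
    //OUTDD    DD DSN=MY.PIPE.DATA,
    //            SUBSYS=PIPE,                <-- assumed subsystem name
    //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
    //*
    //* Reader job: runs in parallel and reads each block as soon as
    //* the writer has written it.
    //READJOB  JOB ...
    //STEP1    EXEC PGM=REPORT
    //INDD     DD DSN=MY.PIPE.DATA,
    //            SUBSYS=PIPE,                <-- assumed subsystem name
    //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
    ```

    Because both jobs reference the same pipe, they must be active at the same time; a reader that opens the pipe before any writer exists typically waits rather than failing.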

  • Step-to-step pipes

    These pipes are established by Job Optimizer between steps that are running in parallel. You cannot control these pipes.

  • Pipes that are compatible with IBM BatchPipes

    These pipes are established between batch jobs that are running in parallel by using BatchPipes subsystem parameters.
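    For comparison, a BatchPipes-compatible pipe is requested with the BatchPipes form of the SUBSYS parameter, in which the first subparameter is the BatchPipes subsystem name. The subsystem name (BP01) and data set name below are assumptions for this sketch; your installation's subsystem name may differ.

    ```
    //* Writer DD using BatchPipes-style subsystem parameters
    //OUTDD    DD DSN=MY.PIPE.DATA,
    //            SUBSYS=(BP01),              <-- assumed BatchPipes subsystem
    //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
    ```

    The reader job codes a matching DD with the same data set name and subsystem, and the two jobs run in parallel.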

More than one type of pipe can be implemented for the same job. For example, the first step of a job might share a job-to-job pipe with another job, while the second and third steps of that job might be run in parallel by Job Optimizer and share a step-to-step pipe.

Note

Implementation considerations in this section are relevant to job-to-job pipes only.
