
Code Pipeline System Description


This section describes the general underlying structure of the Code Pipeline system, including its components and the data it generates.

Topics Covered

  • Code Pipeline system structure
  • How Code Pipeline relates to your working environment
  • How the various Code Pipeline components are organized

Prerequisite Knowledge

Since this section looks at the general structure behind the scenes, it is assumed that you have already read the Code Pipeline User Guide or are familiar with the basic functionality of Code Pipeline and its overall structure. This section is important for understanding the rest of this reference guide.

Code Pipeline System Structure

A Code Pipeline Environment consists of an unlimited number of client processes communicating with a single server process. It is composed of the following main types of components:

  • ISPF/TSO Client
  • Started Tasks
  • Db2-based Metadata Repository
  • ISPF Data Tables
  • Connection Definition
  • Code Repository Datasets
  • External Call Interface (ECI)
  • Web Services

The parameters kept in the metadata repository and data tables control Code Pipeline processing. Some of these parameters are set up at installation time. Other parameters are filled in as Administrators and Users work with Code Pipeline.

ISPF/TSO Client

Code Pipeline uses ISPF as its dialog manager. An ISPF dialog consists of REXX execs, CLISTs, Panels, Messages, Skeletons, and Programs.

Started Tasks

Code Pipeline uses started tasks to provide:

  • A secure server that controls all Code Pipeline processing and connects to the Db2-based metadata repository,
  • A secure means of communication across z/OS LPARs to a single server, and
  • Controlled and secure component processing.

Started Task Details

CM

  • For each Code Pipeline system (that is, each separate Db2 Repository), a single CM task is required.
  • The CM task controls Code Pipeline security and interfaces with the site SAF product (for example, RACF, ACF2, or Top Secret). The started task is a multi-user address space. With Top Secret, multi-user address spaces should be assigned to a FACILITY, so define a FACILITY for Code Pipeline.
  • The CM task is the only task that communicates directly with the Db2-based repository.
  • CM issues z/OS START commands to initiate SX tasks as work needs to be performed (see the sketch after this list).
  • The STEPLIB library must be z/OS APF-authorized.
  • CM uses Language Environment time functions; therefore, when the time changes for Daylight Saving Time or other reasons, CM must be restarted.
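
For illustration, the START command that CM issues for an SX task might look like the following operator command. The procedure name ISPWSX and the Runtime Configuration name PROD1 are hypothetical placeholders, not values taken from the product:

   S ISPWSX,RTCONF=PROD1

The RTCONF value selects the Runtime Configuration described under the SX task below.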

CI

  • The CI task acts as an intermediary between Code Pipeline clients (for example, TSO users, SX tasks, etc.) and the CM task. Multiple CI tasks can be configured to communicate with a single CM.
  • The CI task requires no allocations or access to any datasets other than its runtime library.
  • Each CI task must define a z/OS subsystem ID for its use of cross-memory services (see the sketch after this list). The default name for this subsystem is “ISPW”, but a different value may be used if that name conflicts with an existing started task or subsystem. Contact BMC Support if required.
  • One CI task is required for each z/OS system image on which there are Code Pipeline TSO users (or SX tasks). Code Pipeline can be implemented with users across different z/OS LPARs.
  • The CI task must run from a z/OS APF-authorized library. (Authorization is required to use the z/OS cross-memory services, which are used for communication with the Code Pipeline clients.)
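
If your site defines the CI subsystem ID statically, the definition might look like the following IEFSSNxx PARMLIB entry. Whether the CI task instead defines the subsystem dynamically at startup is installation-specific, so treat this as a sketch only:

   SUBSYS SUBNAME(ISPW)     /* in SYS1.PARMLIB(IEFSSNxx) */

A subsystem can also be added without an IPL using the standard z/OS operator command SETSSI ADD,SUBNAME=ISPW.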

CT

  • There will typically be one Component Transport (CT) task running on the same system as CM. It runs continuously and communicates with CM via a TCP/IP connection. It may be started before or after the CM task.
  • If Code Pipeline Deploy is used for deployment of z/OS objects, additional CT tasks may be defined on other z/OS LPARs.
  • The CT task manages access to the Component Warehouse. It can copy components into and out of the warehouse datasets and dynamically creates additional warehouse datasets as needed.
  • A single CT task can manage multiple warehouses, and there can be multiple CT tasks.
  • The CT task must run from a z/OS APF-authorized library. (Authorization is required to use the SAF interface, and also if the DFHSM interface is enabled.)

SX

  • The SX Task is the Set Execution Processor and is started by the CM task for Set processing.
  • Once started, SX operates as a TSO Code Pipeline client, communicating with CM through CI.
  • SX uses the RTCONF parameter on the START command to select the Runtime Configuration, which allocates the appropriate datasets and specifies the Cross Memory ID to use to communicate with CI.
  • The SX ProcName is specified to CM using a PARMLIB input statement (see the sketch after this list).
  • SX moves components between Application libraries and needs appropriate authority within your security product to do so.
  • Because SX submits controlled compile jobs under its own authority, it requires TSO SUBMIT authority. For the same reason, SX requires JCL authority under the TSOAUTH resource class.
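
As a sketch of the PARMLIB input statement that names the SX procedure, the statement might look like the following. The keyword SXPROC and the value ISPWSX are hypothetical placeholders; the actual keyword is defined in the product installation documentation:

   SXPROC=ISPWSX     /* hypothetical keyword naming the SX started-task procedure */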

FX

  • The FX Task is the Custom Exit Processor and is started by the CM task for Custom Exit processing.
  • Once started, FX operates as a TSO Code Pipeline client, communicating with CM through CI.
  • The FX procedure name is specified to CM using a PARMLIB input statement.
  • FX moves components between Application libraries and needs appropriate authority within your security product to do so.
  • The FX task requires JCL authority under the TSOAUTH resource class; it requires the same authority as the SX Processor.

EF

  • The EF Task is the Code Pipeline External Function Processor and is started by the CT task to handle parsing of source (the Parse on Save feature). Additional functionality will be added in the future.
  • EF communicates with CT to receive the source to parse, and with CM through CI to store the parse results. EF cannot be used to parse source if the Code Pipeline DNA Facility is in use.
  • The EF procedure name is specified to CT using a PARMLIB input statement.

Db2-based Metadata Repository

Code Pipeline stores the majority of its reference data and component information in Db2. The server (CM) started task is the only task that connects to this repository.

ISPF Data Tables

Code Pipeline uses two ISPF table libraries, defined as follows (see the sketch after this list):

  • M – this Tables library contains models and some reference data. Only Code Pipeline support staff require update access to these tables; all other Users require read-only access.
  • O – this Tables library contains information that must be updated as Users work in Code Pipeline. Everyone who works in Code Pipeline requires update access to these tables.
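
One plausible mapping of these two libraries onto the standard ISPF table DDs, assuming the O tables are both read and updated while the M tables are read-only (all dataset names are hypothetical):

   //ISPTLIB  DD DISP=SHR,DSN=ISPW.SITE.OTABLES    table input: O tables searched first
   //         DD DISP=SHR,DSN=ISPW.SITE.MTABLES    table input: M tables (read-only)
   //ISPTABL  DD DISP=SHR,DSN=ISPW.SITE.OTABLES    table output: updates written to O tables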

Code Repository Datasets

Code Pipeline stores historical versions of components in its own repository datasets. An unlimited number of component versions can be stored.

Connection Definition

A connection definition, in the form of a Runtime Config entry, specifies the parameters that a Code Pipeline Client needs to connect to the Code Pipeline Server.

External Call Interface

Code Pipeline has an External Call Interface (ECI) that can be used to perform a limited set of Code Pipeline functions via batch processes.
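
A minimal batch job sketch for calling the ECI follows. The program name ISPWECI, the load library, and the SYSIN request format are hypothetical placeholders; the actual names and request syntax are documented with the ECI itself:

   //ECIJOB   JOB ...
   //ECISTEP  EXEC PGM=ISPWECI                 hypothetical ECI program name
   //STEPLIB  DD DISP=SHR,DSN=ISPW.LOADLIB     Code Pipeline runtime library
   //SYSIN    DD *
     (ECI function requests go here)
   /*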

Web Services

Code Pipeline has a J2EE 1.2 standard interface that enables various functions, such as Approvals, to be performed via a browser.

Code Pipeline Structure Diagram

The following figure shows the overall structure.

[Figure: Code Pipeline Structure]

Multiple Code Pipeline Environments

Having More Than One Code Pipeline

The data repository used to manage Code Pipeline is determined by the option chosen from your ISPF/PDF menu. You can have several sets of data repositories if you need independent operating environments. For example, a site would typically have one system for production use, where all users normally work; other environments could be used for staff training or for testing new Code Pipeline versions.

The following figure represents multiple Code Pipeline environments.

[Figure: Multiple Code Pipeline Environments]

Dataset Allocations for Code Pipeline Clients

The TSO/ISPF Client consists of site-specific and Code Pipeline base datasets. The site-specific datasets should contain any Code Pipeline component that is modified and any new components created by your site.

The ISPF Skeleton WZU@JOB is an example of a Code Pipeline component that must be modified. Compile Skeletons and special technology CLISTs are components created by each site. Although few Code Pipeline components require modification, those that do must be copied to your site-specific datasets and modified there, so that changes relative to the Code Pipeline base system can be easily identified.

Allocations Definition Macro

The Allocations Definition Macro defines the dataset allocations for Code Pipeline clients. All clients now use the load module produced by assembling and link-editing this macro for their dataset allocations. This means that any Set or Deploy process (and even a Code Pipeline step in a submitted job) honors the allocations that the Code Pipeline client was started with.

Concatenation of Datasets

All the site-specific datasets are allocated in front of the Code Pipeline base datasets. The dialog can then be thought of as split into two adjoining pieces, as shown in the following figure; a JCL sketch of the concatenation follows the figure.

[Figure: Concatenation of Datasets]
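
As an illustration, the panel and skeleton concatenations in a TSO logon procedure might look like the following, with the site-specific datasets allocated in front of the base datasets (all dataset names are hypothetical):

   //ISPPLIB  DD DISP=SHR,DSN=SITE.ISPW.PANELS     site-specific panels
   //         DD DISP=SHR,DSN=ISPW.BASE.PANELS     Code Pipeline base panels
   //ISPSLIB  DD DISP=SHR,DSN=SITE.ISPW.SKELS      site-specific skeletons (e.g., WZU@JOB)
   //         DD DISP=SHR,DSN=ISPW.BASE.SKELS      Code Pipeline base skeletons

Because ISPF searches a concatenation front to back, a member in a site-specific dataset always overrides a base member of the same name.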

Site-Specific Datasets

Code Pipeline is used to maintain the site-specific customization, so a development life cycle is set up for the site modifications. The simplest example of this life cycle consists of TEST, HOLD, and PROD datasets, which can be concatenated in front of the base datasets for the reasons shown below.

Site-Specific Datasets

Each entry shows a concatenation order (front to back), followed by the reason for using it:

  • PROD SITE → CODE PIPELINE BASE
    Day-to-day production use of Code Pipeline.

  • HOLD SITE → PROD SITE → CODE PIPELINE BASE
    Acceptance testing of a new Code Pipeline feature. The updated modules would exist in the HOLD datasets. Users doing acceptance testing would change their allocations so that they may work with the updates (see the sketch after this list).

  • TEST SITE → HOLD SITE → PROD SITE → CODE PIPELINE BASE
    Testing of a new feature or release by the Code Pipeline support person. The updated modules would exist in the TEST datasets so that only Code Pipeline support would be affected by the change.
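
A minimal REXX sketch of how an acceptance tester might switch a session to the HOLD level using the standard ISPF LIBDEF service; all dataset names are hypothetical, and a corresponding LIBDEF would be issued for each library type in use (skeletons, messages, and so on):

   /* REXX - allocate HOLD in front of PROD and base (names illustrative) */
   ADDRESS ISPEXEC
   "LIBDEF ISPPLIB DATASET ID('SITE.HOLD.PANELS',",
      "'SITE.PROD.PANELS',",
      "'ISPW.BASE.PANELS') STACK"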

Updating the Code Pipeline Base Datasets

Each site should have its own procedures for maintaining system software, and the Code Pipeline base datasets should be maintained under those same procedures.

Invocation of Code Pipeline

Code Pipeline is invoked as an option on the ISPF menu (see the sketch below). As part of the call to Code Pipeline, a parameter is passed that determines the allocations definition to be used. Different Code Pipeline options can be used to control which allocations definition is passed, so that the correct connection definition is allocated for each Code Pipeline environment.
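
As a sketch of how an ISPF primary option menu might pass that parameter, its )PROC section could contain an entry like the one below. The exec name CODEPIPE, the application ID ISPW, and the parameter PROD (which selects the allocations definition) are all hypothetical placeholders:

   )PROC
     &ZSEL = TRANS( TRUNC (&ZCMD,'.')
                    ...
                    CP,'CMD(%CODEPIPE PROD) NEWAPPL(ISPW)'
                    ...
                    *,'?' )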
