CICS Considerations


IAM and CICS

IAM is designed to provide a transparent high performance disk file manager alternative for VSAM KSDS, ESDS, AIX and RRDS files. IAM can eliminate VSAM performance bottlenecks and reduce VSAM file size (DASD footprint) by as much as 50%. This makes IAM an important performance enhancement option within an online transaction processing environment such as within CICS.

In a CICS environment, IAM improves transaction response time by reducing the time taken for access to VSAM files. Files, including alternative indexes and paths, that have been converted from VSAM to IAM may be used by a CICS TS region without any changes to the CICS startup JCL, to the RDO file definitions within CICS, to the CICS start up parameters, or to any of the IAM default global options.

To realize the improvement that IAM provides and to ensure optimal performance, there are several areas within CICS that need to be taken into consideration when implementing IAM files, as listed below.

The primary areas that are addressed in this section include:

  • IAM VSAM Interface (VIF) start order
  • IAM Global Option BELOWPOOL
  • CICS System Initialization Table CILOCK parameter
  • IAM Statistics and Performance Reports - IAMINFO
  • Effective use of IAM Override ACCESS Statements within CICS
  • IAM DEFERWRITE Override option
  • CICS and IAM Buffer Management and Storage Utilization
  • CICS and IAM use of VSAM Strings
  • CICS NSR and LSR specifications
  • CICS and IAM with the BUFND (or DATABUFFERS in RDO) setting
  • IAM index storage
  • IAM Dynamic Data Space
  • IAM and 64 Bit Above the Bar CACHE64 storage option
  • IAM with the CICS RDO FIRSTREF FILE Open option
  • CICS Concurrent VSAM sub-tasking option
  • CICS Open Transaction Environment (OTE) THREADSAFE/OPENAPI options
  • IAM with the CICS RDO RLSACCESS Option
  • CICS Shutdown processing and IAM file closings

IAM VSAM INTERFACE (VIF)

The IAM VSAM Interface (VIF) for the level of IAM that CICS will use must be active when CICS is started. If a CICS region that accesses IAM files is started before the VIF, that region must be recycled before it will be able to process IAM files.

BELOWPOOL

The IAM Global Option Table BELOWPOOL option controls whether IAM uses the below the line storage pools for I/O control blocks within a CICS or an IAM RLS address space. The default for this option is BELOWPOOL=YES. The use of the below the line storage pools substantially reduces IAM’s use of 24-bit addressable storage. If the default is changed or overridden to BELOWPOOL=NO, then IAM will use a minimum of 4K of below the line storage for each open IAM file. Hence, within a CICS or an IAM RLS region that has many IAM files opened concurrently, leaving this option set to BELOWPOOL=YES will reduce the below the line storage requirements by at least 50% when several IAM files are opened.

CILOCK

The CICS System Initialization Table (SIT) parameter CILOCK defaults to NO. This setting is intended to reduce the number of VSAM exclusive control conflicts when reading VSAM files for update. It does so by using extra CPU cycles and doing extra I/O. IAM locks at the record level, not at the control interval level, thereby eliminating this delay except when concurrent transactions are attempting to update the same records. Therefore, the CILOCK parameter should be set to YES when most or all of the VSAM files have been converted to IAM.

If CICS is running with the default CILOCK value of NO and IAM files are being shared with IAM/PLEX or IAM/RLS, then it is critical to specify a sufficient value for the VSAM STRINGS. Failure to do so can result in a CICS deadlock if a wait-for-strings condition occurs. See the section below on VSAM Strings for additional information.
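The following is a minimal sketch of supplying this setting as a SIT override, for example in the SYSIN data set referenced by the CICS startup JCL; any other overrides the region needs would appear alongside it:

CILOCK=YES

As with any SIT change, verify the setting against the needs of any remaining native VSAM files in the region before making it permanent.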

IAMINFO Reporting

The IAMINFO DD card can be added to the JCL for any CICS region that uses IAM files. The output class should be a held class that is deleted once it is a month old. This ensures that a history of the activity against IAM files is available should any performance issues arise. This output should be scanned regularly for informational and warning messages. An example of the DD card follows:

//IAMINFO DD SYSOUT=?

Where “?” is any valid JES output class

Adding an IAMINFO DD card to the CICS JCL is normally recommended so that IAMINFO reports are produced for each IAM file accessed from CICS. However, if you have a large number of IAM files, it is recommended instead that IAMINFO reports be generated by using the IAM SMF option. This avoids having IAM’s generation of the IAMINFO reports lengthen the time required to perform an expedient CICS shutdown.

Generating a large number of IAMINFO reports to the IAMINFO DD at CICS shutdown can delay CICS shutdown processing while the reports are formatted and written. Changes have been made in IAM Version 9.0 to reduce this delay significantly, but it is not completely eliminated. Using the IAM SMF option only requires the collection of IAM SMF data records, which has minimal overhead. It is from the individual IAMINFO reports for each IAM file that one can determine the buffer and storage utilization during a specific CICS run. Note that an IAMINFO report (or an IAM SMF record) is generated each time an IAM file is closed. There can be several individual reports (or IAM SMF records) if an IAM file is opened and closed multiple times during the lifetime of a CICS region.

Instructions and examples on using the IAMSMF program to produce the IAMINFO reports from IAM SMF records are in IAMINFO Reports and IAMSMF - IAMINFO Command of this space. Make sure that your procedures for offloading the SMF records include the SMF record number selected for the IAM records.
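For instance, if SMF records are offloaded with the IFASMFDP dump program, the TYPE selection must include the IAM record number along with whatever other record types the installation keeps. The sketch below assumes, purely for illustration, that record type 201 was chosen for IAM and that DUMPIN/DUMPOUT are the input and output DD names of the dump job:

  INDD(DUMPIN,OPTIONS(DUMP))
  OUTDD(DUMPOUT,TYPE(201))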

IAM Overrides

IAM overrides can be used with CICS regions. To do so, an IAMOVRID DD card is added to the JCL for the CICS regions that are going to have IAM overrides. No FCT entry is required for the IAMOVRID file. The IAMOVRID DD should specify a physical sequential card image file or a card image member in a partitioned data set. For example:

//IAMOVRID DD DISP=SHR,DSN=my.cics.iam.override(cics123)

With IAM Version 9.2 and above, the overrides are automatically re-read for each IAM file open, so the REREAD keyword is no longer necessary.

Some examples of IAM override cards that could be in the IAMOVRID data set are:

ACCESS DD=file1,BUFSPACE=2048
ACCESS DD=file2,BUFSPACE=32768,BUF64=YES
ACCESS DD=file2,DYNDS=1024

DEFERWRITE Override

In a CICS region, IAM will immediately write out any block that contains a randomly updated record. The effect of this is that each update, insert, or delete record request will generally cause one physical write of a block to the data set. The exceptions are when a mass delete of records is performed, or when the writes are performed with MASSINSERT specified, as those are sequential requests. Sequential updates are generally deferred.

If the data contained within the data set is not highly critical or time sensitive, then performance may be improved by specifying an override (in the IAMOVRID DD) for those specific IAM files with the parameter DEFERWRITE=YES. The following is an example (using the abbreviated “DEFERW” for the DEFERWRITE keyword) of this parameter:

ACCESS DSN=prod.$iam.trans.log.file,DEFERW=YES

This will cause IAM to delay the writes until the buffer is needed for a different block. It is important to note the data integrity implication of specifying DEFERWRITE=YES. When the writing of updated data blocks to DASD is deferred, it is possible for the updated records not to be hardened to DASD for an extended, indeterminate period of time. The cautionary note is that if the system crashes, or the job is forced out, then IAM is precluded from properly closing the file (flushing the buffers to DASD). The result is the potential for data (added, updated, or deleted records in the blocks not written out) to be lost. The records could potentially be recovered if they are also being written out to a journal.

As such, DEFERWRITE=YES should be used with caution in an online environment such as CICS.

Deferred Write Buffer Flush Process

This facility supports the use of the deferred writes described above by periodically writing out all updated buffers. This may enable the use of deferred writes for more data sets than would normally be considered, because it ensures that blocks are written out to disk in a timely manner with reduced risk of losing updated data. IAM files that have been set up with DEFERWRITE=YES cannot be discarded from CICS with the CEMT I FILE() command because these files are considered in use.

The IAM CICS Deferred Write Buffer Flush Facility sets up a background CICS task that causes IAM to write all DEFERWRITE=YES updated file buffers to DASD every 30 seconds. Therefore, consideration can be given to using DEFERWRITE=YES for files used in a CICS region if a 30 second delay in the writing of an updated buffer would not cause application problems in the event that CICS had an unscheduled outage and some updates within the 30 seconds prior to the outage were lost.

To enable the IAM CICS Deferred Write Buffer Flush Facility take the following steps:

  1. Define the facility’s programs and transactions to CICS. The CICS CSD update utility, DFHCSDUP, input statements can be found in the IAM ICL library member IAMBFFL.
  2. Define which CICS files will use DEFERWRITE=YES. Update your CICS region JCL to include an IAMOVRID DD statement that points at a file with IAM Access Override statements, if such a DD statement is not already present. Include statements such as:

    ACCESS DD=KSDS1,DEFERWRITE=YES
  3. You start the CICS Deferred Write Buffer Flush Facility with the IASF transaction. Alternatively you can add program IAMSBFFL to your CICS startup PLT. Typically you would use the IASF transaction during the early stages of testing this facility and later you would add IAMSBFFL to the startup PLT. IAMSBFFL needs to run in the third stage of CICS initialization and so a sample startup PLT would look like this:

    DFHPLT TYPE=INITIAL,SUFFIX=SB
    DFHPLT TYPE=ENTRY,PROGRAM=DFHDELIM
    DFHPLT TYPE=ENTRY,PROGRAM=IAMSBFFL
    DFHPLT TYPE=FINAL
    END
  4. You stop the CICS Deferred Write Buffer Flush Facility, either to end testing or to prepare for CICS shutdown, by using the IAPF transaction. Alternatively, you can add program IAMPBFFL to your CICS shutdown PLT. Here is a sample shutdown PLT that includes the IAMPBFFL program:

    DFHPLT TYPE=INITIAL,SUFFIX=PB
    DFHPLT TYPE=ENTRY,PROGRAM=IAMPBFFL
    DFHPLT TYPE=ENTRY,PROGRAM=DFHDELIM
    DFHPLT TYPE=FINAL
    END

The IAMBFFL startup and background programs will output some messages to the CICS MSGUSR log to provide status. You will see messages such as the following:

IAMSBFFL  I nn/nn/yyyy hh:mm:ss IAM DEFERWRITE=YES Buffer Flush
Transaction/Program has been initiated

DFHSI8431I nn/nn/yyyy hh:mm:ss CICSBIAM PLT program IAMSBFFL has been invoked
during the third stage of initialization.

IAMBFFL   I nn/nn/yyyy hh:mm:ss Program to issue periodic IAM DEFERWRITE=YES file
buffer flushes is now running

IAMBFFL   I nn/nn/yyyy hh:mm:ss KSDS7    - IAM DEFERWRITE=YES File Detected -
Periodic Buffer Flushes Have Begun

Every 30 minutes, IAMBFFL will output these two heartbeat messages to let you know that all is well with the facility.

IAMBFFL   I nn/nn/yyyy hh:mm:ss IAM DEFERWRITE=YES Buffer Flush Program executing
normally

IAMBFFL   I nn/nn/yyyy hh:mm:ss Max No. of DEFERWRITE Files that had Buffer Flush
Requests issued since last report-    4

As a part of CICS shutdown, you will see the following messages:

DFHTM1718I nn/nn/yyyy 15:20:32 CICSBIAM About to link to user PLT program IAMPBFFL
during the first stage of shutdown.

IAMPBFFL  I nn/nn/yyyy 15:20:32 IAMBFFL Shutdown begun - Forcing last wakeup
RESP=00000000 RESP2=00000000

Buffer Management

Buffer management for IAM is performed outside of the sphere of both VSAM and CICS. This allows IAM to dynamically adjust its buffer management techniques based on the actual type of I/O being performed by the application, eliminating the tuning decisions of NSR versus LSR and the number of index buffers. There are no index component buffers used by IAM, as the index is always read into virtual storage when the file is opened. IAM’s Real Time Tuning will automatically adjust the number of buffers being used by a data set as needed, based on default limits set by the installation (in the IAM Global Options Table), or based on any overrides provided in the “//IAMOVRID DD” file.

The IAM CICSBUF64 option controls whether 64-bit addressable virtual storage is used for IAM data buffers in CICS address spaces. If the option is enabled, 64-bit storage is used; if it is disabled, 31-bit storage is used. The default is Enabled.

The IAM CICSBUFSP=nnnnn parameter defines the default maximum amount of storage, in kilobyte increments, that IAM is to use within a CICS region for allocation of its buffers per opened IAM file. The default value is 16384 (16 megabytes). As CICS regions vary considerably, customers should carefully evaluate this default value and revise it higher or lower as appropriate for their environment and requirements. If the CICSBUF64 option is overridden and set to Disabled, then the CICSBUFSP parameter should be lowered to its pre-IAM 9.4 default of 1024, to whichever value was used for prior IAM releases, or to a value that makes sense for your environment, knowing that only 31-bit virtual storage is available.

When a lower value is specified for CICSBUFSP, customers will have to provide individual IAM overrides for the busiest files so that a sufficient number of buffers is available to meet the needs of the CICS applications accessing them. The default value for CICSBUFSP is set to allow sufficient buffers for the majority of files, which reduces the number of files that would need specific IAM buffering overrides and thus makes the IAM product easier to use.

To initially estimate the maximum storage that could be used by IAM within a CICS region for buffers, simply multiply the CICSBUFSP value by the anticipated number of concurrently open IAM data sets. This is the maximum amount of storage that IAM will utilize for buffers in the region. This must not be allowed to constrain the CICS region’s overall utilization of storage, hence it is imperative that file activity and storage utilization be carefully monitored on a regular and ongoing basis.
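For example, assuming a region with 100 concurrently open IAM files and the default CICSBUFSP of 16384 (16 megabytes per file):

16384K x 100 open IAM files = approximately 1.6 gigabytes of buffer storage (maximum)

This figure is an upper bound; IAM’s Real Time Tuning only acquires buffers as file activity actually requires them, and with CICSBUF64 enabled the buffers come from 64-bit storage rather than from the 31-bit region.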

An appropriate default value for any particular CICS region will vary depending on the number of IAM files within the region, the virtual storage limitations within the region, the REGION value specified on the JCL, the volume and type (READs, WRITEs, etc.) of IAM file I/O activity, and the response time objectives of the CICS region. It is often more efficient to specify individual BUFSP or MAXBUFNO override values for selected high activity IAM files and reduce the default CICSBUFSP value for the remaining files to a lower amount. The following is an example of specifying the BUFSPACE parameter for a single data set (using the abbreviated BUFSP keyword) and MAXBUFNO for a different data set (using the abbreviated MAXB keyword):

ACCESS DSN=PROD.$IAM.TRANS.LOG.FILE,BUFSP=1024
ACCESS DSN=PROD.$IAM.MSTR.ACCTS.FILE,MAXB=2048

VSAM Strings (STRNO)

The recommendation for the number of strings is the same as it is for VSAM. Basically, the value to specify is the maximum number of concurrent transactions that are active and browsing or updating a particular data set. It is also advisable to increase this value by a few extra strings (according to CICS documentation, 20%) to accommodate any general random reads that could be concurrently active against the data set, to avoid string waits.

IAM behaves the same way VSAM does regarding strings. A string holds the position in a file as long as a position type request is issued. CICS monitors string usage (either at the file level for non-LSR, or at the LSR buffer pool level), and if all of the strings (place holders) are in use, CICS puts the requesting transaction in a string wait, until a place holder becomes available. For non-LSR this is done to prevent VSAM from acquiring more storage for strings, and for LSR it is done to prevent VSAM from failing the request due to insufficient strings.

If an installation does not specify an adequate number of strings to support its concurrent I/O activity, then string waits can occur, resulting in a negative impact on response times. The vast majority of string waits occur because an installation has specified an inadequate number of strings to handle its processing demands. Things that affect the number of strings needed include the I/O device response time, the amount of buffering being done (more buffers reduce physical I/O, making I/O requests complete faster and hence normally reducing the need for strings), and the amount of browsing or updating being done to a file.

String numbers are not all that important to IAM itself, because IAM will acquire additional strings as needed. However, CICS monitors string utilization and will internally delay I/O to a file if CICS believes that all the strings are in use. So proper setting of STRNO is important to prevent CICS from delaying I/O requests.
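As a sketch, the STRINGS value is set on the file definition, for example with DFHCSDUP; the file name ACCTFIL and group name IAMGRP are illustrative, the data set name is reused from the earlier override examples, and STRINGS(8) is only an example value to be set according to the guidance above:

DEFINE FILE(ACCTFIL) GROUP(IAMGRP)
       DSNAME(PROD.$IAM.MSTR.ACCTS.FILE) STRINGS(8)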

Some additional notes on Strings:

  1. When using IAM/PLEX or IAM/RLS and the default CILOCK value of NO, it is critical that a sufficient number of strings be specified. With CILOCK=NO, CICS will cause a transaction that is holding a record lock to release a string. If other transactions require the same record lock, they will go into a wait for the record lock while holding their string. If no strings are available when the owning transaction is ready to update the record, then a deadlock condition occurs. The recommendation is to run with a CILOCK value of YES or provide a STRINGS value that exceeds the maximum task value to make sure there will never be a string wait.
  2. CICS will reserve a proportion (20%) of the STRINGS value to be used for read-only requests. Hence, tasks in CICS may go into a string wait situation while the total tasks currently active against a file is less than the total number of STRINGS specified.
  3. In setting the STRINGS value for an ESDS file, consider that if the ESDS is a log file to which records are only added (WRITE I/O operations), then set STRINGS to 1 (one). A STRINGS value greater than 1 can lead to exclusive control conflicts when other tasks attempt to write to the ESDS file at the same time. A STRINGS value of 1 (one) will preclude this because CICS (via the CICS dispatcher) will enforce serialization and single threading of activity to the file.

    For IAM ESDS files, if the order of records from concurrently running transactions can be intermixed, consider setting the IAM Global Option DISABLE=ESDSLOCK and using a higher value for the number of STRINGS. These changes will further enhance the performance of transactions using IAM ESDS files.
  4. The use of LSR buffering has an impact on how CICS monitors STRINGS. The effect is that there are two limits on strings: one for the individual file and the other for the total LSR pool to which the file is defined. In general, the actual number of strings acquired by CICS for an LSR pool will be less than the cumulative total for all of the data sets in that pool, unless STRINGS is explicitly specified with a larger value on the LSR pool definition. In a particular LSR pool, one could run out of strings before the actual limit for any particular file in the pool is reached. This is an arbitrary CICS mechanism for controlling tasks and file access within CICS, not a VSAM or IAM attribute. While IAM does not use any of the strings for the LSR pool, CICS thinks that it does and includes IAM files in the string usage for an LSR pool.

NSR or LSR

IAM’s Real Time Tuning buffer management techniques are automatically used for all IAM files, so the specification of the type of buffering has more of an impact on how CICS is interfacing with IAM than with any actual influence on IAM buffers or buffer management. The primary difference is how asynchronous processing is achieved, such that the access method will not go into a wait.

With NSR, the typical asynchronous mode is used where CICS will issue WAITs completely under its control, and when the I/O request is posted complete, CICS will then issue the CHECK macro. Another distinction of NSR buffering is that the buffers are actually obtained by VSAM itself. So when IAM is being used, IAM will acquire the buffers and there are no VSAM buffers obtained.

With LSR buffering, asynchronous processing is achieved through use of the VSAM UPAD exit, in which whenever the access method (IAM or VSAM) needs to wait, it gives control back to CICS through an exit mechanism. The UPAD exit mechanism provides CICS with full control over when VSAM and IAM processing actually occurs and is more efficient than the typical asynchronous processing. With LSR buffering, CICS has to invoke a VSAM service to build the buffer pool. IAM does not utilize any of the buffers in the LSR pool, so the downside is that if LSR buffering is specified there may be some underutilized buffers using up valuable virtual storage.

The use of DTIMEOUT, FORCEPURGE, KILL, or other methods of eliminating a transaction while it is performing I/O to an IAM file that is using the LSR pool can result in some or all subsequent I/O requests hanging. This is due to the UPAD interface CICS uses to achieve asynchronous I/O processing with LSR buffers. With the UPAD, IAM (and VSAM) depend on CICS to return control to the access method code to complete I/O requests. If a transaction is eliminated by one of the above actions while an I/O request is active, then resources used for that I/O request may not be released, resulting in subsequent I/O requests waiting for that resource. This can end up stalling all I/O to that data set because all of the available strings are used by I/O requests waiting for the held resource. We expect that VSAM data sets are susceptible to similar types of problems. A circumstance where I/O might be delayed is when IAM needs to get more DASD space and another volume is needed.

To prevent this problem from occurring, users are advised to use NSR buffers, particularly if DTIMEOUT has been specified. Alternatively, users could increase the deadlock timeout (DTIMEOUT) value if they are using that function.

The use of NSR buffers under CICS relies on the standard I/O processing protocol to achieve asynchronous I/O, scheduling an asynchronous exit (IRB) to perform the processing needed to complete the request, which is not subject to CICS giving the access method code use of the processor. The use of NSR buffers will not affect the performance of IAM.

Users should keep in mind that the use of DTIMEOUT, FORCEPURGE, or KILL against a transaction is emergency recovery processing, an attempt to prevent a system outage. Use of these procedures may not be successful, or may be only partially successful, potentially leading to a system outage or partial failure of some functions.

The general recommendation when first converting from VSAM to IAM is to leave the buffering specification as it was with VSAM. Subsequently, once IAM is proven, consider revising the buffering to minimize impact on the most constrained resource. If virtual storage is constrained, then either use NSR buffering or manually reduce the number of buffers acquired by CICS for the LSR pool. If CPU time is the most constrained resource and the files are not in an FOR (file owning region), then use LSR, reducing buffers for the LSR pools as appropriate.

One other consideration is that files in Data Tables must be in the LSR pool. IAM does provide the Dynamic Data Space function, which may for some applications be a good alternative to CICS Data Tables, and it can be used with either NSR or LSR buffering.

Data Buffers (BUFND)

The BUFND parameter is usually found in JCL as an option on the AMP keyword. In general, there are no JCL changes required to access IAM files in place of VSAM, no matter whether it is a batch job or underneath a CICS region. The only required JCL parameters on the DD card for an IAM file are the DSN and DISP. Typically, CICS regions have their files defined in the CICS DFHCSD file via RDO such that VSAM or IAM files are dynamically allocated, thereby not requiring DD statements in the CICS startup JCL.

The corresponding BUFND parameter for the CICS RDO File Definition is the DATABUFFERS attribute.

Note

The INDEXBUFFERS attribute, which is equivalent to BUFNI in the AMP option of a DD JCL statement, will always be ignored by IAM. This is because IAM does not need or use index buffers.

If there are no MAXBUFNO or BUFSPACE overrides for the data set, IAM will set the maximum number of buffers (MAXBUFNO) to the value specified by BUFND (DATABUFFERS), if it is greater than the default value for maximum buffers calculated from the IAM Global Options Table.

For CICS files not in an LSR pool, the value of the DATABUFFERS attribute of a CICS File Definition in RDO will result in changing the BUFND value in the ACB that is subsequently used by VSAM.
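As a sketch (the file and group names are illustrative), the RDO equivalent of coding AMP=('BUFND=50') on a DD statement is:

DEFINE FILE(KSDS1) GROUP(IAMGRP)
       DSNAME(PROD.$IAM.TRANS.LOG.FILE) DATABUFFERS(50)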

IAM Index Storage

CICS regions, both historically and currently, are often beset with storage constraint issues. Every resource within a CICS region takes up some sort of storage “footprint”. Hence, storage within a CICS region is one critical resource that needs to be monitored attentively and periodically, as the composition of the activity within a CICS region changes over time.

The IAM product provides the means to efficiently utilize CICS storage resources for IAM files. One way of significantly reducing the IAM storage footprint within a CICS address space is to control where the index for open files is kept. The INDEXSPACE option allows large amounts of storage that might otherwise have been used within the CICS address space to contain IAM file indexes to be offloaded to separate z/OS storage areas, reducing storage contention. The index storage can come either from storage above the 31-bit addressable area, known as 64-bit addressable virtual storage, or from a z/OS data space. The area selected is called the IAM Index Space, with default values set in the IAM Global Options INDEXSPACE option or with the INDEXSPACE IAM override.

INDEXSPACE=64BIT is the default option as IAM is shipped, and it applies to CICS regions. When the first local IAM data set is opened under CICS, IAM checks whether at least 1024 megabytes of 64-bit storage is available, based on the DATASPACE Global Option value, which defaults to 1024 megabytes. If there is sufficient 64-bit storage available, then IAM will use that storage. To make sure that 64-bit addressable storage is available, users can specify a MEMLIMIT on the EXEC card, or in the active SMFPRMxx member in parmlib.
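The sketch below shows a MEMLIMIT coded directly on the CICS startup EXEC statement; the step name, SIT suffix, and the 8G value are illustrative only:

//CICS     EXEC PGM=DFHSIP,REGION=0M,MEMLIMIT=8G,PARM='SIT=6$'

Alternatively, the MEMLIMIT parameter of the active SMFPRMxx member establishes an installation-wide default for address spaces that do not specify one.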

If sufficient 64-bit virtual storage is not available, IAM will attempt to acquire a z/OS data space. The size of the data space requested for the Index Space is taken from the IAM Global Options Table, using the value specified for DATASPACE. Note that this is the same value that is used for the data space obtained for a file load. The z/OS data space that contains the Index Space is initially created to be extendable, so it can be expanded in size as needed, with the maximum size set to four times the specified DATASPACE value (up to the 2 gigabyte limit).

The default value for the data space size is 1024 megabytes. The IAMINFO report generated at file CLOSE contains information on the data space usage for the particular data set and on the total data space usage for the job (or CICS region). It is recommended that these values be monitored, in the event that the default data space size needs to be increased.

DYNDS

The IAM Dynamic Data Space (DYNDS=nnnn) feature provides additional benefit in a CICS online environment by using a z/OS data space to cache randomly accessed records from an IAM data set. This is implemented as an enhancement to the Dynamic Tabling option of IAM, with the table now contained within a separate z/OS data space. Dynamic Tabling is a special feature that keeps the most frequently referenced randomly read records in a virtual storage cache. Because Dynamic Tabling is record oriented, it is most useful when a subset of records scattered throughout the file is repeatedly referenced, potentially reducing the need for a large number of buffers. The Dynamic Data Space provides substantial benefit particularly for some online applications, by eliminating physical I/O as records are repeatedly retrieved from the table. Backing the Dynamic Table with a z/OS data space allows the table to contain a larger number of records (up to what can be contained within a maximum DYNDS=2048, which is a 2 gigabyte data space).

IAM uses algorithms to manage the storage used within the Data Space to maximize the effective utilization of the storage resources and provide efficient record search with reduced CPU overhead when inserting and removing records from the Dynamic Table in the Data Space. IAM utilizes a true least recently used algorithm for record selection when IAM is required to make room for more recently referenced records. The use of the Dynamic Data Space is on a file by file basis, with each individual data set for which the function is requested using its own data space.

The capability is easily implemented by using the IAM ACCESS override DYNDS. With the “DYNDS=” override, users simply specify the amount of storage that IAM should use for the Dynamic Data Space in megabyte increments, up to a maximum of 2048 (2 gigabytes). The virtual storage required for management of the Dynamic Data Space comes out of the data space itself, so use of the Dynamic Data Space will not impact virtual storage resources within the CICS address space.

The following is an example of using the DYNDS option for a single IAM dataset in which a data space of 1024 megabytes will be created:

ACCESS DSN=PROD.$IAM.MSTR.ACCTS,DYNDS=1024

Monitor the paging statistics for the CICS address space. The use of additional z/OS data spaces to hold records and buffers in memory comes at a cost: a potential increase in paging. Additional paging attributed to the CICS address space will slow its dispatch from a z/OS perspective, because paging operations cause a z/OS address space to stop while the page-in operation occurs. That is a potential negative impact on an online system such as CICS. Conversely, the positive impact of DYNDS is having randomly read records available in memory, which translates to reduced response times within a CICS environment.

CACHE64

The CACHE64 IAM access override option indicates that IAM should utilize 64-bit addressable storage as a cache area for the extended overflow blocks of Enhanced format IAM files. It is generally not recommended that the CACHE64 option be utilized within an online processing environment such as CICS. The CACHE64 option is better suited for batch jobs that process IAM data sets with large overflow areas, especially when they are being read sequentially. For randomly read IAM data sets, which is more typically the method of access under CICS, the potential benefit of CACHE64 is minimal. Hence, it is not generally recommended for CICS regions.

VSAM Sub-tasking

The CICS initialization parameter VSP=1 controls the attaching of a concurrent TCB within the CICS region. This concurrent TCB is utilized by CICS to offload File Control VSAM WRITE I/O requests onto a separate subtask. This is known as the “CO” (Concurrent) TCB. The “normal” or traditional CICS TCB is the “QR” (quasi-reentrant) TCB. It is on the QR TCB that the remaining I/O operations are performed.

The support for this was originally put into CICS to prevent any VSAM CI/CA split activity that may result from a WRITE I/O operation from holding up the dispatch of transactions on the QR TCB. With IAM, there is no CI/CA split activity that occurs as in traditional VSAM, hence there are no special considerations. IAM fully supports I/O requests from concurrent multiple subtasks.

Threadsafe Open API Options

The ability of CICS File Control I/O requests to be processed underneath a separate MVS TCB other than the traditional CICS “QR” (Quasi-Reentrant) TCB was introduced at the CICS TS V3.1 level, when CICS File Control was made “Threadsafe”. This allows IAM’s individual I/O requests to be executed underneath CICS open TCBs, known as L8 or L9 TCBs. In environments with transactions accessing both DB2 and VSAM (IAM), there is no need for CICS to switch the task back to the QR TCB to perform the VSAM (IAM) File Control requests. The transaction remains on the L8 or L9 OPENAPI TCB to which it was switched when the first DB2 request was issued. This reduces the instruction path by approximately 2000 instructions for each direction of a task switch within CICS (going from the single QR TCB to an L8/L9 TCB, or back from an L8/L9 TCB to the single QR TCB, costs approximately 2000 instructions per switch).

All the IAM programs (whether they are running as CICS GLUE or TRUE exits within CICS or as lower level access method code) are programmed to be “THREADSAFE” per CICS requirements.

This multiple CICS TCB sub-tasking of application programs using the OPENAPI option to enable a program to run on a non-QR TCB can cause some discrepancies in attributing and correlating I/O overhead (specifically IAM I/O response) back to a specific z/OS TCB within the CICS address space. The I/O overhead of accessing an IAM data set can largely be attributed to the QR TCB, except for the previously mentioned WRITE requests that are attributed to the CO TCB. Additionally, when IAM is running underneath CICS TS V3.2 and higher levels, where the File Control Domain VSAM activity is running as “Threadsafe”, IAM activity can run concurrently on multiple OPENAPI TCBs (L8 or L9 TCBs). The net effect is that I/O overhead (IAM I/O response) is then distributed across those OPENAPI TCBs underneath which the IAM I/O requests ran.

CICS RDO OPENTIME Parameter

Defining an IAM file to CICS, whether using CICS RDO (Resource Definition Online) or using CICS DFHCSDUP utility define statements, is the same as it is for VSAM files. However, specifying certain options with an IAM file will cause different behavior than with traditional VSAM. This includes the CICS File Definition parameter:

OPENTIME({FIRSTREF|STARTUP})

This parameter specifies when a file is to be opened in a CICS region.

Implications of OPENTIME Options

When an IAM file is opened under CICS (without IAM/RLS), IAM builds its internal index structure. This necessitates a complete scan of the file to obtain all the relevant index values. The index structure is placed in the INDEXSPACE (an IAM managed area of virtual storage, either in 64-bit addressable storage or a z/OS data space) attached to the CICS region. The result is that if STARTUP is specified for all files, CICS startup is elongated while IAM performs its index structure build. However, if FIRSTREF is specified, then this index build overhead/delay is incurred by the first transaction that accesses or references the IAM file, with minimal delay during CICS initialization.
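A sketch of a file definition using FIRSTREF (the file and group names are illustrative):

DEFINE FILE(KSDS1) GROUP(IAMGRP)
       DSNAME(PROD.$IAM.MSTR.ACCTS.FILE) OPENTIME(FIRSTREF)

With this definition the index build cost is paid by the first referencing transaction rather than during CICS initialization, as described above.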

Additionally, if IAM files are repeatedly opened, closed, and subsequently reopened within CICS, then the overhead of the index build can become noticeable as the interval between opens and closes decreases. Some packaged applications have common I/O routines that always issue EXEC CICS OPEN and CLOSE requests bracketed around the access requests (READ, WRITE, etc.) to a VSAM file. In CICS this can cause unnecessary overhead and delay when the accessed file is an IAM file, as the in-memory (or INDEXSPACE data space) internal index structure is rebuilt each time the file is opened.

CICS RDO RLSACCESS Parameter

The CICS RLSACCESS parameter must always be set to NO for IAM files, even when used with IAM/RLS and IAM/PLEX. Specification of RLSACCESS(YES) is not valid and will not work on IAM files.

CICS Shutdown

Under CICS, OPEN and CLOSE processing occurs underneath the FO (File Owning) TCB. Consequently, this can become a serially threaded bottleneck at CICS shutdown, as all the CLOSE requests are being processed. If the default CICS shutdown process is left intact, then there is a possibility that S33E abends can occur. This can happen when a "NORMAL" CICS shutdown request is issued, typically via the "CEMT PERFORM SHUTDOWN" command. The default action of this command in CICS is to cause the invocation of the IBM supplied CESD transaction.

The transaction "CESD" defaults to invoking the IBM supplied program "DFHCESD". This program is one of the “User Replaceable Modules” (URM) within the CICS TS product. The DFHCESD program is supplied in both source and executable formats. Additionally, IBM supplies alternative COBOL and PL/I sources (DFH0CESD and DFH$CESD respectively). These sources are in the CICS “SDFHSAMP” libraries. Within the programs the specified default delay between the initial CEMT PERFORM SHUTDOWN command’s invocation of this shutdown assist program DFHCESD via the CESD transaction and subsequent "re-invocation" of the same transaction/program by an internally issued START TRANSACTION(CESD) DELAY(hhmmss) command is 120 seconds (two minutes).

On re-invocation, the Shutdown Assist Program (DFHCESD, DFH$CESD, or DFH0CESD) will internally issue an "IMMEDIATE" shutdown (via an "EXEC CICS PERFORM SHUTDOWN IMMEDIATE" program interface call).

Since this CICS Shutdown Assist program is one of the "User Replaceable Modules" within CICS, it is designed to be modified by the user. The modified source program can be reassembled (or recompiled, if using the COBOL or PL/I versions) to generate a new executable module. Thus, the user can change the default DELAY value within the program to whatever value is appropriate within their CICS environment to ensure a clean and orderly shutdown.

If a CICS region has numerous VSAM and IAM files to close and is not getting sufficient z/OS dispatch or CPU cycles, it can encounter situations where a file close request does not complete on a subtask before the CICS main task unilaterally issues a z/OS DETACH of its subtasks.

In these situations an S33E abend can occur. This is often more noticeable where the subtasks are communicating with address spaces external to the CICS address space to perform a file close (such as to Transactional VSAM or IAM/RLS), or a database disconnect from IMS DBCTL, a DB2 subsystem, or another third-party database subsystem.

It is recommended that the DELAY value in one of the three sample programs be increased by an extra amount of time so that all subtasks have sufficient time to process all closes and disconnects. Subsequently, monitor the shutdown time and reduce this value incrementally to the minimum time required for a clean and orderly shutdown of the CICS regions.

IAM/PLEX & IAM/RLS

Using IAM/PLEX or IAM/RLS allows CICS online regions to concurrently share IAM files with other CICS regions and with batch jobs, with read and write integrity, requiring minimal change to CICS. The IAM file is actually owned and opened within the respective IAM/PLEX or IAM/RLS address space. IAM provides some CICS exit routines so that IAM can recognize and properly handle key points in a unit of work and transaction processing. To ensure the integrity of the IAM file, the CICS exits are activated by an IAM initialization module that is defined in the CICS PLT invoked during CICS initialization. The entries that are required are as follows:

DFHPLT TYPE=INITIAL,SUFFIX=X1
DFHPLT TYPE=ENTRY,PROGRAM=IAMXCINI
DFHPLT TYPE=FINAL
END

The use of IAM/RLS or IAM/PLEX requires a minimum level of CICS TS V5.2.

The IAM load library that contains modules IAMXCINI, IAMBCICS, and IAMXFCBO must be part of the DFHRPL concatenation for CICS. This allows the use of IAM/PLEX and IAM/RLS functionality in the CICS region. The IAMXCINI program will install and activate the IAM provided Task Related User Exit (TRUE) “IAMBCICS” and the IAM provided Global User Exit (GLUE) “IAMXFCBO” at the CICS File Control XFCBOUT exit point. The three modules IAMXCINI, IAMBCICS, and IAMXFCBO are defined to CICS in the DFHCSD with the RDO attributes of:

LANGUAGE=ASSEMBLER
RELOAD=NO
DATALOCATION=ANY
EXECKEY=CICS
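A sketch of what one of these DFHCSDUP definitions might look like (the group name IAMGRP is illustrative; use the definitions supplied or documented with IAM where provided):

DEFINE PROGRAM(IAMXCINI) GROUP(IAMGRP) LANGUAGE(ASSEMBLER)
       RELOAD(NO) DATALOCATION(ANY) EXECKEY(CICS)

Equivalent definitions are needed for IAMBCICS and IAMXFCBO.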

Refer to Record-Level-Sharing on IAM/RLS and IAM-PLEX-Record-Level-Sharing on IAM/PLEX for additional information on using CICS with the IAM record sharing services.


 
