Global Option Settings


The easiest way to minimize the need for manual tuning is to use a good base of default parameters. With IAM, these parameters are provided through the IAM Global Options Table. The default Global Option values selected for IAM Version 10.0.01 will provide a high level of performance for the great majority of IAM data sets and users. Provided below is a discussion of considerations for various Global Options that have a direct impact on IAM performance.

Data Compression

IAM defaults to using data compression for any data set that is defined as being 75 tracks (5 cylinders on a 3390 device) or larger. The benefits are reduced DASD space requirements, reduced virtual storage requirements for indexing the data set, and reductions in physical I/O because there is effectively more data in each block. The cost is additional CPU time to process the data set. For many data sets with record sizes of around 500 bytes or less, IAM CPU time with data compression is still lower than VSAM without compression.

When the key performance objective is to reduce CPU time, limiting or eliminating the use of data compression is a good choice. Data compression can be effectively turned off for everything except ESDS files by setting the Global Option DATACOMPRESS=99999999. Some users have limited data compression to only the largest files, where the space savings are significant, by raising the default value. For example, to compress only files that are 500 cylinders (7500 tracks) or larger, set the Global Option DATACOMPRESS=7500.

The decision whether to use data compression at all matters more than the type of compression used (software or hardware). Regardless of the type of compression, there is going to be a CPU time cost. The key performance difference is that hardware compression has a higher CPU time overhead on file loads and reorganizations, but slightly lower CPU time than software compression on file access. From a CPU time perspective, software compression is therefore the better choice for IAM files that are loaded or reorganized frequently, while hardware compression is the better choice for files where loads and reorganizations are infrequent. Space savings will vary with the data: software compression will do better for some files and hardware compression for others. A customized hardware compression dictionary built with CSRBDICT (see Using-Hardware-Compression) will generally provide the most space savings. The default compression type is specified by the COMPRESSTYPE Global Option.

The basic recommendation for data compression is to decide on your performance objectives and set the DATACOMPRESS Global Option appropriately. Many users go with the defaults to gain some DASD space savings. If CPU time reduction is of high importance, as it is for many installations, then globally turn off data compression by setting DATACOMPRESS=99999999.
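
For reference, the two settings discussed above take the following keyword form in the Global Options Table (the options update job itself is installation-specific and not shown here; because the threshold is expressed in tracks, 500 cylinders on a 3390 equals 7500 tracks):

DATACOMPRESS=99999999     (effectively disables data compression, except for ESDS files)
DATACOMPRESS=7500         (compress only data sets of 500 cylinders or more)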

Buffer Space

Note that these values are specified in kilobyte (1024-byte) units rather than bytes. The default for batch and TSO buffering is BUFSP=65536 (64 megabytes). IAM calculates the default MAXBUFNO from this Global Option value. This recommended value allows MAXBUFNO to go up to just over eighty cylinders' worth of buffers on 3390-type devices. IAM will start out with approximately one cylinder's worth of buffers per file; however, it will not exceed the size of the data set. Subsequent to OPEN, with IAM's Turbo mode set (which is the default), IAM can quickly increase the number of buffers if necessary for files that have very heavy I/O requirements. Users who want to be less aggressive, or who have batch jobs with more than 30 heavily accessed IAM files, may want to reduce this value to perhaps 32768 (32 megabytes).
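
As a rough sanity check on that default (the 3390 capacity figures are approximate):

BUFSP=65536 KB = 64 MB of buffer space
one 3390 cylinder holds roughly 0.8 MB of data (15 tracks at roughly 55 KB of usable data each)
64 MB / 0.8 MB per cylinder = approximately 80 cylinders' worth of buffers, which matches the figure quoted above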

For CICS regions, the default is CICSBUFSP=1024 (1 megabyte). IAM uses this value to calculate MAXBUFNO when running under CICS. This Global Option was added so that customers could automatically give their batch work a larger quantity of buffers while not giving as many to CICS, because of the virtual storage constraints frequently encountered under CICS.

The tuning recommendation for CICS is to use a CICSBUFSP value that provides good performance for the majority of files, and then use the IAM overrides to increase buffers for the files that are subject to very high activity. This strategy helps control storage use, which is quite critical for CICS regions.
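
Such an override might look like the following sketch. It assumes an ACCESS override statement analogous to the CREATE override shown under File Load Buffering below; the ddname and buffer count are purely illustrative, and the available override keywords should be verified against the IAM override documentation:

ACCESS DD=CUSTMAST,MAXBUFNO=40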

For large files, it is beneficial to have a lot of buffers during open processing to facilitate reading the index into storage. At completion of file open processing, IAM reduces the number of buffers to either MINBUFNO, if it has been overridden, or to the number of strings (STRNO) specified for the data set. This prevents files with very low activity from holding a lot of unused buffers and wasting virtual storage. Because of this, users may be able to leave CICSBUFSP at the default value rather than reducing it, and minimize the need for overrides.

The default values for BUFSP and CICSBUFSP were chosen, along with defaulting to Turbo mode, to reduce the need for using overrides to increase buffering. Installations can adjust these values as necessary to achieve their desired performance results.

64-Bit Virtual Storage for Buffers

As of Version 9.2, IAM has the ability to utilize 64-bit virtual storage for buffers. This is ideal for CICS, IAM/RLS, or IAM/PLEX regions that have a lot of open IAM files and are experiencing 31-bit virtual storage constraints. As of Version 9.4, 64-bit virtual storage for buffers is the default in CICS regions. 64-bit storage buffers can also be set as the default for all address spaces through the IAM Global Options. However, because this does cause some increase in CPU time, BMC recommends that it be used selectively, when needed. The increase is based on the actual number of EXCPs performed, so the overall percentage will vary.

Below the Line Storage

Below-the-line storage is also a critical resource for CICS. IAM typically utilizes 4K of below-the-line storage per open file. This is significantly reduced through the BELOWPOOL feature, which allows IAM to share the I/O control blocks and channel programs between multiple files. This is controlled with the BELOWPOOL Global Option; BELOWPOOL=YES is the default, and using this default is highly recommended.
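
To put the 4K-per-file figure in perspective (the number of open files here is purely illustrative):

250 open IAM files x 4 KB each = approximately 1 MB of below-the-line storage
with BELOWPOOL=YES, the I/O control blocks and channel programs are shared between files, so much of that per-file allocation is avoided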

File Load Buffering

The next recommendation is to leave CRBUFOPT=MCYL, which causes a file load to buffer up to one cylinder's worth of data and to physically write a full cylinder of data at a time. This is the optimal buffer setting for file loads. CICS users should consider specifying a CREATE override such as the following:

CREATE DD=&ALLDD, CRBUFOPT=64BIT

This will use 64-bit virtual storage for the file load and alleviate both below-the-line and above-the-line virtual storage concerns.
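
Whether for a batch job or a CICS region, an override such as this is supplied through the IAM override DD in the JCL. A minimal sketch follows, assuming the override statements are read from a DD named IAMOVRID (verify the DD name and statement placement against your IAM documentation):

//IAMOVRID DD *
CREATE DD=&ALLDD,CRBUFOPT=64BIT
/*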

ESDS Processing

If your installation uses IAM ESDS data sets that are updated, then make sure to set ESDSINTEGRATED=5. This allows some room for record updates that require more space after data compression, without having to use the Extended Overflow area. The cost is more DASD space to load the data set initially, but if the data set is updated, this avoids the DASD space, virtual storage, and I/O required for the Extended Overflow area.

Variable Overflow

The recommendation is to leave this Global Option set to the default value of VAROVERFLOW=YES. This enables IAM's variable overflow support, which results in more data records in each overflow block. This reduces DASD space requirements for the overflow area and may also help reduce physical I/O to the overflow area. The savings can be substantial, particularly for data sets defined with very large maximum record lengths. The disadvantage is that records that are repeatedly updated may have to be moved to a different overflow block from time to time if the record length increases.

PRO (Prime Related Overflow)

The PRO (Prime Related Overflow) option is an alternative to the basic IAM overflow area structure that uses a block-level index to the overflow area instead of a record-based index. PRO has proven to be quite beneficial for data sets that will have close to a million or more records in the overflow area. The benefits are reduced virtual storage for the overflow index, faster file open processing, and improved sequential processing. See Prime-Related-Overflow for further information.


 
