
Report Samples


This section shows samples of various report sections produced by Abend-AID.

Report Sections

Abend-AID report sections are shown on the pages that follow.

Several sections have standard content regardless of the programming language: the Header, Registers, Trace, File, and Epilog sections. Other sections provide information specific to the programming language and database in use. Only those sections for which information is available at the time of the error are presented in the Abend-AID report. For example, you will not have a Db2 section if the program does not access Db2.

Press Enter to go directly to the Analysis of Error section. You can directly obtain any report section by entering its identification number or name in the SELECT SECTION field. Alternatively, you can use the cursor point-and-shoot method by pressing the Tab key to position the cursor on any highlighted section identification name, and then pressing the Enter key.

Header Section

The Header section, shown in Header section – COBOL, is the first section of the Abend-AID report. This is a standard section that always provides the same type of information. The Header section identifies:

  • Date and time of the error
  • Job name
  • Job number
  • Step name
  • Operating system release level
  • Licensee name and number
  • Abend-AID release number.

Other information given in the Header section includes:

  • CP FMID
  • System on which your program was executing
  • DFP release level
  • JES2 release level
  • CPU model number

Header section – COBOL

aacrptov00145.jpg

Analysis of Error Section

The Analysis of Error section usually provides enough diagnostic information to resolve the problem. The cause of the error and corrective actions are described. The information varies, depending upon the programming language used. For external errors like S813-04, the diagnosis includes:

  • Cause of the error
  • DDNAME and data set name (whenever possible).

Analysis of Error Section – S813 Example

aacrptov00147.jpg

In the example above, the file name from the DD card did not match the name on the tape label. Abend-AID provides a quick resolution to this problem. Change the JCL and resubmit the job.
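
For illustration, here is a minimal, hypothetical JCL sketch of this kind of fix (the data set and volume names are invented, not taken from the report above):

//* Hypothetical names, for illustration only
//* Before: abends with S813-04 because the DSN does not match the tape label
//TAPEIN   DD DSN=PROD.WEEKLY.BACKUP,DISP=OLD,UNIT=TAPE,
//            VOL=SER=T00123,LABEL=(1,SL)
//* After: DSN corrected to the data set name recorded in the tape label
//TAPEIN   DD DSN=PROD.DAILY.BACKUP,DISP=OLD,UNIT=TAPE,
//            VOL=SER=T00123,LABEL=(1,SL)

Once the DSN on the DD statement matches the tape label, resubmitting the job resolves the abend.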

Analysis of Error Section – SFCC in Language Environment

aacrptov00149.jpg

In this example, an application program identified in the Header section was compiled with a higher level of LE than was used for execution. Abend-AID identifies the type of fault, SFCA or SFCC, and the problem.

For data-related errors, the Analysis of Error provides:

  • Type of error
  • Fields in error
  • Location (displacement) of the fields within their respective base locator cell number
  • Contents of the fields in error
  • Description of the error.

For reports with XLS support enabled, you can use the cursor point-and-shoot method to display the Program Listing section by pressing the Tab key to position the cursor at the highlighted text and pressing Enter.

Analysis of Error Section with COBOL XLS

aacrptov00151.jpg

Analysis of Error Section with COBOL Basic Support

aacrptov00153.jpg

Analysis of Error Section with PL/I XLS

aacrptov00155.jpg

Analysis of Error Section with PL/I Basic Support

aacrptov00157.jpg

Analysis of Error Section with Assembler XLS

aacrptov00159.jpg

Analysis of Error Section with Assembler Basic Support

aacrptov00161.jpg

Error Location Section

The Error Location section provides information that can be used to locate the statement in error. Information presented in this section includes:

  • Program’s compile date
  • Program’s link-edit date
  • Program name and module lengths
  • Load module name and the load library name
  • Location of the last I/O operation or subroutine call, if applicable.

For reports with XLS support enabled, you can use the cursor point-and-shoot method to display the Program Listing section by pressing the Tab key to position the cursor at the highlighted text and pressing Enter.

Error Location Section with COBOL XLS

aacrptov00163.jpg

Error Location Section with COBOL Statement in Error from ABNLLPDS

aacrptov00165.jpg

Error Location Section with COBOL Basic Support

aacrptov00167.jpg

Error Location Section with PL/I XLS

aacrptov00169.jpg

Error Location Section with PL/I Basic Support

aacrptov00171.jpg

Error Location Section with Assembler XLS

aacrptov00173.jpg

Error Location Section with Assembler Basic Support

aacrptov00175.jpg

Trace Section

The Trace section is a standard section that shows you which programs were called and in what order. Also provided in this section are the Application Program Attributes for all application programs in the Call Trace Summary. The Trace section gives the following information:

  • Called/linked programs on the save area chain
  • Program locations where the calls occurred
  • Program in error, when available
  • Library that each load module in the Call Trace was loaded from
  • Application Program Attributes listing:
    • Each program’s name and its load module
    • Compile date, length, and language for each program.

For reports with XLS support enabled, you can use the cursor point-and-shoot method to display the Program Listing section by pressing the Tab key to position the cursor at the highlighted values and pressing Enter.

The following figure shows a sample Trace section.

Trace Section

aacrptov00177.jpg

Registers Section

The Registers section is a standard section that displays Supporting Environmental Data, which identifies:

  • Abending program status word (PSW) and program PSW
  • Entry point (EPA) and load point addresses (LPA)
  • Instruction length code (ILC)
  • Register contents and descriptions at the time of the error
  • Load module name.

Registers Section

aacrptov00179.jpg

Program Storage Section

The Program Storage section formats program storage for application programs on the calling chain.

Program Storage Section with COBOL XLS

aacrptov00181.jpg

Program Storage Section with COBOL Basic Support

aacrptov00183.jpg

Program Storage Section with PL/I XLS

aacrptov00185.jpg

Important

PL/I basic support and XLS do not support based variables with the ALLOCATE statement.

Program Storage Section with PL/I Basic Support

aacrptov00187.jpg

Important

PL/I basic support and XLS do not support based variables with the ALLOCATE statement.

Program Storage Section with Assembler XLS

aacrptov00189.jpg

Program Storage Section with Assembler Basic Support

aacrptov00191.jpg

Program Listing Section

This section is available only with XLS. The Program Listing section displays the program source code and identifies the current statement. The current statement in the program in error is either the actual statement in error or the last call. The current statement in any program other than the program in error indicates the last known call in that program.

Program Listing Section with COBOL XLS

aacrptov00193.jpg

Program Listing Section with PL/I XLS

aacrptov00195.jpg

Program Listing Section with Assembler XLS

aacrptov00197.jpg

File Section

The File section is a standard section that provides information for every file open at the time of the error. This section identifies:

  • Data Management Control Block information
  • DDNAME and data set name
  • File statistics
  • Current and previous record information, when available
  • Other information that is based on the type of file access method used.

VSAM

  • DD Name: The Data Description name, which is the name of the file used in the program. It relates the file reference in the program to the dataset in the JCL. There is potential for an error if the intended DD name was mistyped in the calling program.

  • Dataset Name: The name of the data set. An incorrectly specified DSN in another program could cause an error, so it is advisable to verify that the correct DSN was specified if there is an issue.

  • Access Type: Refers to the way data is accessed and organized within the data set. There are four main access types: Base Cluster, Alternate Index, Path, and Control Interval Access (CI Access). This is typically set by the user when creating the data set. If the logic of a program does not support the access type, or vice versa, consider revising the type or modifying the code to match it.

    • Base Cluster: The primary access type for VSAM. Accessing the base cluster means you are interacting with the entirety of the dataset, including its index and data components. The data components hold records for the data set types. This will be further explained in the Data Set Type section.

    • Alternate Index: This allows data to be accessed through additional keys, rather than the primary key of a KSDS. It allows records to be searched using alternate paths to reach the same data, which is useful when records need to be retrieved based on multiple criteria.

    • Path: This is a logical connection to an alternate index, allowing applications to access the base cluster via the alternate index. 

    • Control Interval Access: This is a lower-level access type that deals with physical storage units in VSAM known as Control Intervals. It allows you to read or write to specific control intervals in a data set. It is often reserved for specialized, low-level data manipulation.

  • Data Set Type: This denotes what type of data set is being displayed. There are four data set types: KSDS(Key-Sequenced Data Set), ESDS(Entry-Sequenced Data Set), RRDS(Relative Record Data Set), and LDS(Linear Data Set). These types determine how data is organized and stored within the data set. If errors occur involving data set types, checking the logic of storage against the type selected could resolve the issue (the IDCAMS sketch at the end of this list shows how the type is set). For example, if the program logic uses keyed access but the type is set to ESDS, errors will occur.

    • KSDS: KSDS records are stored and accessed based on a key field. The key values point to the corresponding data in records (for example, an employee ID number points to the employee’s information, such as name and title).

    • ESDS: ESDS records are stored sequentially in the order they were entered. The records do not need keys, since they are accessed by physical position or relative byte address. Records of this type cannot be deleted, only added, and they cannot utilize alternate indexes. This type is suitable for things like logs or transaction histories.

    • RRDS: RRDS records are stored and accessed by a relative record number (RRN), which is the logical position of the record in the data set. The records are stored in fixed slots, like an array, and each slot is identified by its RRN. Records can be deleted and slots reused when necessary; however, no keys are used, so alternate indexes are not supported. This method is best suited to fixed-length records or quick, direct access to records, such as a catalog or a directory system.

    • LDS: This type is treated as a byte stream with no logical structure. It is most frequently used for specialized applications such as DB2 tablespaces or user-managed data sets that require raw data storage.

  • Processing Type: This refers to how data is processed based on its format, structure, or intended use. It is often used in relation to database management systems, file handling, or access methods; it describes how operations like reading, writing, updating, or searching are carried out. Knowing the processing types in a program can help troubleshoot abends and other issues; checking the logic against the types laid out under this heading can help fix them. The following list relates logically to the previously discussed Data Set Types and Access Types. If the Processing Type does not match the logic of the Data Set Type or Access Type, errors and abends can result.

    • Key: Data is accessed based on a key value, often used in indexed data sets like KSDS. The system uses an index to locate specific records. Operations like “READ BY KEY” or “START” are common for this type.

    • NFK(Non-Keyed): Data is processed without relying on a key. Often used in sequential data sets, records are processed one at a time from the beginning. It is useful for batch operations or when key access isn’t needed.

    • DDN(Data Definition Name): Data is accessed through a DDNAME specified in the JCL. It is used in any dataset where the JCL specifies the DDNAME linking a program. This is common in batch processing where files are pre-defined in JCL.

    • NDF(Non-Defined File): Data is processed without being pre-defined in the system catalog, and is often used in temporary or dynamic datasets. This often means that the dataset is not pre-catalogued, instead it may be created or accessed dynamically during the execution of the job.

    • RR(Relative Record): Records are accessed using their relative record number, often found in VSAM relative record data sets. The records are identified by their position in the dataset and do not use keys.

    • ESDS(Entry-Sequenced Data Set): Records are accessed in the order that they were written, often used in VSAM entry-sequenced data sets. The records are processed sequentially or by relative byte address; no keys are used.

    • LDS(Linear Data Set): The data is treated as a byte stream without record boundaries, usually used in VSAM linear data sets for applications in DB2 or IMS. This usually means records don’t exist; data is treated as contiguous bytes that require custom handling.

    • FD(File-Defined): Data is processed based on a file definition, often used in COBOL programs or datasets defined in application logic. This depends heavily on the file definition and aligns with how the program interacts with the file.

    • SEQ(Sequential): Data is processed in a predefined sequential order, often used in standard sequential datasets or tapes. The records are read or written in the order they appear without skipping or searching.

    • DB2 Cursor: Data is accessed row by row using a cursor in DB2. The program opens a cursor to fetch rows one at a time, often used for sequential or filtered data access.

    • DLI(Data Language Interface): Data is accessed using IMS hierarchical database methods in IMS applications. The data is processed based on the hierarchical structure of IMS databases.

    • BATCH: Data is usually processed in bulk, for large datasets. The program processes data in one or more predefined chunks, usually without user interaction.

    • Online: Data is processed in real time, usually in response to user interactions, and is often associated with CICS or other online transaction systems.

    • DIR(Direct Access): Data is accessed directly without a sequential scan often used in direct-access datasets like VSAM. The program accesses records by means of an RBA or a calculated address.

    • DISP(Disposition): Refers to how a dataset is handled after processing. This is technically not a Processing Type; rather, it specifies whether the dataset is new, to be modified, or already existing, which affects how the dataset is processed in a job. It is often written in the JCL as NEW, OLD, or MOD (modified).

      VSAM file disposition (DISP) is a critical parameter used in JCL (Job Control Language) to define how a dataset, including a VSAM dataset, is handled during and after a job step. Understanding DISP is essential in debugging ABENDs (abnormal ends) that involve VSAM files.

      Using DISP in Debugging ABENDs

      When debugging, consider these tips:
        • Set DISP correctly to ensure datasets remain available. For example, with DISP=(NEW,CATLG,KEEP), the VSAM file is preserved even if the job abends.
        • Look at the abend code (e.g., S213, S222, 0C7, 0C4); some are directly tied to file status (e.g., an improper DISP causes an S213).
        • Check whether the VSAM file is allocated or in use; DISP=OLD with the file in use will fail.
        • Use IDCAMS to define or delete VSAM files properly before running the job (see the IDCAMS sketch at the end of this list).

    • BSAM(Basic Sequential Access Method): This is a low-level access method for sequential data, often used for datasets that require direct control of I/O operations. Programs that use it typically manage their own I/O buffers and processing, which allows flexibility in specialized use cases.

    • QSAM(Queued Sequential Access Method): This is a higher-level method for sequential data. It handles I/O buffering and sequencing automatically, which reduces programming complexity.

    • BPAM(Basic Partitioned Access Method): Data is accessed from PDSs, usually by member name within the PDS.

    • VSAM(Virtual Storage Access Method): Refers to VSAM datasets with specialized access methods, such as the KSDS, ESDS, RRDS, and LDS datasets discussed previously.

  • Max Positions: This refers to the number of entries that exist within the record. If the number of entries is known and this number does not match it, that can help identify where an error in the code might be.

  • Max Strings: Strings are the concurrent I/O requests that a device handles at one time, often associated with the configuration of DASD. If this value is set too high, it could cause enough latency that the program abends.

  • Key Position: Normally related to record-based data access or indexing, this refers to the location in a data record where a key field is stored. It is used to uniquely identify a record or provide a way to quickly locate and access that record. The number listed tells where in the record the key is (for example, a value of 0 means the key is at the beginning of the record). Checking the data set type could help resolve an issue where key position is concerned; selecting the incorrect type will cause errors. It is also worth verifying the location of the key in the code, because referencing that location incorrectly will also cause errors.

  • Key Length: This refers to the number of bytes that make up the key. If position and length both have a value of 0, the record has no key.

  • File Errors: Specifically refers to the tracking and reporting mechanism for record errors, which are logged for diagnostic and troubleshooting purposes. A file error of “none” indicates that the program did not encounter any file-related issues during execution, which often points to logic or memory errors instead. The following is a list of possible file errors.

    • File Not Found: The file being accessed does not exist at the specified location, either due to incorrect file name or path, or the file has been moved or deleted.

    • File Locking Conflicts: Another process or user has locked the file, preventing access. This often occurs in multi-user or multi-process environments, where file access is restricted to prevent data corruption.

    • Permission Issues: Insufficient file permissions to access or modify the file. This could include lack of read, write, or execute permissions based on the user's privileges.

    • Disk Space Errors: The storage device has run out of available space, preventing new records from being written to the file or the file from being expanded.

    • File Integrity Issues: File corruption or data integrity issues. This could occur due to hardware failures, unexpected power loss, or software bugs, causing the file's structure or contents to become inconsistent or unreadable.

    • End of File (EOF) Errors: The system encounters the end of the file prematurely, such as when trying to read beyond the last record in a file, causing an error during sequential or random access.

    • File Format or Structure Errors: The file is not in the expected format or does not adhere to the expected structure. This could involve mismatched record lengths, missing or extra fields, or incorrectly formatted key values.

    • Access Method Errors: Issues with the method used to access the file, such as incorrect indexing or mismatched record keys. For example, trying to access a record with a non-existent key in an indexed file like VSAM.

    • I/O Errors: Input/Output errors during file reading or writing. This could result from hardware failures (e.g., disk errors, tape drive issues) or software problems (e.g., incorrect programming or malfunctioning drivers).

    • Invalid Key or Key Length Errors: Problems related to incorrect key values, key positions, or key lengths that prevent records from being accessed or indexed properly. For example, attempting to access a key that doesn't exist or exceeds the defined key length.

    • Overflow or Underflow Errors: These errors can occur when data being written to the file exceeds the available space or is smaller than expected.

      • Overflow: Trying to insert a record that is larger than the file's predefined size.

      • Underflow: Encountering fewer records than expected during sequential read operations.

    • File Access Conflicts: Multiple processes attempting to simultaneously access or modify the same file, leading to conflicts such as race conditions, inconsistent data, or deadlock situations.

    • Buffer Overflow Errors: The system or application runs out of space in its internal buffers while reading or writing to the file. This often happens when the file is larger than the buffer size set for reading or writing operations.

    • Device-Specific Errors: Errors related to the hardware devices on which the file is stored. This could include: Hard disk or SSD failures, tape drive issues, or problems with network drives (e.g., disconnected or slow network).

    • File Descriptor or Handle Errors: Problems with file descriptors or handles, which are used by the operating system to reference open files. This could be caused by exceeding the maximum number of open file handles or incorrectly closing a file before access.

    • Record Locking or Concurrency Issues: Deadlocks or record-level locks preventing access to files due to multiple processes or users trying to access the same record simultaneously.

    • File System Errors: Problems with the file system itself (e.g., file system corruption), preventing access to files or causing reading/writing issues. This could also involve problems with file allocation tables (FAT) or directory structures.

    • File Corruption During Transfer: Errors that occur when transferring a file between systems, especially in the case of network failures, transmission errors, or interruptions during the file copy process.

    • Version or Compatibility Issues: The file format or version may not be compatible with the system or software trying to access it, causing errors in interpreting or processing the file.

  • Defined for Extended Addressability: Designated as either yes or no, this refers to a dataset that is allowed to exceed traditional addressing limits. With 32-bit addressing, datasets are usually restricted to 4GB. Extended addressability leverages 64-bit addressing, permitting datasets larger than 4GB. If a dataset grows beyond this limit without extended addressability assigned, errors can occur, so enabling extended addressability can help solve such issues.

  • Opened for RLS(Record Level Sharing) Processing: This refers to a feature of VSAM that allows multiple systems, all within one sysplex, to access and update a dataset at the record level rather than the dataset level. Multiple users can read and update individual records concurrently; enqueues and locking mechanisms manage the records and prevent data corruption. If the dataset is not open for RLS, RLS is not the cause of the problem and can be ignored for troubleshooting purposes. If it is opened (or intended to be opened), there are several ways RLS can be involved in a bug: if the dataset is supposed to be open for RLS and is not, that status should be changed, and it is worth checking for contention and locks that could prevent updates to the specified record.
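
The access type, data set type, key position, and key length described above are fixed when the cluster is defined. As a minimal sketch (the data set names and space values are hypothetical), an IDCAMS job step that deletes and redefines a KSDS might look like this:

//* Hypothetical names, for illustration only
//DEFVSAM  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE MY.VSAM.KSDS CLUSTER PURGE
  SET MAXCC = 0
  DEFINE CLUSTER (NAME(MY.VSAM.KSDS) -
         INDEXED -              /* INDEXED=KSDS; NONINDEXED=ESDS,  */
                                /* NUMBERED=RRDS, LINEAR=LDS       */
         KEYS(8 0) -            /* 8-byte key at offset 0          */
         RECORDSIZE(80 80) -
         CYLINDERS(5 1)) -
         DATA (NAME(MY.VSAM.KSDS.DATA)) -
         INDEX (NAME(MY.VSAM.KSDS.INDEX))
/*

Comparing the organization keyword, KEYS, and RECORDSIZE values in the DEFINE against the fields shown in the File section is a quick way to spot a mismatch between the program’s logic and the data set’s definition.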

QSAM

  • DD Name: The Data Description name, which is the name of the file used in the program. It relates the file reference in the program to the dataset in the JCL. There is potential for an error if the DD name was mistyped in the calling program.

  • Device: This is the hexadecimal identifier for the storage device and shows where the dataset is physically stored on DASD. If the correct device number is known and the displayed number is different, this could lead to issues. Also, checking that the DASD does, in fact, have enough storage to support the program could aid in troubleshooting. It is possible, too, that the DASD is completely offline and the device number could be blank or show 0000.

  • Disposition: This describes the status of a dataset to the system and how the system should handle the dataset after the termination of a step or job. It can be specified for normal or abnormal termination (an abend), and it is controlled within the JCL of the job. The possible disposition types are discussed below; the JCL sketch at the end of this list shows them in context. In troubleshooting, check which disposition is listed and ensure it is the type needed for the program. If the program is intended to abend, it is likely that a disposition related to an abend is needed, though that is not always the case.

    • New: This specifies that a new dataset is to be created when the job is run. If the dataset already exists, an error will occur; if this happens, double-check that the named dataset does not already exist.

    • SHR: This specifies that the dataset already exists and is being used for reading. Other jobs can also access the dataset and use it in parallel. If the named dataset does not exist, an error will occur and execution will fail. Should this error occur, verify whether the named dataset exists; if it does, the name may have been entered incorrectly.

    • OLD: Similar to SHR, this indicates that the dataset already exists. However, with OLD, the step takes exclusive control of the dataset, so the dataset cannot be used by other jobs concurrently. As with SHR, if the named dataset does not exist, an error will occur and execution will fail.

    • MOD: This specifies that the dataset exists and is going to be edited when the job is run. If the dataset does not already exist, it will be created. Bear in mind that the information being added to or created for the dataset should be sequential.

      Normal Termination: Specifies what action to take when the job completes successfully.

      KEEP: Specifies that the dataset is retained if execution is successful. Retaining a dataset means the volume and unit must be entered manually in order to access it later.

      CATLG: Specifies that the dataset is catalogued if it is not already catalogued. Cataloguing a dataset allows it to be found again if it needs to be called later.

      UNCATLG: Uncatalogues the dataset. This does not delete it; rather, it removes the dataset from the user’s catalogue, leaving it in the “retained” state discussed under KEEP.

      PASS: Indicates that the dataset is passed on from this step for use by subsequent steps.

      DELETE: Once execution is successful, the dataset is deleted.

      Abnormal Termination: Specifies what action to take when the job does not complete successfully. This has all the same parameters as normal termination, with the exception of PASS.

  • DEB Address: The Data Extent Block (DEB) address points to the in-memory location of the DEB, which tracks a dataset’s extents (contiguous blocks of disk space allocated to a dataset). If a dataset outgrows its initial space, additional extents are added — up to 16 per volume. The DEB contains an extent list showing the start and end tracks of each extent, which helps identify allocation or I/O errors. When a dataset is uncatalogued (UNCATLG), it’s removed from the catalog but still exists on disk — becoming "retained." This can cause errors like S213 if a job tries to access the dataset without specifying its volume. An S001 abend might occur if a job reads past the last extent, while IEC030I indicates the dataset has filled all extents. To troubleshoot, use the DEB address to check extents and cross-reference the Data Control Block (DCB) to confirm attributes like block size (BLKSIZE) and organization (DSORG) match the dataset’s storage.

  • Volume Serial Number (VOL-SER): The VOL-SER identifies the specific volume where a dataset is stored. In a dump, it typically shows the disk or tape volume holding the data. If the VOL-SER is blank, it could mean the dataset is cataloged (so the system pulls volume info from the catalog), part of a multi-volume dataset, or a temporary dataset. An empty VOL-SER can also indicate an uncatalogued dataset without a volume specified in JCL, which may result in an S213 abend if the system can't locate the dataset.

  • Unit Address: The Unit Address shows the physical or logical device where the dataset resides, such as a disk (DASD) or tape drive. In a dump, this usually corresponds to the device's address or identifier. If the Unit Address is blank, it often means the dataset is cataloged, so the system dynamically allocates a device at runtime. It can also be empty if the dataset is temporary or if there was a failure in allocation — for example, if an uncatalogued dataset didn’t have a volume or unit specified in JCL, potentially leading to an S213 abend.

  • Mode: Describes how records are read from or written to a dataset. The mode displayed should match the intention of the program; if the program’s intention differs from what the mode displays, that is a possible explanation for an abend. The following lists all potential modes.

    • Get: Reads a record (a basic read access)

    • Put: Writes a record

    • Get Locate: Reads a record in locate mode; the address of the record in the I/O buffer is returned rather than the record being moved to a work area

    • Put Locate: Writes a record in locate mode; the system returns a buffer address and the program builds the record directly at that location

    • Mod: Appends a record at the end of the data set

  • DSORG(Dataset Organization): This describes how a dataset is organized, physically and logically. If the intended organization of the dataset does not match the displayed DSORG, this is a potential cause of an abend. The following are the different types of DSORGs. PS and PO are the most common types by a wide margin.

    • PS(Physical Sequential): A purely sequential file, read line-by-line.

    • PO(Partitioned Organization): Like a folder with many files inside of it such as a library or directory.

    • DA(Direct Access): Records are accessed directly by their location in the dataset rather than sequentially.

  • RECFM(Record Format): This describes how each record in a dataset is structured. It is possible that the actual format of the records differs from what is displayed, so this could be the cause of an abend. The following is a list of different RECFMs.

    • F(Fixed): Indicates that the length of all records is the same.

    • V(Variable): Every record is of varying lengths.

    • FB(Fixed Block): Multiple fixed sized records are combined into a block. Blocking is further discussed in the “BLKSIZE” section.

    • VB(Variable Block): Multiple varied sized records are combined into a block.

    • U(Undefined): The record size is unknown, often used in load modules.

    • A(ANSI control character): A control character mostly used for printing, indicating that the file is intended to be printed. Inside the file the character varies, dictating instructions for printing. As far as RECFM is concerned, this merely indicates that there are printing instructions in the record. It is usually found after one of the other abbreviations (e.g., FBA).

  • LRECL(Logical Record Length): Denotes the length of a record in bytes, or the maximum length allowed. For example, if the record is known to be, or should be, 80 bytes but this field displays 85, that could be the cause of an abend.

  • BLKSIZE(Block Size): Denotes how many bytes each I/O operation reads or writes in a block. Blocking lets the I/O operation read and/or write records in large chunks rather than one at a time, which increases efficiency. If the BLKSIZE is larger than anticipated, or larger than the processor can handle, this could lead to an abend.

  • DCB Address: The DCB (Data Control Block) address points to the control block used to manage a dataset during I/O operations. It contains key information such as dataset organization, record format, and blocking. If a job fails due to dataset access issues, the DCB address can help locate which file was involved. Reviewing the DCB can reveal incorrect attributes or inconsistencies. It's useful for verifying that a program is referencing the correct dataset setup.

  • EXCP Count: EXCP (Execute Channel Program) Count tracks how many low-level I/O requests were issued for a dataset. A high count can indicate inefficient processing, such as reading one record at a time or unnecessary looping. It’s a key indicator of I/O performance and can explain why a job is slow. When EXCP count differs greatly from similar jobs, it often points to flawed logic or suboptimal buffering. Monitoring this metric can help improve program efficiency.

  • Pathname: This is the dataset or file name being referenced at the time of the abend. It’s crucial for identifying which resource was involved in the error. If the path is unexpected, it may reveal a misallocated DD statement or JCL error. Pathname helps quickly identify which dataset to investigate. It also aids in verifying whether the job accessed the intended file.

  • File Type: The File Type identifies the nature of the dataset, such as PS (sequential), PO (partitioned), or VSAM. This affects how data is accessed and how errors manifest. For instance, issues with member access are only relevant to PO datasets. Recognizing the file type helps narrow down the range of possible I/O problems. It’s often one of the first clues in diagnosing dataset behavior.

  • Record Count: This indicates how many records are present in or processed from the dataset. It’s useful for checking if all data was read or if processing stopped prematurely. If the count is lower than expected, the program may have terminated early or skipped records. A higher-than-expected count might suggest duplication or looping. It's a key metric for validating completeness.

  • Line Printer Count: Tracks how many lines were sent to a line printer or equivalent output device. An unusually high number can point to infinite loops or verbose debugging output. Conversely, a low count might indicate early termination or suppressed output. It's a quick way to check job output behavior. This can help explain output-related abends or print overflows.

  • Byte Count: Shows the total bytes read from or written to a dataset. This is helpful in estimating the volume of data handled and verifying against file size expectations. If the byte count is unusually low, it may suggest an incomplete read. A high byte count can raise performance or memory use concerns. It provides a sanity check for data volume.

  • Line Record Outlim: Indicates the maximum number of output records or lines before a cutoff is applied. It helps prevent runaway jobs from flooding printers or output files. When output appears truncated, this value is the first to check. If exceeded, it can explain missing or cut-off print output. It’s often adjusted during report tuning or batch job setup.

  • Page Printer Pages: Reflects how many pages were generated for a page-oriented printer. If the job aborts mid-print, this value shows how far it got. A spike in this number might indicate a looping report or excessive page breaks. It can also help estimate printing resource usage. Useful for jobs producing paged reports.

  • Pages/Segment: Specifies how many pages make up a report segment. If segments are misaligned or out of order, this number can help trace structural issues. It's mostly used in formatting and output diagnostics. Useful when dealing with long reports that need logical separation. A mismatch here can cause pagination errors.

  • Segment ID: Identifies the segment of output or data in a multi-part structure. Useful for tracking which part of a job failed or produced incorrect output. If only one segment is wrong, this ID pinpoints it. Can aid in debugging modular or multi-section jobs. Helps isolate faulty report or data segments.

  • Number of Buffers: Shows how many I/O buffers were allocated for data transfer. Too few buffers cause I/O waits and slow performance, too many waste memory. Buffer issues often explain poor throughput in high-volume jobs. It’s a key lever in performance tuning. Buffer configuration should align with dataset size and access pattern.

  • CI Splits: Control Interval splits occur in VSAM files when a CI becomes full during insertions. This can slow down access and fragment the dataset. Frequent CI splits indicate poor initial space allocation. Monitoring this helps optimize VSAM performance. Reorganization may be needed if this count is high.

  • CA Splits: Control Area splits occur when an entire CI set cannot accommodate a new CI, forcing a new CA. These are costlier than CI splits and can severely degrade performance. High CA split counts suggest that secondary allocations are too small. This is a common cause of slowdowns in KSDS datasets. Indicates a need for tuning or file redefinition.

  • Extents: Indicates how many physical extents a dataset occupies on disk. More extents mean fragmentation, which can hurt performance. If the number is unusually high, the dataset may need to be reorganized or reallocated. Large extent counts can also cause allocation failures. This field is vital when troubleshooting space-related abends.

  • Max Record Length: Shows the longest record in the dataset. Helps ensure that buffer sizes and record processing logic are sufficient. If records are being truncated or skipped, this value should be checked. It helps validate input assumptions in the application. Crucial for diagnosing data read/write errors.

  • Number of Records: Total number of records processed or present in the dataset. Can be used to verify that the job ran as expected. Discrepancies suggest looping, data corruption, or early job termination. A core metric for batch job diagnostics. Often compared to prior runs to detect anomalies.

  • Records Deleted: Tracks how many records were removed from the dataset. Helps confirm whether deletions occurred as expected. If this value is zero when deletes were intended, it could indicate logic errors. Sudden spikes might suggest accidental mass deletes. Key for update job verification.

  • Records Inserted: Shows how many new records were added to the file. Helps confirm the job's success in growing the dataset. A low number can suggest rejected input, while a high one might indicate duplication. Useful for tracking data expansion. Important for tuning insert-heavy applications.

  • Records Retrieved:  Indicates how many records were read from the file. Higher than expected counts can point to full-table scans or looping logic. If low, might suggest incomplete processing. This value is essential for verifying program logic. Helps correlate I/O volume with job behavior.

  • Records Updated: Reflects the number of records that were modified in place. Helps confirm job success in applying updates. If zero, an update step may have failed silently. High update rates can be a performance concern. Useful for tracking and validating change activity.

  • Available Bytes: Represents how much free space remains in the file or buffer. If very low, the job may fail soon with a space-related abend. Helps preemptively identify storage shortages. Sudden drops in availability can indicate runaway data. Useful for capacity planning and error prevention.

  • High Allocated RBA: The highest Relative Byte Address allocated in a VSAM file. It shows the logical boundary of the dataset. If close to limits, a space allocation issue may be imminent. Comparing it with High Used RBA shows utilization efficiency. Important for detecting space saturation.

  • High Used RBA: Indicates the furthest byte in the file that was actually written to. Useful for tracking real usage versus allocated space. If this is close to the High Allocated RBA, the file is nearly full. Can help detect file growth trends or misestimated space needs. Often a leading indicator of upcoming space-related failures.

  • Number of Levels: Refers to the levels in a B-tree structure for indexed VSAM files. More levels mean longer access paths, which slow down reads. A sudden increase can mean uncontrolled growth. Useful for performance diagnosis in keyed datasets. Helps determine if reorganization or index tuning is needed.
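
Many of the fields above (disposition, DSORG, RECFM, LRECL, BLKSIZE, volume, and unit) trace directly back to the DD statements in the job’s JCL. As a minimal sketch (the program and data set names are hypothetical), a step’s DD statements might look like this:

//* Hypothetical names, for illustration only
//STEP1    EXEC PGM=MYPROG
//* Existing cataloged input, shared with other jobs
//INFILE   DD DSN=PROD.INPUT.FILE,DISP=SHR
//* New output: catalog on success, delete on an abend
//OUTFILE  DD DSN=PROD.OUTPUT.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,1)),
//            DCB=(DSORG=PS,RECFM=FB,LRECL=80,BLKSIZE=8000)
//* Uncatalogued data set: code VOL and UNIT, or an S213 abend can result
//OLDFILE  DD DSN=TEST.RETAINED.FILE,DISP=OLD,
//            UNIT=SYSDA,VOL=SER=VOL001

Checking that these JCL attributes match both the program’s file definitions and the values displayed in the File section is often the fastest way to isolate a file-related abend.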

The following figure shows a sample File section for an Abend-AID report for a JES2 file.

File Section for a JES2 File

aacrptov00199.jpg

The following figure shows a sample File section for a VSAM file.

File Section for a VSAM File

aacrptov00201.jpg

The following figure shows a sample File section for a QSAM file.

File Section for a QSAM File

aacrptov00203.jpg

Important

DEB, IOB, and UCB information is available for a QSAM file, but is not shown in this figure.

The following figure shows a sample File section for an IAM file.

File Section for an IAM File

aacrptov00205.jpg

Db2 Section

See Abend-AID for Db2 for a complete description.

IMS Section

See Abend-AID for IMS for a complete description.

IDMS Section

See Abend-AID for IDMS for a complete description.

Current Sort Record Section

The Current Sort Record section is available only when the program uses an internal COBOL sort. This section identifies:

  • Sort fields. For example: SORT FIELDS=(0001,008,ZD,A) (an 8-byte zoned-decimal key starting at position 1, sorted ascending)
  • Record type and record length. For example: Record Type=F,Length=(00080) (fixed-length, 80-byte records)
  • Concatenated sort key
  • Current sort record in vertical-hexadecimal format.

The following figure shows a sample Current Sort Record section for a COBOL program.

Current Sort Record Section

aacrptov00207.jpg

Abend-AID for WebSphere MQ Section

Displays either WebSphere MQ batch or WebSphere MQ IMS information created by Abend-AID for WebSphere MQ. For more information about viewing Abend-AID for WebSphere MQ report information, see Understanding WebSphere MQ Diagnostics.

LE Section

Abend-AID collects and interprets the runtime options from LE’s options control block. This information is available from Abend-AID’s Language Environment Selection List.

LE information identifying LE run-time options (OCB)

aacrptov00209.jpg

LE Control Blocks

Abend-AID displays significant control blocks from the LE runtime environment. As an example, the following figure shows the Common Anchor Area.

LE Section – Common Anchor Area

aacrptov00211.jpg

LE Heap Storage

Abend-AID validates and analyzes LE Heap Storage. If damage to the heap storage structure is detected, Abend-AID provides diagnostic information to assist in problem determination and resolution. This includes valuable storage displays to help determine the cause of the original error.

The following figure shows the LE Heap Storage Report section when damage to the heap storage structure is detected.

LE Section – Heap Storage – with errors

aacrptov00213.jpg

The following figure shows the LE Heap Storage Report section when no errors were detected in Heap Storage.

LE Section – no errors

aacrptov00215.jpg

Epilog Section

The Epilog section, at the end of a report, summarizes Abend-AID’s action.

  • How the report was printed, if applicable:
    • From batch (as a SYSOUT of your JOB)
    • From an Abend-AID report data set, identifying name and number.
  • The vertical and horizontal hexadecimal translation tables.
  • Whether a system dump was printed or suppressed.
  • The following warning messages, when applicable:

The IBM COBOL Load List for this application appears to be corrupted.

The system save area chain for this application appears to be broken.

The COBOL environment appears to be corrupted.

Working Storage Display was suppressed for one or more CSECTs by CSECTBYP.

CWINCLUD processing was suppressed because the CWINCLUD table is incompatible with this release of Abend-AID. Contact your Abend-AID installer to run JCLINCLD.

The SYSOUT data set at dump capture was not LRECL 133.
Therefore, Abend-AID is in narrow format.

  • Report routing information.

  • Abend-AID’s memory utilization statistics. Note that the Above, Below, and Total figures refer to the largest amount ever in use for each category. The amounts shown in the Total lines will tend to agree with the totals of the Above and Below detail lines, but will not necessarily do so.
  • How Abend-AID was called.

Epilog Section

aacrptov00217.jpg

Epilog Section with Warning

aacrptov00219.jpg
