Using PACLOG options
PACLOG allows you to copy and process original cataloged archive log data sets in a variety of ways.
Using PACLOG syntax options, you can perform the following actions:
Decide which original archive log data sets to copy and process by using the search limits provided with PACLOG
Decide whether to make one, two, three, or four copies of each original archive log data set
PACLOG automatically catalogs all copies it makes.
Separately specify the output medium (tape or disk) for each copy
Separately specify which log record types and objects to remove for each copy
Reprocess a log that has already been processed by PACLOG
Compress the log data written to tape or disk
Simulate archive log data set processing using PACLOG syntax options
For more information, see Using PACLOG to simulate log processing.
Deciding which logs to process
When you use PACLOG for the first time, you may not want to process all of the archive logs that are recorded in the BSDS.
You might want to process only a recent subset of these log data sets. Use the LIMIT option to restrict which log data sets (counting from the most recent backward) are considered for processing.
Select the log data sets to be processed by specifying one of the following options:
The number of log data sets
A time in hours
An RBA range
All log data sets processed in the normal (nonsimulation) mode are recorded in the archive history file so that PACLOG can determine whether a particular original log data set has been processed previously.
After all archive logs in the BSDS have been processed and recorded in the archive history file, you do not normally need to continue using the LIMIT option. For more information and examples of the use of the LIMIT option, see Global option descriptions and Examples of PACLOG JCL.
Log data sets processed in simulation mode are optionally recorded in the archive history file. For information about the SIMULATE option, see Global option descriptions.
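For illustration, the three forms of LIMIT selection might look like the following. The operand keywords and placeholder values shown here are illustrative only; the exact operand syntax is given in Global option descriptions:

    LIMIT (10)                      (illustrative: the 10 most recent log data sets)
    LIMIT HOURS (48)                (illustrative: log data sets from the last 48 hours)
    LIMIT RBA (startRba,endRba)     (illustrative: log data sets within an RBA range)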
Deciding which copies to make and when
PACLOG lets you make one, two, three, or four processed copies of each archive log data set.
Copy 1 is intended for use as a local site primary copy to replace the original (unprocessed) primary archive log data set. Where dual archive logging is in use, copy 2 is intended for use as a local site backup to replace the corresponding original (unprocessed) backup archive log. Copies 3 and 4 are intended for offsite use (recovery site primary and backup).
PACLOG provides considerable flexibility in the processing of all archive log copies. You can process one, two, three, or four copies at one time in the same job, with one exception: copy 2 can be processed only if you use dual archive logging in your Db2 subsystem. You can make filtering, compression, and medium selections separately for each copy.
For important information regarding archive logs, see Recommended archive log processing practices.
A typical approach to archive log processing might be to create all four copies at the same time with maximum filtering, by taking the filtering defaults and identifying any objects that never need recovery. Copy 1 might be compressed and directed to disk using the defaults, and copies 2, 3, and 4 might be directed to tape. For more information about specifying each copy, see Archive copy option descriptions through Disk option descriptions.
The syntax for this approach is as follows:
    ARCHIVE1
      FILTERTS (DBASE.TSPACE1)
    ARCHIVE2
      FILTERTS (DBASE.TSPACE1)
      TAPE UNIT CART
    ARCHIVE3
      PREFIX OFFSITE.DB2A.ARC1
      TAPE UNIT CART
      FILTERTS (DBASE.TSPACE1)
    ARCHIVE4
      PREFIX OFFSITE.DB2A.ARC2
      TAPE UNIT CART
      FILTERTS (DBASE.TSPACE1)
Variations on this scenario include the following:
If the number of tape drives required to process all copies together is a concern, you could first process copies 1 and 2 as described previously and then later process copies 3 and 4. This approach results in copies 3 and 4 inheriting any filtering that was performed on copy 1 when it was processed.
Consider scheduling the job that processes copies 3 and 4 to match the movement of offsite resources. This approach has the added advantage of potentially improving stacking efficiency, because more than one log data set can be written to the tape.
If you anticipate that log data might be required for uses other than recovery (such as audit reporting), then copy 2 might be made with no filtering and directed to tape. However, if you plan to follow this approach, you must create copy 2 when you first process copy 1.
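For example, a later job that makes only the offsite copies could specify just the ARCHIVE3 and ARCHIVE4 blocks. The data set prefixes shown are the same illustrative values used earlier; because copies 3 and 4 inherit any filtering that was performed on copy 1, no filter options are repeated here:

    ARCHIVE3
      PREFIX OFFSITE.DB2A.ARC1
      TAPE UNIT CART
    ARCHIVE4
      PREFIX OFFSITE.DB2A.ARC2
      TAPE UNIT CART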
Choosing the output medium
With the exception of copy 1, you can choose either disk or tape as the output medium independently for each copy of the processed log.
For copy 1, disk is the only supported medium. If you do not specify an output medium, PACLOG uses disk as the default. Writing copy 1 to disk may substantially reduce the resources required by the active logs. For more information, read the online overview included with the PACLOG logging environment modeling tool (described in Using the PACLOG logging environment modeling tool). For information about specifying the output medium, see Device type options.
You can write copy 2 of the processed log to tape and optionally specify that the log data sets be stacked to the tape. Copies 3 and 4 (the offsite copies) can also be stacked to tape for transport to a recovery site. If you require compression of the processed copies, you specify it differently for tape and disk. For more information, see Compressing archive log records.
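As a sketch of independent medium selection, copy 1 might take the disk default while copy 2 goes to tape. The DISK keyword and its placement here are assumptions based on the Device type options reference; the tape keywords are those shown in the earlier example:

    ARCHIVE1
      DISK
    ARCHIVE2
      TAPE UNIT CART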
To guard against the inadvertent removal of records that you might need for purposes other than recovery, consider using different filter options for each of your processed copies.
Removing unwanted records and objects
PACLOG provides filter options that remove certain record types and objects from the archive log.
In general, these records are not needed for Db2 forward recovery processes. They are initially written to the log to allow update data backout in the event that a transaction does not complete successfully. The following table shows the filter options available in PACLOG. For more information about Db2 log record types, see the IBM Db2 administration guide.
FILTERIX
Use this option to replace index-related records that are not needed for Db2 recovery purposes with easily compressible data.
FILTERIX EXCEPT (ixSpaceList)
Use this option to exclude selected index spaces from filtering, thus allowing index recovery from log data.
FILTERTS (tsSpaceList)
Use this option to replace records that are not needed for Db2 recovery processes (other than those removed by FILTERIX) with easily compressible data. If you use this option and you attempt to recover a table space that requires the Db2 log, the recovery fails.
For more information about PACLOG filter options, see Filter option descriptions.
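For example, combining the options above, a copy could filter a named table space while preserving log data for one index space. The object names shown are illustrative:

    ARCHIVE1
      FILTERTS (DBASE.TSPACE1)
      FILTERIX EXCEPT (DBASE.IXSPACE1)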
Reprocessing a processed local site archive log
You can use the REDO option to tell PACLOG to reprocess a local site archive log data set that has already been processed and cataloged.
The new log data set replaces the prior version and updates the archive history file. You should use the LIMIT option in conjunction with the REDO option to control which log data sets are reprocessed.
If the current syntax specifies less filtering than was previously performed, the log data set is still processed but a warning message is issued.
You can use the REDO option in situations such as these:
Because PACLOG does not filter uncommitted transaction data, original log data sets that contain uncommitted data may be insufficiently processed the first time. After the data is committed, a processed log data set can be reprocessed if the REDO option is used. The reprocessed log data set replaces the prior version.
If you process an archive log data set in simulation mode and update the BMC Software archive history file (by using the UPDATEHIST keyword with the SIMULATE option), and then you want to process the same archive log data set in normal mode, you must use the REDO option. This action tells PACLOG that an entry already exists in the archive history file and that it needs to be updated when normal mode processing is complete.
You want to perform additional filtering on one or more archive log data sets.
You want to move one or more archive log data sets from one medium to another (for example, from tape to disk).
For examples that show how to use REDO and LIMIT, see Examples of PACLOG JCL.
The REDO option is valid only for copies 1 and 2 of the original archive log data set. You cannot use the REDO option with copies 3 and 4 (the offsite copies).
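For example, a job that reprocesses the two most recent local site archive log data sets with additional filtering might combine REDO and LIMIT as follows. The LIMIT operand form and the object name are illustrative; see Global option descriptions and Examples of PACLOG JCL for the exact syntax:

    REDO
    LIMIT (2)
    ARCHIVE1
      FILTERTS (DBASE.TSPACE1)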
Compressing archive log records
You can optionally compress the filtered archive log data produced by PACLOG to reduce the amount of tape or disk space required for storage.
Compressing data to tape
When you write the processed log data sets to tape, you can choose to use the hardware compression feature in the tape drive.
The PACLOG tape option TRTCH provides this choice. For TRTCH syntax, see Tape option descriptions.
Compressing data to disk
When you write the processed log data sets to disk, you can choose to compress the data by using the COMPRESS option.
COMPRESS provides a high degree of data compression and uses the BMC Software XCA technology. COMPRESS syntax is described in Disk option descriptions.
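For example, a disk-resident copy 1 that uses XCA compression might be specified with the COMPRESS disk option. The placement shown here is an assumption; see Disk option descriptions for the exact syntax:

    ARCHIVE1
      COMPRESS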
If you already have DASD hardware compression enabled, do not use the COMPRESS option. If you need to determine whether you have DASD hardware compression installed on your system, contact your DASD hardware provider.