OPTION options descriptions
This topic contains detailed descriptions of each OPTION statement option.
REPOS
Determines whether Log Master updates the Repository during the current job step.
Value | Description |
---|---|
YES | (Default) Log Master updates the Repository. |
NO | Log Master does not update the Repository. If you specify NO, you cannot specify the REPOS UPDATE, REPOS DELETE, or ONGOING keywords on the LOGSCAN statement. |
DATEFMT
Determines the format that Log Master uses to display date and time data on all reports. Log Master supports the following date and time formats. The default value is ISO.
Value | Description |
---|---|
USA | MM/DD/YYYY/HH:MM:SS.nnnnnn |
EUR | DD.MM.YYYY.HH.MM.SS.nnnnnn |
ISO | YYYY-MM-DD-HH.MM.SS.nnnnnn |
JIS | YYYY-MM-DD-HH:MM:SS.nnnnnn |
When running on Db2 Version 10 and later, Log Master supports timestamp precision of up to 12 digits, and inclusion of a time zone in the timestamp (YYYY-MM-DD-HH:MM:SS.nnnnnnnnnnnn±HH:MM).
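As an illustration, an arbitrary sample instant (July 4, 2024, 09:30:00.123456) would appear on reports as follows under each value:

```
USA  07/04/2024/09:30:00.123456
EUR  04.07.2024.09.30.00.123456
ISO  2024-07-04-09.30.00.123456
JIS  2024-07-04-09:30:00.123456
```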
EXECUTION MODE
Specifies the execution mode for Log Master.
- CURRENT—(Default) Directs Log Master to run in current mode. In this mode, Log Master selects log records relating to the objects that you specify, but only if the objects currently exist in the Db2 catalog.
- OVERTIME—Directs Log Master to run in overtime mode. In this mode, Log Master selects log records relating to all of the objects that you specify, regardless of whether the objects currently exist in the Db2 catalog. For more information, see Overtime mode, or see the section about processing objects over time in Processing-objects-over-time.
To update the Repository, you must select overtime mode. If you specify more than one logical log control file as input (directly or by specifying a GDG base), Log Master automatically runs in overtime mode. The ATTEMPT COMPLETION keyword enables Log Master to plan for and use image copies during row completion processing in overtime mode. By default, when Log Master runs in overtime mode it does not perform row completion on any log records associated with objects that do not exist in the Db2 catalog.
To complete log records for objects that do not exist in the Db2 catalog, take two actions:
- Specify the ATTEMPT COMPLETION keyword.
- Include an IMAGECOPY statement to specify the names of image copy data sets that contain the desired objects. For more information about the IMAGECOPY statement, see IMAGECOPY-statement.
Depending on the objects you select and the activity related to those objects, you might need to specify multiple image copy data sets. These actions increase the chances of successful row completion processing.
For more information, see the EXECMODE=CURRENT entry in Installation option descriptions.
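A minimal sketch of an overtime-mode request that also plans for image copies during row completion (the statement layout is an assumption; see the OPTION statement syntax diagram for the exact form):

```
OPTION
  EXECUTION MODE OVERTIME
  ATTEMPT COMPLETION
```

An accompanying IMAGECOPY statement would name the image copy data sets that contain the dropped objects.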
SUBSYSTEM RBA RESET DATE(date) TIME(time)
Use this keyword to provide Log Master with information that helps it determine which log records are valid for the objects involved in the log scan. Specify this keyword when all of the following conditions exist:
- You used the IBM procedure for resetting the log RBA to low values in a non-data sharing environment (a cold start with STARTRBA=ENDRBA=0).
- The record of the most recently completed such cold start has rolled off the conditional restart queue.
- Following the reset, the subsystem RBA exceeds the create RBA of one or more of the affected tables that were created before the reset.
Specify a date and time that closely approximates the timestamp of that cold start.
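A hypothetical sketch follows (the date and time literals and their formats are placeholders and assumptions; specify values that closely approximate the cold start):

```
OPTION
  SUBSYSTEM RBA RESET DATE(2021-03-15) TIME(02.00.00)
```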
FILTERREL
Specifies the relational operator that Log Master uses to connect multiple filters.
Value | Description |
---|---|
AND | (Default) Directs Log Master to connect filters with an AND relational operator. |
OR | Directs Log Master to connect filters with an OR relational operator. |
FILTER METHOD
Determines how and when Log Master obtains Db2 catalog information (DBIDs, OBIDs, or PSIDs) for the Db2 objects that are named in a filter. Log Master reads the Db2 catalog to resolve the names of Db2 objects into numeric identifiers. Log Master can read the catalog either once during the initial analyze phase of processing for all objects, or repeatedly as it encounters each object in the Db2 log.
Value | Description |
---|---|
STATIC | (Default) Directs Log Master to obtain Db2 catalog information during the analyze phase of processing. Log Master obtains information for all Db2 objects that the filter explicitly names. |
DYNAMIC | Directs Log Master to obtain Db2 catalog information dynamically as it scans the Db2 log. Log Master obtains information for the Db2 objects that are present in the scanned log records and selects log records for only the objects that are named in the filter. |
Consider the following performance implications as you choose a value:
- When you choose STATIC, Log Master can experience degraded performance during the analyze phase when all of the following conditions exist:
- Your Db2 subsystem contains a very large number of objects (for example, some enterprise resource planning applications generate tens of thousands of objects).
- Your filter uses a LIKE or NOT LIKE operator (for example, TABLE NAME LIKE OWNER.%).
- The number of objects that are actually updated during the range of the log scan is significantly smaller than the number of objects that are named by the filter.
- When you choose DYNAMIC, Log Master can incur extra processing overhead during the log scan as it regenerates the filter each time it encounters a new Db2 object.
If you select dynamic filtering (DYNAMIC), do not specify the following items. The processing required for these items is not compatible with dynamic filtering.
- The GENERATE EMPTY FILES keyword for load file output.
- A value of YES for the USELGRNG keyword of the OPTION statement.
- The LASTQUIESCE keyword of the scan range definition for REDO SQL output.
For more information, see the FLTRMTHD=STATIC entry in Installation option descriptions.
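For example, a job whose filter uses a broad LIKE pattern on a subsystem with tens of thousands of objects might request dynamic filtering (a sketch; the job must not use any of the incompatible items listed above):

```
OPTION FILTER METHOD DYNAMIC
```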
IMAGESOURCE
Specifies the source that Log Master uses to perform row completion processing. If you do not enter a value for this keyword, Log Master uses the value of the IMAGESRC installation option.
Valid values are as follows:
Value | Description |
---|---|
ANY | Perform row completion from any source available, including the Db2 log, the table space itself, or an image copy. When you select this value, Log Master uses the values of the FILECOST keyword to select a row completion source. |
TABLESPACE | Perform row completion from the table space only. This value increases the risk that Log Master can terminate with either BMC097386 'unable to decompress,' or one of several 'unable to complete' error messages. |
SYSCOPY | Perform row completion using only resources from the SYSIBM.SYSCOPY catalog table (including image copies or other events such as LOAD LOG YES actions). For more information on the types of image copies that Log Master can read, see Reading image copies. |
LOGONLY | Perform row completion from the Db2 log only. |
Log Master performs row completion processing to rebuild a complete image of a table row at a given point in time. Unless a table is defined with the Data Capture Changes (DCC) attribute, the log record of an update action usually contains only part of the table row (enough to include the changed data). Log Master uses the record ID (RID) value in the log record to obtain information about the row from other sources. For more information about row completion processing, see Row-completion-processing-and-your-jobs.
For more information, see the IMAGESRC=ANY entry in Installation option descriptions.
Reading image copies—Log Master uses image copies for row completion processing, or to obtain compression dictionaries. Be aware of the following points regarding how Log Master uses image copies:
When Log Master requires image copies, it attempts to read image copies in the following order:
- For local sites, default order is (FC, LP, LB, RP, RB).
- For remote sites, default order is (FC, RP, RB, LP, LB).
You can change these defaults by using syntax overrides or by changing installation option values.
- Log Master can read Instant Snapshot image copies that were created on intelligent hardware storage devices by the BMC AMI Copy for Db2 product with SNAPSHOT UPGRADE FEATURE.
- Log Master can read encrypted image copies created by BMC AMI Copy if the name of the key data set is provided by using the KEYDSNAM installation option.
- Log Master can read cabinet copies created by BMC AMI Copy. Cabinet copies contain a group of table spaces and indexes within a single cabinet file to provide performance improvements when managing large numbers of small table spaces.
- To read Instant Snapshot, encrypted, or cabinet image copies, both Log Master and BMC AMI Copy must use the same instance of the BMC Software BMC_BMCXCOPY table.
- Log Master cannot read Data Facility Storage Management System (DFSMS) concurrent image copies, regardless of how they were created (by using the CONCURRENT keyword of a Db2 Copy utility, or by using DFSMS outside of Db2). If the only available source for row completion processing or a dictionary is a concurrent image copy, Log Master might encounter errors or terminate abnormally.
- Consider running regular jobs to update the Log Master Repository with copies of compression dictionaries. This action can improve overall performance by enabling Log Master to avoid mounting image copies to retrieve dictionaries to process log records of compressed table spaces.
FILECOST
Assigns a relative cost to the act of reading a separate file, mounting a tape, or reading a segmented table space. Log Master uses the FILECOST values only if the IMAGESOURCE keyword is set to ANY.
Log Master uses the cost values as it chooses a source for row completion processing. Use this syntax to adapt Log Master row completion processing to your environment. To enter cost values, express them in terms of the cost of reading one page from the Db2 log.
Value | Description |
---|---|
costFile | Assigns a relative cost to processing an additional data set, expressed in terms of the cost to read one page from the Db2 log. In many cases, the data set contains image copy information. To increase the probability that Log Master uses image copies for row completion, enter a lower value. The default value is 2,000. To improve performance, as Log Master chooses between available image copy data sets, it assigns a lower costFile value to any Instant Snapshot image copies that are available. Instant Snapshot image copies are created on intelligent hardware storage devices by the BMC AMI Copy for Db2 product with SNAPSHOT UPGRADE FEATURE. For more information, see the CSTFILE=2000 entry in Installation option descriptions. |
costMount | Assigns a relative cost to performing a single tape mount, expressed in terms of the cost to read one page from the Db2 log. The default value is 25,000. For more information, see the CSTMOUNT=25000 entry in Installation option descriptions. |
costSeg | Assigns a relative cost to obtaining information from a segmented or universal table space, expressed in terms of the cost to read one page from the Db2 log. The default value is 2,000,000,000. For more information about the risks of row completion failure with mass delete actions in these types of table spaces, see the CSTSEG=2000000000 entry in Installation option descriptions. |
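A sketch that biases row completion toward image copies by lowering the file cost (the positional order costFile, costMount, costSeg is an assumption; verify it against the FILECOST syntax diagram):

```
OPTION
  IMAGESOURCE ANY
  FILECOST 500 25000 2000000000
```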
MINLOGPT
Specifies how Log Master determines the log scan end point when it runs in a data sharing environment. Specify YES to obtain the same end point across all members of the data sharing group.
Value | Description |
---|---|
NO | (Default) Indicates that Log Master does not require a common end point for all members. Log Master uses the end point that you specify. When you run an ongoing log scan in a data sharing environment, Log Master dynamically sets the value of this keyword to YES, regardless of the value that you specify. |
YES | Indicates that Log Master requires a common end point for all members of the data sharing group. A common end point is important for ongoing log scans, where Log Master requires common end points and start points to select all of the desired log records across multiple runs. |
If you do not enter a value for MINLOGPT, Log Master uses the value of the corresponding installation option.
For more information, see the MINLOGPT=NO entry in Installation option descriptions.
USELGRNG
Determines whether Log Master uses the SYSIBM.SYSLGRNX table in the Db2 directory to determine the ranges for a log scan. Use this keyword only in a data sharing environment.
Log Master reads the SYSLGRNX table only when a WHERE clause or filter refers to specific Db2 objects (columns, tables, table spaces, or databases). Log Master uses this table to determine whether the Db2 log of a data sharing member contains information about the database structures that are defined in your log scan. With this information, Log Master can avoid reading log files of members that show no activity during the initial log scan (before row completion processing). This action can improve overall performance. If you do not enter a value for USELGRNG, Log Master uses the value of the corresponding installation option.
Value | Description |
---|---|
YES | Indicates that Log Master uses the SYSLGRNX table to determine valid ranges. |
NO | Indicates that Log Master does not use SYSLGRNX to determine valid ranges. |
Specifying YES can improve Log Master performance. However, Log Master can experience degraded performance reading the SYSLGRNX table if that table is not maintained (with a Db2 Modify utility). Use the elapsed time value provided by message BMC097168 to determine the performance of Log Master SYSLGRNX processing. To improve performance in this situation, specify NO.
For more information, see the USELGRNG=NO entry in Installation option descriptions.
PROCESS PITS
Specifies that, as Log Master scans the log, it will include any log records that fall within the range of a Point-in-Time (PIT) recovery. By default, Log Master does not select log records within a PIT range, regardless of your WHERE clause or filter. In rare situations, you might need log records from within a PIT range.
A PIT recovery is a partial recovery (performed with a Db2 Recover utility) that restores a set of Db2 objects to their state at a previous point in time. After a PIT recovery is performed on a set of objects, the Db2 log contains a range of log records for those objects that are no longer valid (because the objects were recovered to a point before the log records were created). This range of invalid log records is called a PIT range. Information about PIT ranges is stored in the SYSIBM.SYSCOPY table of the Db2 catalog.
DICTIONARYSPACE value
Limits the amount of memory that Log Master uses to store compression dictionaries during processing.
Adjusting the value can change the performance of log scans that read compressed table spaces. By default, an unlimited amount of memory is available for storing compression dictionaries, and BMC recommends retaining the default. However, if you change the DICTIONARYSPACE value to limit the memory, the following considerations apply:
- The minimum value is 192 KB (the size of three dictionaries). If you enter a value that is less than the minimum, Log Master ignores your specified value and uses the minimum value.
- Allocation amounts are site specific. Perform the following steps to calculate a DICTIONARYSPACE value:
- For each compressed table space that a job uses, multiply the number of partitions by the page size (for example, 32K).
- Multiply that value by 16.
- Add the values for all compressed table spaces that a job uses.
- Log Master uses this memory dynamically, loading compression dictionaries, as required. When it reaches the DICTIONARYSPACE limit, it discards the least recently used dictionaries. With a low DICTIONARYSPACE value and large numbers of compressed table spaces, Log Master can load the same dictionary more than once, resulting in degraded performance.
- Specify allocation amounts in kilobytes (using the suffix K) or megabytes (using the suffix M).
- Log Master honors the DICTIONARYSPACE limit during most processing, but can exceed the limit when reading a compression dictionary from the Db2 log or from an image copy during completion processing.
For information about estimating overall memory, see Estimating-overall-memory-REGION.
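As a worked example of the steps above, assume a hypothetical job that reads two compressed table spaces: one with 10 partitions of 32 KB pages, and one with 4 partitions of 4 KB pages:

```
10 partitions x 32 KB = 320 KB;  320 KB x 16 = 5120 KB
 4 partitions x  4 KB =  16 KB;   16 KB x 16 =  256 KB
                                  Total      = 5376 KB

OPTION DICTIONARYSPACE 5376K
```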
MOVE TABLE
(SPE2104)
Performs analysis and processing related to the ALTER TABLESPACE MOVE TABLE functionality introduced with Db2 12 FL508.
Value | Description |
---|---|
YES | Extracts Db2 catalog and Log Master Repository information for tables that have been moved, and provides processing support for log records associated with the table before the move. |
NO | (Default) Bypasses analysis and processing of data associated with tables before the table space move. |
For more information, see the MOVETABLE=NO entry in Installation option descriptions.
AUTOLOB
Determines whether Log Master should automatically include LOBs for the generation of DDL output.
Value | Description |
---|---|
YES | Indicates that Log Master automatically includes LOBs when generating DDL output, including DDL output generated for SQL with INCLUDE DDL and Drop Recovery. |
NO | Indicates that Log Master does not automatically include LOBs when generating DDL output, including DDL output generated for SQL with INCLUDE DDL and Drop Recovery. Some catalog activity types and catalog object types require LOBs for DDL generation. If these activity and object types are not required for your generated output (for example, WHERE CATALOG ACTIVITY IN (GRANT, REVOKE DDL)), you can specify AUTOLOB NO. |
The following Catalog activity types require LOBs:
- CREATE
- DROP (undo)
- ALTER
The following Catalog object types require LOBs:
- VIEW
- TRIGGER
- PROCEDURE
- FUNCTION
- TABLESPACE (Pending alters)
- TABLE (Pending alters)
- INDEX (Pending alters)
For more information, see the AUTOLOB=YES entry in Installation option descriptions.
LOAD NOCOPYPEND
Specifies how Log Master processes each LOAD RESUME LOG NO event when establishing compression ranges and assigning a dictionary source to each range. If the LOADNCP installation option is not specified (or is not changed to YES) and the LOAD NOCOPYPEND syntax is not specified, the default is NO. However, if you specify LOAD NOCOPYPEND in the OPTION statement without a following YES or NO value, the default is YES, consistent with all other YES/NO options.
Value | Description |
---|---|
YES | Directs Log Master to assume that every LOAD RESUME LOG NO event that is used to process the specified scan range ran with NOCOPYPEND. As a result, Log Master should assume that the compression dictionary was not rebuilt. |
NO | Directs Log Master to assume that a compression dictionary was rebuilt by each LOAD RESUME LOG NO event. |
For more information, see the LOADNCP=NO entry in Installation option descriptions.
EARLY RECALL
Directs Log Master to determine whether any log files that it requires have been migrated (marked as archived in the ICF catalog). If so, Log Master issues requests to recall the required data sets before they are needed for log processing. This action avoids delays in log processing by giving the storage management software in your environment time to retrieve the required data sets.
You must set EARLY RECALL to YES to enable any of the other early recall keywords (such as MIGRATE or DASD DATASETS). The SMSTASKS keyword defines limits for the EARLY RECALL keyword. The Log Master early recall feature works with most storage management software. The related data set migration feature can migrate only data sets that are managed by the IBM DFSMShsm product.
Before you enable early recall of archived data sets, evaluate the settings of the SMSTASKS, MIGRATE, and DASDDSNS keywords to be sure that they are appropriate for your environment.
Value | Description |
---|---|
YES | (Default) Directs Log Master to issue early recall requests. |
NO | Prevents Log Master from issuing early recall requests. |
For more information, see the ERLYRCL=YES entry in Installation option descriptions.
SMSTASKS nnn
Determines the maximum number of early recall subtasks that Log Master creates. Use this keyword to optimize performance when log processing requires large numbers of migrated files.
Enter a maximum number of subtasks. To allow Log Master to determine the number of subtasks, enter 0 (the default value). When you specify 0, Log Master sets the SMSTASKS value as follows:
- In a non-data sharing environment, Log Master sets SMSTASKS to 1.
- In a data sharing environment, Log Master sets SMSTASKS equal to the number of data sharing members.
For more information, see the SMSTASKS=0 entry in Installation option descriptions.
PER MEMBER (for SMSTASKS)
Specifies the scope of the SMSTASKS value. This keyword applies only in a data sharing environment.
Value | Description |
---|---|
YES | Indicates that the value is a limit for each member of a data sharing environment. If you specify YES and the SMSTASKS value is currently set to 0, Log Master determines the SMSTASKS value for each member. |
NO | (Default) Indicates that the value is a limit for the entire data sharing group. |
For more information, see the MTPRMBR=NO entry in Installation option descriptions.
MIGRATE
Directs Log Master to request that any recalled data sets be migrated to their original status. The default value is YES. The DASD DATASETS keyword places limits on the MIGRATE keyword. Log Master issues migration requests at the end of a job, or when a job requires a greater number of data sets than the DASD DATASETS value. If a job requires more data sets than the DASDDSNS value permits, Log Master migrates data sets even when this keyword is set to NO.
Log Master always attempts to migrate the data set back to its original migration level. The Log Master data set migration feature can migrate only data sets that are managed by the IBM DFSMShsm product. The related early recall feature works with most storage management software.
Before you enable migration of recalled data sets, evaluate the settings of the SMSTASKS, ERLYRCL, and DASDDSNS keywords to ensure that they are appropriate for your environment.
Value | Description |
---|---|
YES | Directs Log Master to issue migration requests. |
NO | Prevents Log Master from issuing migration requests (unless the DASDDSNS value is exceeded). |
For more information, see the MIGRATE=YES entry in Installation option descriptions.
WAIT
Determines whether Log Master terminates at the end of processing or waits for all data set migration requests to complete. If you do not specify this keyword, Log Master uses the value of the MIGRWAIT installation option.
For more information, see the MIGRWAIT=NO entry in Installation option descriptions.
DASD DATASETS nnn
Determines the number of recalled data sets that Log Master attempts to maintain on DASD at any one time. Use this keyword to minimize DASD requirements that might result when log processing requires large numbers of archived files.
To honor the DASD DATASETS value, Log Master issues requests to migrate any recalled data sets to their original migration status and level. If a job requires more data sets than this value, Log Master issues requests to migrate data sets even if the value of the MIGRATE keyword is NO.
Depending on how quickly your environment processes migrate requests, for short periods of time, the number of recalled data sets on DASD might be greater than the DASD DATASETS value. Log Master can migrate only data sets managed by the IBM DFSMShsm product. If DFSMShsm is not available in your environment, Log Master might not be able to honor the DASD DATASETS limit value.
Enter a number to limit the number of data sets that are maintained on DASD. To avoid imposing a limit, enter 0. The default value is determined by the DASDDSNS installation option.
For more information, see the DASDDSNS=10 entry in Installation option descriptions.
PER MEMBER (for DASD DATASETS)
Specifies the scope of the DASD DATASETS value. This keyword applies only in a data sharing environment.
Value | Description |
---|---|
YES | (default) Indicates that the value is a maximum limit for each member of a data sharing environment. |
NO | Indicates that the value is a maximum limit for the entire Db2 subsystem, or the entire data sharing group. |
For more information, see the DDPRMBR=YES entry in Installation option descriptions.
GENERATE MASSDELETE
Determines the output that Log Master generates when it encounters Db2 log records that reflect a LOAD REPLACE action. When a Db2 Load utility runs with the REPLACE option, the Db2 log contains log records that Log Master can interpret as a 'mass delete' action (similar to a DELETE statement with no WHERE clause or a TRUNCATE statement). This keyword controls whether Log Master includes the mass delete action in the generated output.
Value | Description |
---|---|
YES | (Default) Directs Log Master to include the mass delete action in the generated output. |
NO | Prevents Log Master from including the mass delete action. (For example, mass delete actions might not be appropriate for auditing or historical databases.) |
For more information about LOAD REPLACE actions, see the GENMDEL=YES entry in Installation option descriptions.
HEURISTIC FORWARDCOMPLETION
Determines whether Log Master uses a key store to perform heuristic completion. A key store is a Log Master internal memory and file structure. Heuristic completion is a special type of row completion processing that is separate from, and different from, the more extensive row completion processing that Log Master performs. For more information about row completion, see IMAGESOURCE.
Heuristic completion imposes a small amount of overhead during the initial log scan to decrease the possibility of more overhead after the initial log scan. (After the initial scan, Log Master performs more row completion processing, which can include reading table spaces, reading image copies, or reading more log files). Depending on your environment, disabling heuristic completion can increase or decrease the performance of Log Master. To determine the effects of heuristic completion in your environment, examine your job’s output for a series of messages that start with message BMC97396 and list the key store as FCUSE.
The following conditions influence the results:
- Whether you define Db2 objects with Data Capture Changes (DCC).
- The patterns of insert, update, and delete activity on your Db2 objects.
- The amount of memory that you allocate to Log Master.
- The percentage of memory that you allocate to different key stores.
For more information, see the MEMPERCENT entry in STOREOPTS statement.
Value | Description |
---|---|
YES | (Default) Directs Log Master to perform heuristic completion. |
NO | Prevents Log Master from performing heuristic completion. |
For more information, see the FCUSE=YES entry in Installation option descriptions.
PROCESS COLD START URIDS
Specifies that Log Master will process transactions that were in process (but were not terminated) when a Db2 subsystem was cold started. In this context, transactions are equivalent to URIDs (unit of recovery IDs). Use this keyword to obtain information about the unterminated transactions (for example, in a report). Specify a scan range that includes Db2 processing before and after the cold start.
By default, Log Master does not process a transaction until that transaction is either committed or aborted. Because of this behavior, Log Master does not include transactions that are interrupted by a cold start in generated output. Similarly, Log Master can retain unterminated transactions in the Repository, causing ongoing log scans to read more log files than necessary as they search for commit or abort actions.
However, when you include this keyword, Log Master uses conditional restart information contained in the bootstrap data set (BSDS) to complete the following actions:
- Locate all unterminated transactions on the subsystem before the cold start.
- Mark the transactions as committed or aborted within the Log Master internal control structures.
Log Master can then process transactions, select them, include them in any generated output, and mark them as committed in the Repository.
QUIESCE AGING n
Overrides the default QUIESCEAGING installation option.
For more information, see QUIESCEAGING=-1 entry in Installation option descriptions.
RETAIN TIME
Specifies the number of days that Log Master keeps ongoing and history records in the Repository tables for a specified work ID.
Value | Description |
---|---|
ALL | Log Master bypasses record deletion and retains all records. |
NONE | Log Master deletes all records. |
n | Log Master deletes records that are older than n days. Valid values for n are 1 through 32767. |
RETAIN TIME overrides the default RETAINTIME installation option. For more information, see RETAINTIME.
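For example, to delete ongoing and history records older than 30 days for the work ID (a sketch; placement within the OPTION statement is assumed):

```
OPTION RETAIN TIME 30
```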
RESOURCE SELECTION
Specifies the order in which Log Master uses image copy resources, both for row completion processing and for compression dictionary access during data decompression.
Value | Description |
---|---|
COPIES | Specifies the order in which Log Master uses image copy resources. You can specify one to five of the values FC, LP, LB, RP, and RB, in any order. LP and LB indicate the primary and secondary local image copies. RP and RB indicate the primary and secondary remote image copies. FC indicates an IBM FlashCopy. The default order is (FC, LP, LB) when operating at a local site, and (RP, RB, FC) when operating at a remote site. Omit values for copies of the resource that you do not want Log Master to consider. For more information, see the LOCCPSEL=(FC,LP,LB) and REMCPSEL=(RP,RB,FC) entries in Installation option descriptions. |
LOGS | Specifies the order in which Log Master reads active and archive log files. For all log scans, Log Master searches the log files in the order that you enter. Specify log keywords in any order. Omit keywords for log files that you do not want Log Master to consider. If you omit keywords, Log Master can terminate with error messages if it cannot find required log records in the log files that you have specified. The default order is ACT1, ACT2, ARCH1, and ARCH2. You must specify at least one of the values. Use OPTIONS RESOURCE SELECTION LOGS to specify a value to override the USELOGS installation option value. For more information, see the USELOGS=(ACT1, ACT2, ARCH1, ARCH2) entry in Installation option descriptions. |
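A sketch that restricts Log Master to local image copies and skips the secondary archive log (the parenthesized-list form is inferred from the defaults shown above; verify it against the syntax diagram):

```
OPTIONS
  RESOURCE SELECTION COPIES (FC, LP, LB)
                     LOGS (ACT1, ACT2, ARCH1)
```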
URID THRESHOLD
Overrides the default URIDTHR installation option.
For more information, see the URIDTHR=0 entry in Installation option descriptions.
USE UTILITY DELETES
Specifies whether Log Master should process and use the delete records logged by the DSNUTILB utility when invoked by the Db2 LOAD or REPAIR utilities or by the EXEC SQL statement.
These delete records are created when:
- DSNUTILB drops individual rows that do not meet a unique index constraint during the index build process
- The Db2 REPAIR utility or EXEC SQL DELETE statement explicitly deletes rows
- EXEC SQL UPDATE statements cause overflow of data pages
Value | Description |
---|---|
YES | (Default) Directs Log Master to process and use the delete records logged by the DSNUTILB utility. |
NO | Directs Log Master to ignore delete records logged by the DSNUTILB utility. |
For more information, see the USEUTILITYDELETES=YES entry in Installation option descriptions.
ZIIP
Controls whether Log Master attempts to use IBM System z Integrated Information Processors (zIIPs). Log Master can use enclave service request blocks (SRBs) to enable zIIP processing automatically while running jobs. Using zIIP processing can reduce the overall CPU time for Log Master jobs. To enable and use zIIP processing with Log Master, you must meet the following requirements:
- Have an installed and authorized version of the EXTENDED BUFFER MANAGER product or the SNAPSHOT UPGRADE FEATURE technology.
- Start and maintain an XBM subsystem in your environment.
- Have a zIIP available in your environment.
For more information about the XBM component that enables the use of zIIPs, see the SNAPSHOT UPGRADE FEATURE for DB2 documentation.
Value | Description |
---|---|
ENABLED | (Default) If a zIIP is available, Log Master attempts to offload eligible processing to the zIIP. If the zIIP is busy or not available, normal processing continues on a general-purpose processor. |
DISABLED | Log Master does not attempt to use zIIP processing. |
For more information, see the ZIIP=ENABLED entry in Installation option descriptions.
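For example, to prevent a job step from attempting to offload work to a zIIP, you might code the following. The statement is a sketch; confirm the exact syntax against the OPTION syntax diagrams for your release:

```
OPTION ZIIP DISABLED
```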
Overtime mode
Log Master provides the ability to run in overtime mode.
In overtime mode, Log Master reads all of the log records that are related to selected objects, regardless of whether the objects exist in the Db2 catalog. In normal operation (called current mode), Log Master reads the Db2 catalog to get information about the structure of selected objects. However, when an object is dropped, Db2 deletes all references to the object from the Db2 catalog. In overtime mode, Log Master must use other sources to obtain structure definitions.
Use overtime mode when the current time is after a drop action or a drop and re-create action, but you need to retrieve log records that were written before the action.
The following considerations apply to overtime mode:
- Overtime mode should be used only when you need to retrieve log records for dropped objects. If you do not need to retrieve log records for dropped objects, BMC recommends running Log Master in current mode.
- An overtime job typically uses more resources and experiences more processing overhead than a job that runs in current mode.
- Log Master refers to dropped Db2 objects (or Db2 objects that have been dropped and re-created) as old objects. Overtime mode enables Log Master to process log records that are related to old objects.
- In overtime mode, Log Master uses other sources to obtain structure definitions of old objects. You must perform at least one of the following types of extra processing to update these sources:
- Run periodic jobs that update the Log Master Repository with structure definitions for your old objects (proactive method).
- Run an extra log scan that updates the Repository immediately before you use overtime mode (reactive method).
- Log Master refers to the version of a Db2 object that exists between a create action and the following drop action as an instance of that object. Depending on the time frame of your log scan, and the times when an object is dropped and re-created, Log Master can encounter log records that are related to multiple instances of the same object.
- By default, Log Master does not perform row completion processing when it runs in overtime mode. Optionally, you can specify the ATTEMPT COMPLETION keyword and the IMAGECOPY statement to direct Log Master to perform row completion processing using available image copy data sets.
- Overtime mode is not the same as the Log Master automated drop recovery feature. Drop recovery restores dropped objects. Overtime mode enables you to retrieve data from the Db2 log that is associated with old objects, and to generate output that is based on the data. Overtime mode does not restore the old objects.
- If you use the proactive method to update the Repository, schedule the update jobs to run before the following processing:
- Regular production processing. For example, if you run a set of jobs every week, you should run a job to update the Repository before you run the weekly processing jobs.
- Db2 LOAD or REORG actions that update compression dictionaries, or that might assign table rows to different record ID (RID) values.
If you update the Repository regularly, BMC Software recommends that you also run regular jobs to delete old or unusable data (particularly any compression dictionaries) from the Old Objects Table. Alternatively, you can delete (or display) information from this table by using an option on the Main Menu of the Log Master online interface.
- The resources that you provide for completion processing must be accurate and complete to avoid errors.
- For system-time temporal tables, the Log Master Repository does not maintain the versioning relationship between the system-maintained table and its associated history table. Therefore, for overtime processing, your filter must include both the base table and the history table.
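As a sketch of how these pieces fit together, a job step that retrieves data for dropped objects might run in overtime mode and allow Repository updates (REPOS YES is the default, shown here for clarity). Whether both keywords can appear on a single OPTION statement is an assumption to verify against the syntax diagrams for your release, and the placement of the ATTEMPT COMPLETION keyword and any IMAGECOPY statement depends on your LOGSCAN syntax:

```
OPTION EXECUTION MODE OVERTIME
       REPOS YES
```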
For more information about overtime mode, see Processing-objects-over-time.