
OPTION statement

The OPTION statement specifies global options for use in a Log Master job. You can use the OPTION statement to override some installation options. If needed, enter one OPTION statement for each job. The OPTION statement must appear before any other statements.
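
As an illustrative sketch (the keyword spellings are taken from the descriptions below, but the combination and line layout are assumptions rather than definitive coding), an OPTION statement that overrides two installation options for one job might look like this:

    OPTION
       EXECUTION MODE OVERTIME
       DATEFMT ISO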

Note

Use the REPOS UPDATE option to update the old objects Repository table, instead of the reactive method (an old objects data set), to obtain old object structure definitions.

Note

OLD OBJECTS is deprecated with PTF BQU2282.


The following figure shows the syntax for the OPTION statement.

     
    DASD DATASETS nnn

    Determines the number of recalled data sets that Log Master attempts to maintain on DASD at any one time. Use this keyword to minimize DASD requirements that might result when log processing requires large numbers of archived files.

    To honor the DASD DATASETS value, Log Master issues requests to migrate any recalled data sets to their original migration status and level. If a job requires more data sets than this value, Log Master issues requests to migrate data sets even if the value of the MIGRATE keyword is NO.

    Depending on how quickly your environment processes migrate requests, for short periods of time, the number of recalled data sets on DASD might be greater than the DASD DATASETS value. Log Master can migrate only data sets managed by the IBM DFSMShsm product. If DFSMShsm is not available in your environment, Log Master might not be able to honor the DASD DATASETS limit value.

    Enter a number to limit the number of data sets that are maintained on DASD. To avoid imposing a limit, enter 0. The default value is determined by the DASDDSNS installation option (see the DASDDSNS=10 entry in Installation option descriptions).

    DATEFMT

    Determines the format that Log Master uses to display date and time data on all reports. Log Master supports the following date and time formats. The default value is ISO.

    • USA

      MM/DD/YYYY/HH:MM:SS.nnnnnn

    • EUR

      DD.MM.YYYY.HH.MM.SS.nnnnnn

    • ISO

      YYYY-MM-DD-HH.MM.SS.nnnnnn

    • JIS

      YYYY-MM-DD-HH:MM:SS.nnnnnn

    When running on DB2 Version 10 and later, Log Master supports precision timestamps up to 12 digits, and inclusion of a time zone in the timestamp (YYYY-MM-DD-HH:MM:SS.nnnnnnnnnnnn±HH:MM).
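
    For example, the same instant (values are illustrative) rendered under each format:

      USA  12/31/2024/23:59:59.123456
      EUR  31.12.2024.23.59.59.123456
      ISO  2024-12-31-23.59.59.123456
      JIS  2024-12-31-23:59:59.123456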


    DICTIONARYSPACE value

    Limits the amount of memory that Log Master uses to store compression dictionaries during processing.

    Adjusting the value can change the performance of log scans that read compressed table spaces. By default, an unlimited amount of memory is available for storing compression dictionaries, and BMC recommends retaining the default. However, if you change the DICTIONARYSPACE value to limit the memory, the following considerations apply:

    • The minimum value is 192 KB (the size of three dictionaries). If you enter a value that is less than the minimum, Log Master ignores your specified value and uses the minimum value.

    • Allocation amounts are site specific. Perform the following steps to calculate a DICTIONARYSPACE value:

      1. For each compressed table space that a job uses, multiply the number of partitions by the page size (for example, 32K).

      2. Multiply that value by 16.

      3. Add the values for all compressed table spaces that a job uses.

    • Log Master uses this memory dynamically, loading compression dictionaries, as required. When it reaches the DICTIONARYSPACE limit, it discards the least recently used dictionaries. With a low DICTIONARYSPACE value and large numbers of compressed table spaces, Log Master can load the same dictionary more than once, resulting in degraded performance.

    • Specify allocation amounts in kilobytes (using the suffix K) or megabytes (using the suffix M).

    • Log Master honors the DICTIONARYSPACE limit during most processing, but can exceed the limit when reading a compression dictionary from the DB2 log or from an image copy during completion processing.
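
    As a worked example of the calculation above (table space sizes are illustrative): for a job that uses two compressed table spaces, one with 10 partitions and 32-KB pages and one with 4 partitions and 4-KB pages:

      10 × 32 KB × 16 = 5120 KB
       4 ×  4 KB × 16 =  256 KB
      Total           = 5376 KB

    You might therefore specify DICTIONARYSPACE 5376K, or round up to 6M.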

    For information about estimating overall memory, see Estimating overall memory (REGION).

    EARLY RECALL

    Directs Log Master to determine whether any log files that it requires have been migrated (marked as archived in the ICF catalog). If so, Log Master issues requests to recall the required data sets before they are needed for log processing. This action avoids delays in log processing by giving the storage management software in your environment time to retrieve the required data sets.

    You must set EARLY RECALL to YES to enable any of the other early recall keywords (such as MIGRATE or DASD DATASETS). The SMSTASKS keyword defines limits for the EARLY RECALL keyword. The Log Master early recall feature works with most storage management software. The related data set migration feature can migrate only data sets that are managed by the IBM DFSMShsm product.

    Before you enable early recall of archived data sets, evaluate the settings of the SMSTASKS, MIGRATE, and DASDDSNS keywords to be sure that they are appropriate for your environment.

    • YES

      Directs Log Master to issue early recall requests. This is the default value.

    • NO

      Prevents Log Master from issuing early recall requests.

    See the ERLYRCL=YES entry in Installation option descriptions.

    EXECUTION MODE

    Specifies the execution mode for Log Master.

    • CURRENT

      Directs Log Master to run in current mode. In this mode, Log Master selects log records relating to the objects that you specify, but only if the objects currently exist in the DB2 catalog. This is the default value.

    • OVERTIME

      Directs Log Master to run in overtime mode. In this mode, Log Master selects log records relating to all of the objects that you specify, regardless of whether the objects currently exist in the DB2 catalog. For more information about overtime mode, see Overtime mode or Processing objects over time.

      To update the Repository, you must select overtime mode. If you specify more than one logical log control file as input (directly or by specifying a GDG base), Log Master automatically runs in overtime mode.

      The ATTEMPT COMPLETION keyword enables Log Master to plan for and use image copies during row completion processing in overtime mode. By default, when Log Master runs in overtime mode it does not perform row completion on any log records associated with objects that do not exist in the DB2 catalog.

      To complete log records for objects that do not exist in the DB2 catalog, take two actions: specify this keyword and include an IMAGECOPY statement (to specify the names of image copy data sets that contain the desired objects). For more information about the IMAGECOPY statement, see IMAGECOPY statement. Depending on the objects you select and the activity related to those objects, you might need to specify multiple image copy data sets. These actions increase the chances of successful row completion processing.

      See also EXECMODE=CURRENT.
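
      A sketch of the overtime-mode setup described above (the layout is an assumption, and the IMAGECOPY operands are elided because their syntax is documented separately in IMAGECOPY statement):

        OPTION
           EXECUTION MODE OVERTIME ATTEMPT COMPLETION
        IMAGECOPY ...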

    FILECOST

    Assigns a relative cost to the act of reading a separate file, mounting a tape, or reading a segmented table space. Log Master uses the FILECOST values only if the IMAGESOURCE keyword is set to ANY.

    Log Master uses the cost values as it chooses a source for row completion processing. Use this syntax to adapt Log Master row completion processing to your environment. To enter cost values, express them in terms of the cost of reading one page from the DB2 log.

    • costFile

      Assigns a relative cost to processing an additional data set, expressed in terms of the cost to read one page from the DB2 log. In many cases, the data set contains image copy information. To increase the probability that Log Master uses image copies for row completion, enter a lower value. The default value is 2,000.

      To improve performance, as Log Master chooses between available image copy data sets, it assigns a lower costFile value to any Instant Snapshot image copies that are available. Instant Snapshot image copies are created on intelligent hardware storage devices by the NGT Copy product with SNAPSHOT UPGRADE FEATURE (SUF).

      See the CSTFILE=2000 entry in Installation option descriptions.

    • costMount

      Assigns a relative cost to performing a single tape mount, expressed in terms of the cost to read one page from the DB2 log. The default value is 25,000.

      See the CSTMOUNT=25000 entry in Installation option descriptions.

    • costSeg

      Assigns a relative cost to obtaining information from a segmented or universal table space, expressed in terms of the cost to read one page from the DB2 log. The default value is 2,000,000,000.

      For more information about the risks of row completion failure with mass delete actions in these types of table spaces, see the CSTSEG=2000000000 entry in Installation option descriptions.
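
    To see how the default costs interact: with IMAGESOURCE ANY, an image copy that requires one tape mount plus one additional data set is weighted at 25,000 + 2,000 = 27,000 log-page reads, so Log Master favors that image copy only when the competing sources would cost more. Lowering costFile and costMount shifts the balance toward image copies; the very high costSeg default effectively steers Log Master away from segmented and universal table spaces as a completion source.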

    FILTER METHOD

    Determines how and when Log Master obtains DB2 catalog information (DBIDs, OBIDs, or PSIDs) for the DB2 objects that are named in a filter. Log Master reads the DB2 catalog to resolve the names of DB2 objects into numeric identifiers. Log Master can read the catalog either once during the initial analyze phase of processing for all objects, or repeatedly as it encounters each object in the DB2 log.

    • STATIC

      Directs Log Master to obtain DB2 catalog information during the analyze phase of processing. Log Master obtains information for all DB2 objects that the filter explicitly names. This is the default value.

    • DYNAMIC

      Directs Log Master to obtain DB2 catalog information dynamically as it scans the DB2 log. Log Master obtains information for the DB2 objects that are present in the scanned log records and selects log records for only the objects that are named in the filter.

    Consider the following performance implications as you choose a value:

    • When you choose STATIC, Log Master can experience degraded performance during the analyze phase when all of the following conditions exist:

      • Your DB2 subsystem contains a very large number of objects (for example, some enterprise resource planning applications generate tens of thousands of objects).

      • Your filter uses a LIKE or NOT LIKE operator (for example, TABLE NAME LIKE OWNER.%).

      • The number of objects that are actually updated during the range of the log scan is significantly smaller than the number of objects that are named by the filter.

    • When you choose DYNAMIC, Log Master can incur extra processing overhead during the log scan as it regenerates the filter each time it encounters a new DB2 object.

    Note

    BMC Software recommends retaining the default value unless you experience degraded performance during the Log Master analyze phase. Use the elapsed time values provided by message BMC097024 ANALYZE FINISHED to determine the performance of the analyze phase.

    If you select dynamic filtering (DYNAMIC), do not specify the following items. The processing required for these items is not compatible with dynamic filtering.

    • The GENERATE EMPTY FILES keyword for load file output.

    • A value of YES for the USELGRNG keyword of the OPTION statement.

    • The LASTQUIESCE keyword of the scan range definition for REDO SQL output.

    See also FLTRMTHD=STATIC.
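
    For example, a sketch of a job where DYNAMIC filtering can pay off (the statement layout is an assumption; the filter wording follows the LIKE example above):

      OPTION FILTER METHOD DYNAMIC
      LOGSCAN ...
         WHERE TABLE NAME LIKE OWNER.%

    With tens of thousands of cataloged tables but only a few updated within the scan range, DYNAMIC avoids resolving every matching object name during the analyze phase.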

    FILTERREL

    Specifies the relational operator that Log Master uses to connect multiple filters.

    • AND

      Directs Log Master to connect filters with an AND relational operator. This is the default value.

    • OR

      Directs Log Master to connect filters with an OR relational operator.

    GENERATE MASSDELETE

    Determines the output that Log Master generates when it encounters DB2 log records that reflect a LOAD REPLACE action. When a DB2 Load utility runs with the REPLACE option, the DB2 log contains log records that Log Master can interpret as a 'mass delete' action (similar to a DELETE statement with no WHERE clause or a TRUNCATE statement). This keyword controls whether Log Master includes the mass delete action in the generated output.

    • YES

      Directs Log Master to include the mass delete action in the generated output. This is the default value.

    • NO

      Prevents Log Master from including the mass delete action. (For example, mass delete actions might not be appropriate for auditing or historical databases.)

    For more information about LOAD REPLACE actions, see the GENMDEL=YES entry in Installation option descriptions.

    HEURISTIC FORWARDCOMPLETION

    Determines whether Log Master uses a key store to perform heuristic completion. A key store is a Log Master internal memory and file structure. Heuristic completion is a special type of row completion processing that is distinct from the more extensive row completion processing that Log Master performs. For more information about row completion, see IMAGESOURCE.

    Heuristic completion imposes a small amount of overhead during the initial log scan to decrease the possibility of more overhead after the initial log scan. (After the initial scan, Log Master performs more row completion processing, which can include reading table spaces, reading image copies, or reading more log files). Depending on your environment, disabling heuristic completion can increase or decrease the performance of Log Master. To determine the effects of heuristic completion in your environment, examine your job’s output for a series of messages that start with message BMC97396 and list the key store as FCUSE.

    The following conditions influence the results:

    • Whether you define DB2 objects with Data Capture Changes (DCC).

    • The patterns of insert, update, and delete activity on your DB2 objects.

    • The amount of memory that you allocate to Log Master.

    • The percentage of memory that you allocate to different key stores (for more information, see the MEMPERCENT entry in STOREOPTS statement).

    Note

    BMC recommends that you consult BMC Software Customer Support before changing the default value.

    • YES

      Directs Log Master to perform heuristic completion. This is the default value.

    • NO

      Prevents Log Master from performing heuristic completion.

    See the FCUSE=YES entry in Installation option descriptions.

    IMAGESOURCE

    Specifies the source that Log Master uses to perform row completion processing. If you do not enter a value for this keyword, Log Master uses the value of the IMAGESRC installation option. Valid values are as follows:

    • ANY

      Perform row completion from any source available, including the DB2 log, the table space itself, or an image copy. When you select this value, Log Master uses the values of the FILECOST keyword to select a row completion source.

    • TABLESPACE

      Perform row completion from the table space only. This value increases the risk that Log Master terminates with either message BMC097386 'unable to decompress' or one of several 'unable to complete' error messages.

    • SYSCOPY

      Perform row completion using only resources from the SYSIBM.SYSCOPY catalog table (including image copies or other events such as LOAD LOG YES actions). For more information on the types of image copies that Log Master can read, see Reading image copies.

    • LOGONLY

      Perform row completion from the DB2 log only.

    Log Master performs row completion processing to rebuild a complete image of a table row at a given point in time. Unless a table is defined with the Data Capture Changes (DCC) attribute, the log record of an update action usually contains only part of the table row (enough to include the changed data). Log Master uses the record ID (RID) value in the log record to obtain information about the row from other sources. For more information about row completion processing, see Row completion processing and your jobs .

    See also IMAGESRC=ANY.

    Reading image copies

    Log Master uses image copies for row completion processing, or to obtain compression dictionaries. Be aware of the following points regarding how Log Master uses image copies:

    • When Log Master requires image copies, it attempts to read image copies in the following order:

      • For local sites, default order is (FC, LP, LB, RP, RB).

      • For remote sites, default order is (FC, RP, RB, LP, LB).

      You can change these defaults by using syntax overrides or by changing installation option values.

    • Log Master can read Instant Snapshot image copies that were created on intelligent hardware storage devices by the BMC Next Generation Technology Copy for DB2 for z/OS product with SNAPSHOT UPGRADE FEATURE (SUF).

    • Log Master can read encrypted image copies created by NGT Copy if the name of the key data set is provided by using the KEYDSNAM installation option.

    • Log Master can read cabinet copies created by NGT Copy. Cabinet copies contain a group of table spaces and indexes within a single cabinet file to provide performance improvements when managing large numbers of small table spaces.

    • To read Instant Snapshot, encrypted, or cabinet image copies, both Log Master and NGT Copy must use the same instance of the BMC Software BMC_BMCXCOPY table.

    • Log Master cannot read Data Facility Storage Management System (DFSMS) concurrent image copies, regardless of how they were created (by using the CONCURRENT keyword of a DB2 Copy utility, or by using DFSMS outside of DB2). If the only available source for row completion processing or a dictionary is a concurrent image copy, Log Master might encounter errors or terminate abnormally.

    • Consider running regular jobs to update the Log Master Repository with copies of compression dictionaries. This action can improve overall performance by enabling Log Master to avoid mounting image copies to retrieve dictionaries to process log records of compressed table spaces.

    MIGRATE

    Directs Log Master to request that any recalled data sets be migrated to their original status. The default value is YES. The DASD DATASETS keyword places limits on the MIGRATE keyword. Log Master issues migration requests at the end of a job, or when a job requires a greater number of data sets than the DASD DATASETS value. If a job requires more data sets than the DASDDSNS value permits, Log Master migrates data sets even when this keyword is set to NO.

    Log Master always attempts to migrate the data set back to its original migration level. The Log Master data set migration feature can migrate only data sets that are managed by the IBM DFSMShsm product. The related early recall feature works with most storage management software.

    Before you enable migration of recalled data sets, evaluate the settings of the SMSTASKS, ERLYRCL, and DASDDSNS keywords to ensure that they are appropriate for your environment.

    • YES

      Directs Log Master to issue migration requests.

    • NO

      Prevents Log Master from issuing migration requests (unless the DASDDSNS value is exceeded).

    See the MIGRATE=YES entry in Installation option descriptions.

    MINLOGPT

    Specifies how Log Master determines the log scan end point when it runs in a data sharing environment. Specify YES to obtain the same end point across all members of the data sharing group.

    • NO

      Indicates that Log Master does not require a common end point for all members. Log Master uses the end point that you specify. This is the default value. When you run an ongoing log scan in a data sharing environment, Log Master dynamically sets the value of this keyword to YES, regardless of the value that you specify.

    • YES

      Indicates that Log Master requires a common end point for all members of the data sharing group. A common end point is important for ongoing log scans, where Log Master requires common end points and start points to select all of the desired log records across multiple runs.

    If you do not enter a value for MINLOGPT, Log Master uses the value of the corresponding installation option. For more information (including a diagram), see the MINLOGPT=NO entry in Installation option descriptions.

    OLD OBJECTS dataSetName

    Specifies the name of the old objects data set that Log Master uses. You can create an old objects data set to hold structure definitions of DB2 objects that are not currently defined in the DB2 catalog (old objects). The old objects data set should be used only when Log Master is operating in overtime mode. For more information about the syntax used within this data set, see Old objects data set syntax.

    Note

    OLD OBJECTS is deprecated with PTF BQU2282.

    PER MEMBER (for DASD DATASETS)

    Specifies the scope of the DASD DATASETS value. This keyword applies only in a data sharing environment.

    • YES

      Indicates that the value is a maximum limit for each member of a data sharing environment. This is the default value.

    • NO

      Indicates that the value is a maximum limit for the entire DB2 subsystem, or the entire data sharing group.

    See the DDPRMBR=YES entry in Installation option descriptions.

    PER MEMBER (for SMSTASKS)

    Specifies the scope of the SMSTASKS value. This keyword applies only in a data sharing environment.

    • YES

      Indicates that the value is a limit for each member of a data sharing environment. If you specify YES and the SMSTASKS value is currently set to 0, Log Master sets the SMSTASKS value as follows:

      • In a non-data sharing environment, Log Master sets SMSTASKS to a value of 1.

      • In a data sharing environment, Log Master sets SMSTASKS to a value equal to the number of members in the data sharing group.

    • NO

      Indicates that the value is a limit for the entire data sharing group. This is the default value.

    See the MTPRMBR=NO entry in Installation option descriptions.

    PROCESS COLD START URIDS

    Specifies that Log Master will process transactions that were in process (but were not terminated) when a DB2 subsystem was cold started. In this context, transactions are considered to be the same as URIDs. Use this keyword to obtain information about the unterminated transactions (for example, in a report). Specify a scan range that includes DB2 processing before and after the cold start.

    By default, Log Master does not process a transaction until that transaction is either committed or aborted. Because of this behavior, Log Master does not include transactions that are interrupted by a cold start in generated output. Similarly, Log Master can retain unterminated transactions in the Repository, causing ongoing log scans to read more log files than necessary as they search for commit or abort actions.

    However, when you include this keyword, Log Master uses conditional restart information contained in the bootstrap data set (BSDS) to complete the following actions:

    • Locate all unterminated transactions on the subsystem before the cold start.

    • Mark the transactions as committed or aborted within the Log Master internal control structures.

    Log Master can then process transactions, select them, include them in any generated output, and mark them as committed in the Repository.

    PROCESS PITS

    Specifies that, as Log Master scans the log, it will include any log records that fall within the range of a Point-in-Time (PIT) recovery. By default, Log Master does not select log records within a PIT range, regardless of your WHERE clause or filter. In rare situations, you might need log records from within a PIT range.

    A PIT recovery is a partial recovery (performed with a DB2 Recover utility) that restores a set of DB2 objects to their state at a previous point in time. After a PIT recovery is performed on a set of objects, the DB2 log contains a range of log records for those objects that are no longer valid (because the objects were recovered to a point before the log records were created). This range of invalid log records is called a PIT range. Information about PIT ranges is stored in the SYSIBM.SYSCOPY table of the DB2 catalog.

    Warning

    BMC Software does not recommend using log records from within a PIT range. Exercise caution as you select log records within a PIT range or use the information contained in those log records.

    If you take actions or apply changes to your database based on the output of a log scan that combines PIT and normal log records, you can corrupt the data in your database.

    To process log records within a PIT range, process them in a separate log scan that covers only the PIT range.

    QUIESCE AGING n

    Overrides the default QUIESCEAGING installation option. For more information, see QUIESCEAGING=-1.

    REPOS (PTF BQU2282 applied)

    Determines whether Log Master updates the Repository during the current job step.

    • YES

      Log Master updates the Repository. The default value is YES.

    • NO

      Log Master does not update the Repository. If you specify NO, you cannot specify the REPOS UPDATE, REPOS DELETE, or ONGOING keywords on the LOGSCAN statement.

    RESOURCE SELECTION

    Specifies the order in which Log Master uses image copy resources, both for row completion processing and for compression dictionary access during data decompression.

    • COPIES

      Specify one to five of the values FC, LP, LB, RP, and RB, in any order. LP and LB indicate the primary and secondary local image copies. RP and RB indicate the primary and secondary remote image copies. FC indicates an IBM FlashCopy. The default order is (FC, LP, LB) when operating at a local site, and (RP, RB, FC) when operating at a remote site. Omit references to copies that you do not want Log Master to consider.

      See also the LOCCPSEL=(FC,LP,LB) and REMCPSEL=(RP,RB,FC) entries in Installation option descriptions.

    • LOGS

      Specifies the order in which Log Master reads active and archive log files. For all log scans, Log Master searches the log files in the order that you enter. Specify log keywords in any order. Omit keywords for log files that you do not want Log Master to consider. If you omit keywords, Log Master can terminate with error messages if it cannot find required log records in the log files that you have specified. The default order is ACT1, ACT2, ARCH1, and ARCH2. You must specify at least one of the values.

      Use OPTIONS RESOURCE SELECTION LOGS to specify a value to override the USELOGS installation option value (see the USELOGS=(ACT1, ACT2, ARCH1, ARCH2) entry in Installation option descriptions).
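
      An illustrative combination (the parenthesized-list form mirrors the LOCCPSEL and USELOGS installation options; treat the layout as a sketch, not definitive syntax):

        OPTION
           RESOURCE SELECTION COPIES (FC, LP, LB)
                              LOGS (ACT1, ARCH1)

      This sketch considers only local image copies and only the first active and archive logs; because omitted values are never considered, confirm that the remaining sources can satisfy the scan.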

    RETAIN TIME

    (PTF BQU2282 applied)

    Specifies the number of days that Log Master keeps ongoing and history records in the Repository tables for a specified work ID.

    • ALL
      Log Master bypasses record deletion and retains all records.
    • NONE
      Log Master deletes all records.
    • n
      Log Master deletes records that are older than n days. Valid values for n are 1 through 32767.

    RETAIN TIME overrides the default RETAINTIME installation option. For more information, see RETAINTIME.
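
    For example (layout assumed), to delete Repository ongoing and history records older than 30 days for the job's work ID:

      OPTION RETAIN TIME 30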

    SMSTASKS nnn

    Determines the maximum number of early recall subtasks that Log Master creates. Use this keyword to optimize performance when log processing requires large numbers of migrated files.

    Enter a maximum number of subtasks. To allow Log Master to determine the number of subtasks, enter 0 (the default value). When you specify 0, Log Master sets the SMSTASKS value as follows:

    • In a non-data sharing environment, Log Master sets SMSTASKS to 1.

    • In a data sharing environment, Log Master sets SMSTASKS equal to the number of data sharing members.

    See the SMSTASKS=0 entry in Installation option descriptions.

    SUBSYSTEM RBA RESET DATE(date) TIME(time)

    Use this keyword to provide Log Master with information to help discern which log records are valid for the objects implicated by the log scans. Specify this keyword when the following conditions exist:

    • You used the IBM procedure for resetting the log RBA to low values in a non-data sharing environment (cold start with STARTRBA=ENDRBA=0).

    • The record of the most recent such cold start has rolled off the conditional restart queue.

    • Following the reset, the subsystem RBA exceeds the create RBA of one or more of the implicated tables that were created before the reset.

    Specify a date and time that closely approximates the timestamp of that cold start.
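
    A sketch (the DATE and TIME operand formats shown are assumptions; use the formats that your site's syntax accepts) for a cold start that completed at about 03:00 on 15 January 2024:

      OPTION SUBSYSTEM RBA RESET DATE(2024-01-15) TIME(03.00.00)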

    URID THRESHOLD

    Overrides the default URIDTHR installation option.

    See also the URIDTHR=0 entry in Installation option descriptions.

    USE UTILITY DELETES

    Specifies whether Log Master should process and use the delete records logged by the DSNUTILB utility when invoked by the DB2 LOAD or REPAIR utilities or by the EXEC SQL statement.

    These delete records are created when:

    • DSNUTILB drops individual rows that do not meet a unique index constraint during the index build process

    • The DB2 REPAIR utility or EXEC SQL DELETE statement explicitly deletes rows

    • EXEC SQL UPDATE statements cause overflow of data pages

    • YES

      Directs Log Master to process and use the delete records logged by the DSNUTILB utility. YES is the default.

    • NO

      Directs Log Master to ignore delete records logged by the DSNUTILB utility.

    See also the USEUTILITYDELETES=YES entry in Installation option descriptions.

    USELGRNG

    Determines whether Log Master uses the SYSIBM.SYSLGRNX table in the DB2 directory to determine the ranges for a log scan. Use this keyword only in a data sharing environment.

    Log Master reads the SYSLGRNX table only when a WHERE clause or filter refers to specific DB2 objects (columns, tables, table spaces, or databases). Log Master uses this table to determine whether the DB2 log of a data sharing member contains information about the database structures that are defined in your log scan. With this information, Log Master can avoid reading log files of members that show no activity during the initial log scan (before row completion processing). This action can improve overall performance. If you do not enter a value for USELGRNG, Log Master uses the value of the corresponding installation option.

    • YES

      Indicates that Log Master uses the SYSLGRNX table to determine valid ranges.

    • NO

      Indicates that Log Master does not use SYSLGRNX to determine valid ranges.

    Specifying YES can improve Log Master performance. However, Log Master can experience degraded performance reading the SYSLGRNX table if that table is not maintained (with a DB2 Modify utility). Use the elapsed time value provided by message BMC097168 to determine the performance of Log Master SYSLGRNX processing. To improve performance in this situation, specify NO.

    See the USELGRNG=NO entry in Installation option descriptions.

    WAIT

    Determines whether Log Master terminates at the end of processing or waits for all data set migration requests to complete. If you do not specify this keyword, Log Master uses the value of the MIGRWAIT installation option (see the MIGRWAIT=NO entry in Installation option descriptions).

    ZIIP

    Controls whether Log Master attempts to use IBM System z Integrated Information Processors (zIIPs). Log Master can use enclave service request blocks (SRBs) to enable zIIP processing automatically while running jobs. Using zIIP processing can reduce the overall CPU time for Log Master jobs. To enable and use zIIP processing with Log Master, you must meet the following requirements:

    • Have an installed and authorized version of the EXTENDED BUFFER MANAGER (XBM) product or the SNAPSHOT UPGRADE FEATURE (SUF) technology.

    • Start and maintain an XBM subsystem in your environment.

    • Have a zIIP available in your environment.

    For more information about the XBM component that enables the use of zIIPs, see the SNAPSHOT UPGRADE FEATURE for DB2 documentation.

    • ENABLED

      If a zIIP is available, Log Master attempts to offload eligible processing to the zIIP. If the zIIP is busy or not available, normal processing continues on a general-purpose processor. This is the default value.

    • DISABLED

      Log Master does not attempt to use zIIP processing.

    See also the ZIIP=ENABLED entry in Installation option descriptions.
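    Putting the keywords above together, the following is a hedged sketch of an OPTION statement; the keywords and values come from this section, but the exact layout and keyword order shown are assumptions, so check your generated JCL or the syntax diagram for the precise form:

    ```
    OPTION
        USELGRNG YES
        ZIIP DISABLED
    ```

    Remember that the OPTION statement must appear before any other statements in the job.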

    Overtime mode

    Log Master provides the ability to run in overtime mode.

    In overtime mode, Log Master reads all of the log records that are related to selected objects, regardless of whether the objects exist in the DB2 catalog. In normal operation (called current mode), Log Master reads the DB2 catalog to get information about the structure of selected objects. However, when an object is dropped, DB2 deletes all references to the object from the DB2 catalog. In overtime mode, Log Master must use other sources to obtain structure definitions.

    Use overtime mode when an object has been dropped (or dropped and re-created) and you need to retrieve log records that were written before the drop.

    The following considerations apply to overtime mode:

    • Use overtime mode only when you need to retrieve log records for dropped objects. Otherwise, BMC recommends running Log Master in current mode.

    • An overtime job typically uses more resources and experiences more processing overhead than a job that runs in current mode.

    • Log Master refers to dropped DB2 objects (or DB2 objects that have been dropped and re-created) as old objects. Overtime mode enables Log Master to process log records that are related to old objects.

    • In overtime mode, Log Master uses other sources to obtain structure definitions of old objects. You must perform at least one of the following types of extra processing to update these sources:

      • Run periodic jobs that update the Log Master Repository with structure definitions for your old objects (proactive method).

      • Run an extra log scan that updates the Repository immediately before you use overtime mode (reactive method).

      • (Deprecated with PTF BQU2282) Perform manual research and data entry to create an old objects data set. 
    • Log Master refers to the version of a DB2 object that exists between a create action and the following drop action as an instance of that object. Depending on the time frame of your log scan, and the times when an object is dropped and re-created, Log Master can encounter log records that are related to multiple instances of the same object.

    • By default, Log Master does not perform row completion processing when it runs in overtime mode. Optionally, you can specify the ATTEMPT COMPLETION keyword and the IMAGECOPY statement to direct Log Master to perform row completion processing using available image copy data sets.

    • Overtime mode is not the same as the Log Master automated drop recovery feature. Drop recovery restores dropped objects. Overtime mode enables you to retrieve data from the DB2 log that is associated with old objects, and to generate output that is based on the data. Overtime mode does not restore the old objects.

    • If you use the proactive method to update the Repository, schedule the update jobs to run before the following processing:

      • Regular production processing. For example, if you run a set of jobs every week, you should run a job to update the repository before you run the weekly processing jobs.

      • DB2 Load or Reorg actions that update compression dictionaries, or that might assign table rows to different record ID (RID) values.

    • If you update the Repository regularly, BMC Software recommends that you also run regular jobs to delete old or unusable data (particularly any compression dictionaries) from the Old Objects Table. Alternatively, you can delete (or display) information from this table by using an option on the Main Menu of the Log Master online interface.

      Warning

      If you store compression dictionaries in the repository, and then stop updating the repository, delete any residual compression dictionaries. Using an outdated compression dictionary from the repository can cause Log Master to fail with an S0C7 abend in member LZCOMPRS, which Log Master uses for decompressing data.

    • (Deprecated with PTF BQU2282) The old object structures and the user-provided resources for completion processing must be accurate and complete in order to process the old object log records correctly and avoid errors.

    • For system-time temporal tables, the Log Master Repository does not maintain the versioning relationship between the system-maintained table and its associated history table. Therefore, for overtime processing, your filter must include both the base table and the history table.

    For more information about overtime mode, see Processing objects over time.
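    The notion of object instances described above can be modeled with a short sketch (illustrative only, not Log Master code; the structures, names, and timestamps are invented). Each create/drop pair defines one instance, and a log record maps to whichever instance's lifetime contains the record's timestamp:

    ```python
    # Hypothetical model of "instances" of a dropped and re-created object.
    # Not Log Master internals; times are simple integers for illustration.
    class Instance:
        def __init__(self, name, created, dropped=None):
            self.name = name
            self.created = created
            self.dropped = dropped   # None means the instance still exists

        def covers(self, time):
            """True if this instance existed when the log record was written."""
            return self.created <= time and (self.dropped is None or time < self.dropped)

    def instance_for(instances, record_time):
        """Find the object instance that a log record belongs to, if any."""
        for inst in instances:
            if inst.covers(record_time):
                return inst
        return None  # record predates the first create, or falls in a drop gap

    history = [
        Instance("T1 (first instance)", created=10, dropped=50),
        Instance("T1 (re-created)", created=60),   # current instance
    ]
    print(instance_for(history, 30).name)   # first instance
    print(instance_for(history, 55))        # drop gap: None
    print(instance_for(history, 90).name)   # re-created instance
    ```

    A log scan whose time frame spans the drop at time 50 would encounter records for both instances, which is why overtime mode must resolve each record against the correct structure definition.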

