
LOGSCAN statement

The LOGSCAN statement determines how Log Master scans the DB2 log.

Use this statement to define

  • The portion of the DB2 log that Log Master scans 

  • The criteria Log Master uses to select log records

  • The types of output Log Master generates (reports or output files)

  • Whether Log Master creates a log mark

  • Other important characteristics
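    At a high level, these parts can be sketched as follows. This outline is illustrative only, not verified syntax; the angle-bracket placeholders stand for the definitions that are described later in this section, and LLOG is only one of the possible output keywords (REPORT, SQL, LOAD, DDL, COMMAND, or REPOS UPDATE can appear instead or in addition):

```
LOGSCAN
  <range definition>
  WHERE <selection criteria>
  LLOG <logical log output definition>
```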

 

LOGSCAN statement syntax diagram

    The following figure shows the basic, high-level syntax of the LOGSCAN statement.


    LOGSCAN statement option descriptions

    To define a valid log scan, you must specify at least one form of output or one form of action on the Repository. You can specify multiple LOGSCAN statements, but Log Master scans the DB2 log only once for each work ID, no matter how many log scans, reports, or other forms of output you define.

    REPORT

    Directs Log Master to create reports by using the information obtained in the log scan. For more information about the available reports, see LOGSCAN report definition.

    LLOG

    Directs Log Master to create a logical log file by using the information obtained in the log scan. A logical log is a readable version of the DB2 log that contains before and after images of database changes. For more information about generating a logical log, see LOGSCAN logical log output definition.

    SQL

    Directs Log Master to create an SQL output file by using the information obtained in the log scan. An SQL output file contains ANSI-standard SQL statements that either duplicate or reverse changes recorded in the DB2 log. For more information about generating an SQL output file, see LOGSCAN SQL file definition.

    LOAD

    Directs Log Master to create a load file that contains the log records specified in the log scan, formatted for a DB2 Load utility. Although Log Master uses the DB2 Load utility format to create the load file, the content is not the same as a load file created by a DB2 Unload utility. (The Log Master load file reflects activity over a period of time; a DB2 Unload utility creates a load file that reflects a given point in time.)

    A load file is usually accompanied by a load control file that contains parameters for a DB2 Load utility. For more information about generating a load file, see LOGSCAN load file definition.

    DDL

    Directs Log Master to create a data definition language (DDL) output file based on the information obtained in the log scan. A DDL output file contains ANSI-standard DDL statements that duplicate or reverse structural database changes recorded in the DB2 log. You can generate only MIGRATE or UNDO DDL; Log Master does not generate REDO DDL. For more information about generating a DDL output file, see LOGSCAN DDL file definition.

    Note

    When you generate DDL and command outputs, Log Master can generate a Catalog Activity report or update the Repository, but it cannot generate other forms of output in the same log scan (such as other reports or SQL). To generate other forms of output in the same job, include an additional log scan step.

    COMMAND

    Directs Log Master to create a command output file based on the information obtained in the log scan. A command output file contains BIND, REBIND, or FREE commands that duplicate or reverse structural database changes recorded in the DB2 log. You can generate only MIGRATE or UNDO commands; Log Master does not generate REDO commands. For more information, see LOGSCAN command file definition.

    Note

    When you generate DDL and command outputs, Log Master can generate a Catalog Activity report or update the Repository, but it cannot generate other forms of output in the same log scan (such as other reports or SQL). To generate other forms of output in the same job, include an additional log scan step.

    REPOS UPDATE

    With PTF BQU2282 applied

    Note

    With PTF BQU2282, the OLD OBJECTS keyword is deprecated, and the old object data set is no longer supported.

    (PTF BQU1717 applied) Directs Log Master to update the old objects Repository table (ALPOLDO) or the SYSCOPY Repository table (ALPSYSCP), or both. Each set of rows in the old objects table defines a DB2 object at a given point in time. When Log Master is running in overtime mode and it encounters log records related to an object that is not currently defined in the DB2 catalog (for example, after the object has been dropped), it can use the old objects information and the SYSCOPY records in the Repository to process those log records. Log Master can use compression dictionaries stored in the Repository when it runs in current or overtime mode unless the value of the REPOS keyword on the OPTION statement is NO.

    You can update the old objects information in the Repository from the following sources:

    • The DB2 log (used by default unless you specify the NOSCAN keyword)

    • The DB2 catalog (specify the PRIME FROM DB2 CATALOG keyword)

      (PTF BQU1717 applied) You can update the SYSCOPY information in the Repository from the following sources:

    • The DB2 log (used by default unless you specify the NOSCAN keyword)
    • The DB2 catalog, including SYSIBM.SYSCOPY tables (specify the PRIME FROM DB2 CATALOG keyword)
    • The information defined in the IMAGECOPY statements

    Values for REPOS UPDATE are as described in the following table. If you specify REPOS UPDATE without specifying one of these values, Log Master obtains old objects information from the DB2 log and does not include compression dictionaries or SYSCOPY information.
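    For example, a log scan that primes the old objects information from the DB2 catalog and also stores compression dictionaries combines the values that are described below. This is a sketch only; the range definition, WHERE clause, and any other required keywords are omitted:

```
LOGSCAN
  ...
  REPOS UPDATE PRIME FROM DB2CATALOG INCLUDE DICTIONARY
```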

    REPOS UPDATE (without ONLY clause)

    Directs Log Master to update the old objects table (ALPOLDO)

    This is the default.

    NOSCAN

    Directs Log Master not to use the DB2 log as a source for old objects and SYSCOPY information for the Repository update.

    PRIME FROM DB2CATALOG

    Directs Log Master to update the old objects and SYSCOPY information in the Repository with object descriptions and SYSCOPY records from the DB2 catalog

    Log Master searches for old objects information and SYSCOPY records in both the DB2 catalog and the DB2 log, unless you specify the NOSCAN keyword.

    You can store multiple versions of table objects in the Repository.

    You should run REPOS UPDATE PRIME FROM DB2 CATALOG NOSCAN with the OVERTIME update type after you update tables in the DB2 catalog, and with the SYSCOPY options before you use a MODIFY utility to remove records from the SYSCOPY tables.
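    These two recommended maintenance runs can be sketched as follows. Each statement would run in its own job with the appropriate update type; all other keywords are omitted, and this is an outline, not verified syntax:

```
LOGSCAN REPOS UPDATE PRIME FROM DB2 CATALOG NOSCAN
LOGSCAN REPOS UPDATE PRIME FROM DB2 CATALOG NOSCAN INCLUDE SYSCOPY
```

    The first form refreshes the old objects information after DB2 catalog changes; the second refreshes the SYSCOPY information before records are removed from the SYSCOPY tables.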

    REPOS UPDATE include options

    INCLUDE DICTIONARY

    Directs Log Master to store both old objects information and copies of compression dictionaries in the old objects table of the Repository. When a valid compression dictionary is not available, Log Master can use a copy stored in the Repository to interpret log records of compressed table spaces. Log Master can obtain dictionaries from the following sources, listed in order of preference:

    • Current table space

    • DB2 log

    • Old objects table of the Repository

    • Image copies of the table space

    For DB2 Load or Reorg actions that specify the DB2 KEEPDICTIONARY keyword, Log Master uses a record in the old objects table of the Repository as a placeholder to indicate that the corresponding compression dictionary was not rebuilt. The placeholder sets the values of REC_CNT and REC_SEQ to zero for the DB2 Load or Reorg action. Using a placeholder in this way ensures that for every rebuilt copy of a compression dictionary, Log Master stores only one dictionary object in the Repository. In addition, when using the dictionaries for decompression, Log Master loads only one copy in memory.

    INCLUDE DICTIONARY ONLY

    Directs Log Master to update the ALPOLDO table with compression dictionaries only; no old objects rows are added to ALPOLDO, and no SYSCOPY records are added to ALPSYSCP.

    INCLUDE SYSCOPY

    Directs Log Master to update the SYSCOPY table (ALPSYSCP) and ALPOLDO.

    INCLUDE SYSCOPY ONLY

    Directs Log Master to update the SYSCOPY table only.

    START range definition

    Specifies the start point for scanning for SYSCOPY records that were deleted from SYSIBM.SYSCOPY because of a DROP TABLESPACE or MODIFY. For more information, see Range definition.

    RBA, LRSN, DATE/TIME, and MARK are supported for REPOS UPDATE INCLUDE SYSCOPY.
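    For example, to capture SYSCOPY records that a later MODIFY would delete, you could give the SYSCOPY scan its own start point. The RBA value below is a hypothetical placeholder; an LRSN, DATE/TIME, or MARK start point could be used instead, and the rest of the statement is omitted:

```
LOGSCAN
  ...
  REPOS UPDATE INCLUDE SYSCOPY START RBA X'00001A2B3C4D'
```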

    The following tables show the action that Log Master takes depending on the value of REPOS UPDATE.

    Product action determined by REPOS UPDATE value

    Syntax specification

    Product actions

    REPOS UPDATE (no parameters)

    • Uses the WHERE clause to define the objects that are added to the old objects table

    • Obtains object information from only the DB2 log

    • Might not add a row for a defined object (if the scan range does not include a CREATE or DROP for that object)


    REPOS UPDATE PRIME FROM DB2CATALOG

    • Uses the WHERE clause to define the objects that are added to the old objects table

    • Obtains object information from both the DB2 log and the DB2 catalog

    • Adds a row for each unique CREATE and DROP RBA set that Log Master encounters for a defined object

    • When processing the DB2 catalog, adds a new row to the Repository if the current version of the DB2 catalog table does not match the current version of the Repository table


    REPOS UPDATE PRIME FROM DB2CATALOG NOSCAN

    • Uses the WHERE clause to define the objects that are added to the old objects table

    • Obtains object information from only the DB2 catalog


    Product action determined by REPOS UPDATE INCLUDE SYSCOPY value

    Syntax specification

    Product actions

    REPOS UPDATE INCLUDE SYSCOPY (no parameters)

    • Uses the WHERE clause to define the records that are added to the ALPSYSCP table
    • Obtains impacted SYSCOPY records from the DB2 log only, within the specified logscan range
    • Does not impact SYSCOPY log activity resulting from a DROP TABLESPACE or MODIFY

    REPOS UPDATE PRIME FROM DB2CATALOG INCLUDE SYSCOPY

    • Uses the WHERE clause to define the records that are added to the ALPSYSCP table
    • Obtains impacted SYSCOPY records from both the DB2 log and the DB2 catalog, within the specified logscan range
    • Does not impact SYSCOPY log activity resulting from a DROP TABLESPACE or MODIFY

    REPOS UPDATE PRIME FROM DB2CATALOG NOSCAN INCLUDE SYSCOPY

    • Uses the WHERE clause to define the records that are added to the ALPSYSCP table
    • Obtains impacted SYSCOPY records from the DB2 catalog only, within the specified logscan range

    REPOS UPDATE INCLUDE SYSCOPY START <range definition>

    • Uses the WHERE clause to define the records that are added to the ALPSYSCP table
    • Obtains impacted SYSCOPY records from the DB2 log only, within the specified logscan range
    • Impacts SYSCOPY log activity resulting from a DROP TABLESPACE or MODIFY if within the specified START point and TO point

    Note the following points regarding the REPOS UPDATE keyword:

    • Log Master cannot update the Repository if the value of the REPOS keyword on the OPTION statement is NO.

    • To update the old objects table, SYSCOPY table, or both in the Repository, you must specify the following additional keywords:

      • Specify the DB2CATALOG keyword of the LOGSCAN statement as YES

      • Specify the EXECMODE keyword of the OPTION statement as OVERTIME or ensure that the value of the EXECMODE installation option is OVERTIME
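    The two requirements above can be sketched as follows (the layout is illustrative, and the remaining REPOS UPDATE values and other keywords are omitted):

```
OPTION
  EXECMODE OVERTIME
LOGSCAN
  DB2CATALOG YES
  REPOS UPDATE ...
```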

    Note

    To keep the Repository current, run REPOS UPDATE PRIME FROM DB2 CATALOG NOSCAN with the required update type, OVERTIME or SYSCOPY, after you change old objects in the DB2 catalog or before you perform a Modify on SYSIBM.SYSCOPY.

    • When you specify REPOS UPDATE, Log Master can generate a DDL output file or a Catalog Activity report in the same log scan, but it cannot generate other forms of output, such as other reports or SQL. To generate other forms of output in the same job, include an additional, separate log scan step. You cannot include two REPOS UPDATE log scans in the same work ID.

    • Log Master creates objects in the ALPOLDO table only for compression dictionaries that are rebuilt by DB2, and adds only one copy of an unchanged compression dictionary to the table.

    • Schedule jobs that update the Repository to run before

      • Regular production processing. For example, if you run a set of jobs every week, you should run a job to update the Repository before you run the weekly processing jobs.

      • DB2 Load or Reorg actions that update compression dictionaries or that might assign table rows to different record ID (RID) values

    • If you update the Repository regularly, BMC recommends that you also run regular jobs to delete old or unusable data from the old objects and SYSCOPY tables. Alternatively, you can delete (or display) information from these tables by using an option on the Main Menu of the Log Master online interface.

    • Log Master cannot update the Repository when the input source for your log scan is a logical log file generated by Log Master or another program (INPUT LLOG).

    • When you store compression dictionaries in the Repository, you should

      • Run periodic jobs that update the Repository with current dictionaries

        Be sure to update the Repository before you use a DB2 Modify utility to delete any information about selected objects from the SYSIBM.SYSCOPY table.

      • Delete any rows from the Repository that represent old compression dictionaries if you decide to stop storing dictionaries in the Repository

        If you do not delete the old dictionary rows, and you use a Modify utility to delete information from the SYSCOPY table, Log Master can use an incorrect dictionary from the Repository. For information about deleting rows from the Repository, see Maintaining the Repository.

    • You can update the old objects or SYSCOPY information in the Repository by using the Generate REPOS UPDATE JCL option on the Main Menu of the Log Master online interface.

    • DB2 does not store information in SYSIBM.SYSCOPY about utilities that are executed against the table space of the SYSIBM.SYSCOPY table; this information is available only in the DB2 log. BMC recommends that you include SYSIBM.SYSCOPY in your filter when you run REPOS UPDATE INCLUDE SYSCOPY to update the Log Master Repository. Doing so adds the SYSCOPY information for SYSIBM.SYSCOPY that is found in the log during the log scan to the Log Master Repository table.
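    Following this recommendation, a SYSCOPY update scan would add SYSIBM.SYSCOPY to its selection filter. The predicate form below is illustrative only; see the WHERE clause documentation for the exact predicate syntax:

```
LOGSCAN
  REPOS UPDATE INCLUDE SYSCOPY
  WHERE TABLE NAME = SYSIBM.SYSCOPY OR <existing selection criteria>
```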

    For more information about the old objects table, the Repository, and the associated batch syntax, see Processing objects over time.

    With PTF BQU1717 applied

    (PTF BQU1717 applied) Directs Log Master to update the old objects Repository table (ALPOLDO) or the SYSCOPY Repository table (ALPSYSCP), or both. Each set of rows in the old objects table defines a DB2 object at a given point in time. When Log Master is running in overtime mode and it encounters log records related to an object that is not currently defined in the DB2 catalog (for example, after the object has been dropped), it can use the old objects information and the SYSCOPY records in the Repository to process those log records. Log Master can use compression dictionaries stored in the Repository when it runs in current or overtime mode unless the value of the REPOS keyword on the OPTION statement is NO.

    You can update the old objects information in the Repository from the following sources:

    • The DB2 log (used by default unless you specify the NOSCAN keyword)

    • The DB2 catalog (specify the PRIME FROM DB2 CATALOG keyword)

    • An old objects data set

      The old objects data set is used only when the OVERTIME update type is specified.

      You can create an old objects data set to hold structure definitions of DB2 objects that are not currently defined in the DB2 catalog (old objects). To define the data set, use special syntax in the old objects data set and specify the name on the OPTION statement. For more information, see Old objects data set syntax and OLD OBJECTS dataSetName.
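      For example, the old objects data set is named on the OPTION statement with the OLD OBJECTS keyword. The data set name below is a hypothetical placeholder:

```
OPTION
  OLD OBJECTS 'HLQ.LOGMASTR.OLDOBJ'
```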

    (PTF BQU1717 applied) You can update the SYSCOPY information in the Repository from the following sources:

    • The DB2 log (used by default unless you specify the NOSCAN keyword)
    • The DB2 catalog, including SYSIBM.SYSCOPY tables (specify the PRIME FROM DB2 CATALOG keyword)
    • The information defined in the IMAGECOPY statements

    Values for REPOS UPDATE are as described in the following table. If you specify REPOS UPDATE without specifying one of these values, Log Master obtains old objects information from the DB2 log and does not include compression dictionaries or SYSCOPY information.

    REPOS UPDATE (without ONLY clause)

    Directs Log Master to update the old objects table (ALPOLDO)

    This is the default.

    NOSCAN

    Directs Log Master not to use the DB2 log as a source for old objects and SYSCOPY information for the Repository update.

    PRIME FROM DB2CATALOG

    Directs Log Master to update the old objects and SYSCOPY information in the Repository with object descriptions and SYSCOPY records from the DB2 catalog

    Log Master searches for old objects information and SYSCOPY records in both the DB2 catalog and the DB2 log, unless you specify the NOSCAN keyword.

    You can store multiple versions of table objects in the Repository.

    You should run REPOS UPDATE PRIME FROM DB2 CATALOG NOSCAN with the OVERTIME update type after you update tables in the DB2 catalog, and with the SYSCOPY options before you use a MODIFY utility to remove records from the SYSCOPY tables.

    REPOS UPDATE include options

    INCLUDE DICTIONARY

    Directs Log Master to store both old objects information and copies of compression dictionaries in the old objects table of the Repository. When a valid compression dictionary is not available, Log Master can use a copy stored in the Repository to interpret log records of compressed table spaces. Log Master can obtain dictionaries from the following sources, listed in order of preference:

    • Current table space

    • DB2 log

    • Old objects table of the Repository

    • Image copies of the table space

    For DB2 Load or Reorg actions that specify the DB2 KEEPDICTIONARY keyword, Log Master uses a record in the old objects table of the Repository as a placeholder to indicate that the corresponding compression dictionary was not rebuilt. The placeholder sets the values of REC_CNT and REC_SEQ to zero for the DB2 Load or Reorg action. Using a placeholder in this way ensures that for every rebuilt copy of a compression dictionary, Log Master stores only one dictionary object in the Repository. In addition, when using the dictionaries for decompression, Log Master loads only one copy in memory.

    INCLUDE DICTIONARY ONLY

    Directs Log Master to update the ALPOLDO table with compression dictionaries only; no old objects rows are added to ALPOLDO, and no SYSCOPY records are added to ALPSYSCP.

    INCLUDE SYSCOPY

    Directs Log Master to update the SYSCOPY table (ALPSYSCP) and ALPOLDO.

    INCLUDE SYSCOPY ONLY

    Directs Log Master to update the SYSCOPY table only.

    START range definition

    Specifies the start point for scanning for SYSCOPY records that were deleted from SYSIBM.SYSCOPY because of a DROP TABLESPACE or MODIFY. For more information, see Range definition.

    RBA, LRSN, DATE/TIME, and MARK are supported for REPOS UPDATE INCLUDE SYSCOPY.

    The following tables show the action that Log Master takes depending on the value of REPOS UPDATE and whether you specify an old objects data set.


    Product action determined by REPOS UPDATE value

    Syntax specification

    Old objects data set

    Product actions

    REPOS UPDATE (no parameters)

    Not specified
    • Uses the WHERE clause to define the objects that are added to the old objects table

    • Obtains object information from only the DB2 log

    • Might not add a row for a defined object (if the scan range does not include a CREATE or DROP for that object)

    REPOS UPDATE (no parameters)

    Specified
    • Adds all objects specified in the old objects data set to the old objects table

    • Adds an additional row for any objects found in the DB2 log that meet the criteria of the WHERE clause (use NOSCAN to prevent scan of DB2 log)

    REPOS UPDATE PRIME FROM DB2CATALOG

    Not specified
    • Uses the WHERE clause to define the objects that are added to the old objects table

    • Obtains object information from both the DB2 log and the DB2 catalog

    • Adds a row for each unique CREATE and DROP RBA set that Log Master encounters for a defined object

    • When processing the DB2 catalog, adds a new row to the Repository if the current version of the DB2 catalog table does not match the current version of the Repository table

    REPOS UPDATE PRIME FROM DB2CATALOG

    Specified
    • Adds all objects specified in the old objects data set to the old objects table

    • Adds an additional row for any objects found in either the DB2 log or the DB2 catalog that meet the WHERE clause (use NOSCAN to prevent scan of DB2 log)

    • Adds an additional row for any new versioned object found in the DB2 catalog that meets the WHERE clause

    REPOS UPDATE PRIME FROM DB2CATALOG NOSCAN

    Not specified
    • Uses the WHERE clause to define the objects that are added to the old objects table

    • Obtains object information from only the DB2 catalog

    REPOS UPDATE PRIME FROM DB2CATALOG NOSCAN

    Specified
    • Adds all objects specified in the old objects data set to the old objects table

    • Adds an additional row for any objects found in the DB2 catalog that meet the criteria of the WHERE clause

    Product action determined by REPOS UPDATE INCLUDE SYSCOPY value

    Syntax specification

    Product actions

    REPOS UPDATE INCLUDE SYSCOPY (no parameters)

    • Uses the WHERE clause to define the records that are added to the ALPSYSCP table
    • Obtains impacted SYSCOPY records from the DB2 log only, within the specified logscan range
    • Does not impact SYSCOPY log activity resulting from a DROP TABLESPACE or MODIFY

    REPOS UPDATE PRIME FROM DB2CATALOG INCLUDE SYSCOPY

    • Uses the WHERE clause to define the records that are added to the ALPSYSCP table
    • Obtains impacted SYSCOPY records from both the DB2 log and the DB2 catalog, within the specified logscan range
    • Does not impact SYSCOPY log activity resulting from a DROP TABLESPACE or MODIFY

    REPOS UPDATE PRIME FROM DB2CATALOG NOSCAN INCLUDE SYSCOPY

    • Uses the WHERE clause to define the records that are added to the ALPSYSCP table
    • Obtains impacted SYSCOPY records from the DB2 catalog only, within the specified logscan range

    REPOS UPDATE INCLUDE SYSCOPY START <range definition>

    • Uses the WHERE clause to define the records that are added to the ALPSYSCP table
    • Obtains impacted SYSCOPY records from the DB2 log only, within the specified logscan range
    • Impacts SYSCOPY log activity resulting from a DROP TABLESPACE or MODIFY if within the specified START point and TO point

    Note the following points regarding the REPOS UPDATE keyword:

    • Log Master cannot update the Repository if the value of the REPOS keyword on the OPTION statement is NO.

    • To update the old objects table, SYSCOPY table, or both in the Repository, you must specify the following additional keywords:

      • Specify the DB2CATALOG keyword of the LOGSCAN statement as YES

      • Specify the EXECMODE keyword of the OPTION statement as OVERTIME or ensure that the value of the EXECMODE installation option is OVERTIME

    Note

    To keep the Repository current, run REPOS UPDATE PRIME FROM DB2 CATALOG NOSCAN with the required update type, OVERTIME or SYSCOPY, after you change old objects in the DB2 catalog or before you perform a Modify on SYSIBM.SYSCOPY.

    • When you specify REPOS UPDATE, Log Master can generate a DDL output file or a Catalog Activity report in the same log scan, but it cannot generate other forms of output, such as other reports or SQL. To generate other forms of output in the same job, include an additional, separate log scan step. You cannot include two REPOS UPDATE log scans in the same work ID.

    • Log Master creates objects in the ALPOLDO table only for compression dictionaries that are rebuilt by DB2, and adds only one copy of an unchanged compression dictionary to the table.

    • Schedule jobs that update the Repository to run before

      • Regular production processing. For example, if you run a set of jobs every week, you should run a job to update the Repository before you run the weekly processing jobs.

      • DB2 Load or Reorg actions that update compression dictionaries or that might assign table rows to different record ID (RID) values

    • If you update the Repository regularly, BMC recommends that you also run regular jobs to delete old or unusable data from the old objects and SYSCOPY tables. Alternatively, you can delete (or display) information from these tables by using an option on the Main Menu of the Log Master online interface.

    • Log Master cannot update the Repository when the input source for your log scan is a logical log file generated by Log Master or another program (INPUT LLOG).

    • When you store compression dictionaries in the Repository, you should

      • Run periodic jobs that update the Repository with current dictionaries

        Be sure to update the Repository before you use a DB2 Modify utility to delete any information about selected objects from the SYSIBM.SYSCOPY table.

      • Delete any rows from the Repository that represent old compression dictionaries if you decide to stop storing dictionaries in the Repository

        If you do not delete the old dictionary rows, and you use a Modify utility to delete information from the SYSCOPY table, Log Master can use an incorrect dictionary from the Repository. For information about deleting rows from the Repository, see Maintaining the Repository.

    • You can update the old objects or SYSCOPY information in the Repository by using the Generate REPOS UPDATE JCL option on the Main Menu of the Log Master online interface.

    • DB2 does not store information in SYSIBM.SYSCOPY about utilities that are executed against the table space of the SYSIBM.SYSCOPY table; this information is available only in the DB2 log. BMC recommends that you include SYSIBM.SYSCOPY in your filter when you run REPOS UPDATE INCLUDE SYSCOPY to update the Log Master Repository. Doing so adds the SYSCOPY information for SYSIBM.SYSCOPY that is found in the log during the log scan to the Log Master Repository table.

    For more information about the old objects table, the Repository, and the associated batch syntax, see Processing objects over time.


    REPOS UPDATE description before PTF BQU1717

    Directs Log Master to update or insert rows in the old objects table (ALPOLDO) of the Repository. Each set of rows in the old objects table defines a DB2 object at a given point in time. When Log Master is running in overtime mode and it encounters log records related to an object that is not currently defined in the DB2 catalog (for example, after the object has been dropped), it can use the old objects information and the SYSCOPY records in the Repository to process those log records. Log Master can use compression dictionaries stored in the Repository when it runs in current or overtime mode unless the value of the REPOS keyword on the OPTION statement is NO.

    You can update the old objects information in the Repository from the following sources:

    • The DB2 log (used by default unless you specify the NOSCAN keyword)

    • The DB2 catalog (specify the PRIME FROM DB2 CATALOG keyword)

    • An old objects data set
      You can create an old objects data set to hold structure definitions of DB2 objects that are not currently defined in the DB2 catalog (old objects). To define the data set, use special syntax in the old objects data set and specify the name on the OPTION statement.
      For more information, see Old objects data set syntax and OLD OBJECTS data set.

    Values for REPOS UPDATE are as follows. If you specify REPOS UPDATE without specifying one of these values, Log Master obtains old objects information from the DB2 log and does not include compression dictionaries.

    NOSCAN

    Directs Log Master not to use the DB2 log as a source for the old objects information in the Repository.

    PRIME FROM DB2CATALOG

    Directs Log Master to update the old objects information in the Repository with object descriptions from the DB2 catalog. Log Master searches for old objects information in both the DB2 catalog and the DB2 log unless you specify the NOSCAN keyword. Consider the following:

    • You can store multiple versions of table objects in the Repository.

    • You should run REPOS UPDATE PRIME FROM DB2 CATALOG NOSCAN after you update tables in the DB2 catalog.

    INCLUDE DICTIONARY

    Directs Log Master to store both old objects information and copies of compression dictionaries in the old objects table of the Repository. When a valid compression dictionary is not available, Log Master can use a copy stored in the Repository to interpret log records of compressed table spaces. Log Master can obtain dictionaries from the following sources, listed in order of preference:

    • Current table space

    • DB2 log

    • Old objects table of the Repository

    • Image copies of the table space

    For DB2 Load or Reorg actions that specify the DB2 KEEPDICTIONARY keyword, Log Master uses a record in the old objects table of the Repository as a placeholder to indicate that the corresponding compression dictionary was not rebuilt. The placeholder sets the values of REC_CNT and REC_SEQ to zero for the DB2 Load or Reorg action. Using a placeholder in this way ensures that for every rebuilt copy of a compression dictionary, Log Master stores only one dictionary object in the Repository. In addition, when using the dictionaries for decompression, Log Master loads only one copy in memory.


    The following table shows the action that Log Master takes depending on the value of REPOS UPDATE and whether you specify an old objects data set.

    Product action determined by REPOS UPDATE value

    REPOS UPDATE (no parameters), old objects data set not specified:

    • Uses the WHERE clause to define the objects that are added to the old objects table

    • Obtains object information from only the DB2 log

    • Might not add a row for a defined object (if the scan range does not include a CREATE or DROP for that object)

    REPOS UPDATE (no parameters), old objects data set specified:

    • Adds all objects specified in the old objects data set to the old objects table

    • Adds an additional row for any objects found in the DB2 log that meet the criteria of the WHERE clause (use NOSCAN to prevent the scan of the DB2 log)

    REPOS UPDATE PRIME FROM DB2CATALOG, old objects data set not specified:

    • Uses the WHERE clause to define the objects that are added to the old objects table

    • Obtains object information from both the DB2 log and the DB2 catalog

    • Adds a row for each unique CREATE and DROP RBA set that Log Master encounters for a defined object

    • When processing the DB2 catalog, adds a new row to the Repository if the current version of the DB2 catalog table does not match the current version of the Repository table

    REPOS UPDATE PRIME FROM DB2CATALOG, old objects data set specified:

    • Adds all objects specified in the old objects data set to the old objects table

    • Adds an additional row for any objects found in either the DB2 log or the DB2 catalog that meet the criteria of the WHERE clause (use NOSCAN to prevent the scan of the DB2 log)

    • Adds an additional row for any new versioned object found in the DB2 catalog that meets the criteria of the WHERE clause

    REPOS UPDATE PRIME FROM DB2CATALOG NOSCAN, old objects data set not specified:

    • Uses the WHERE clause to define the objects that are added to the old objects table

    • Obtains object information from only the DB2 catalog

    REPOS UPDATE PRIME FROM DB2CATALOG NOSCAN, old objects data set specified:

    • Adds all objects specified in the old objects data set to the old objects table

    • Adds an additional row for any objects found in the DB2 catalog that meet the criteria of the WHERE clause

    Note the following points regarding the REPOS UPDATE keyword:

    • Log Master cannot update the Repository if the value of the REPOS keyword on the OPTION statement is NO.

    • To update the old objects table in the Repository, you must specify the following additional keywords:

      • Specify the DB2CATALOG keyword of the LOGSCAN statement as YES

      • Specify the EXECMODE keyword of the OPTION statement as OVERTIME or ensure that the value of the EXECMODE installation option is OVERTIME

      Note

      (PTF BQU1148 applied) To keep the Repository current, update it after create, drop, and alter events to objects that are maintained in the Repository.

    • When you specify REPOS UPDATE, Log Master can generate a DDL output file or a Catalog Activity report in the same log scan, but it cannot generate other forms of output, such as other reports or SQL. To generate other forms of output in the same job, include an additional, separate log scan step. You cannot include two REPOS UPDATE log scans in the same work ID.

    • Log Master creates objects in the ALPOLDO table only for compression dictionaries that are rebuilt by DB2, and adds only one copy of an unchanged compression dictionary to the table.

    • Schedule jobs that update the Repository to run before

      • Regular production processing. For example, if you run a set of jobs every week, you should run a job to update the Repository before you run the weekly processing jobs.

      • DB2 Load or Reorg actions that update compression dictionaries or that might assign table rows to different record ID (RID) values

    • If you update the Repository regularly, BMC Software recommends that you also run regular jobs to delete old or unusable data from the old objects table. Alternatively, you can delete (or display) information from this table by using an option on the Main Menu of the Log Master online interface.

    • Log Master cannot update the Repository when the input source for your log scan is a logical log file generated by Log Master or another program (INPUT LLOG).

    • When you store compression dictionaries in the Repository, you should

      • Run periodic jobs that update the Repository with current dictionaries

        Be sure to update the Repository before you use a DB2 Modify utility to delete any information about selected objects from the SYSIBM.SYSCOPY table.

      • Delete any rows from the Repository that represent old compression dictionaries if you decide to stop storing dictionaries in the Repository

        If you do not delete the old dictionary rows, and you use a Modify utility to delete information from the SYSCOPY table, Log Master can use an incorrect dictionary from the Repository. For information about deleting rows from the Repository, see Maintaining the Repository.

    • You can update the old objects information in the Repository by using the Generate REPOS UPDATE JCL option on the Main Menu of the Log Master online interface.

    For more information about the old objects table, the Repository, and the associated batch syntax, see Processing objects over time.
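    As an illustration of how these keywords fit together, a Repository-update job step might contain SYSIN syntax along the following lines. This is a hedged sketch only: the WHERE clause is a placeholder, and the exact clause order must be confirmed against the LOGSCAN and OPTION syntax diagrams for your installation.

    ```
    OPTION EXECMODE OVERTIME
    LOGSCAN
      DB2CATALOG YES
      REPOS UPDATE PRIME FROM DB2CATALOG NOSCAN
      WHERE <search condition that names specific objects>
    ```

    Here DB2CATALOG YES and EXECMODE OVERTIME reflect the requirements listed above, and NOSCAN limits the old objects search to the DB2 catalog.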

    REPOS DELETE

    Directs Log Master to delete rows from supported tables within the Repository.

    Note

    (PTF BQU2282 applied) BMC has deprecated REPOS DELETE as part of the LOGSCAN statement. For more information, see REPOS DELETE statement (PTF BQU2282 applied).

    MARKSCAN

    Associates a symbolic name (called a log mark name) with the end point of your Range definition (the TO value). Use log marks to refer to a point in the log by using a name instead of an RBA/LRSN or a date and time value. Log Master keeps information about log marks in the Repository (in a table named ALPMARK). By using the MARKSCAN keyword, you can associate a log mark name with a point in the DB2 log other than the current RBA/LRSN value when your job starts executing.

    Values for MARKSCAN are as follows:

    logMarkName

    Designates a specific log mark. A log mark name created within a given job step cannot be referenced within that job step. For more information about log marks and log mark names, see MARK logMarkName.

    DESC 'description'

    Provides a description of the log mark name. The text can be up to 65 characters long and must be enclosed in quotation marks (''). Continue text by keying through column 72 and continuing in column 1 of the next line.
    QUIESCE

    Directs Log Master to use the DB2 Quiesce utility to generate a quiesce point for the table spaces included in the log scan range. When you specify QUIESCE, Log Master overrides the TO value of your Range definition and sets the log end point to the current time. The log mark RBA/LRSN matches the quiesce RBA/LRSN used in the DB2 command. Without the QUIESCE keyword, Log Master chooses a log point that corresponds closely to the TO value.

    TABLESPACESET

    Directs Log Master to establish a quiet point for the specified table spaces, all referentially related table spaces, and any related large object (LOB) table spaces or base table spaces that are not part of the referential integrity set.

    Note

    When using MARKSCAN QUIESCE,

    • Ensure that your WHERE clause or filter refers to at least one specific DB2 object (such as a table name or a column name). This action ensures that you define (either directly or indirectly) a set of table spaces that Log Master can pass to the DB2 Quiesce utility.

    • If the end point of a time frame is set to a log mark previously created with MARKSCAN QUIESCE, the resulting log scan contains only completed transactions.
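    As a hedged illustration (the log mark name and description are hypothetical, and whether TABLESPACESET follows QUIESCE directly should be confirmed against the syntax diagram), a MARKSCAN definition might look like:

    ```
    MARKSCAN WEEKLY01
      DESC 'Quiet point before weekly batch processing'
      QUIESCE TABLESPACESET
    ```

    With QUIESCE specified, the log mark RBA/LRSN matches the quiesce RBA/LRSN rather than the TO value of the Range definition.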

    Sort file size parameters

    Specifies the estimated size of the data that Log Master must sort during a given log scan. You can also specify the size of the data to be sorted during an entire job on the SORTOPTS statement.

    These parameters specify either the estimated size of the data to be sorted or a technique that Log Master uses to calculate the estimate. For more information, see Sort file size parameters.

    DB2CATALOG

    Determines whether Log Master includes DB2 catalog information from log records in the generated output.

    NO

    Directs Log Master to exclude DB2 catalog information as it generates output.

    YES

    Directs Log Master to include DB2 catalog information as it generates output. You must set DB2CATALOG to YES to update the Repository (REPOS UPDATE), to generate a Catalog Activity report or a command output file, or to generate data definition language (DDL) statements.

    The DB2CAT installation option affects the action of the DB2CATALOG keyword. The person installing Log Master can set DB2CAT to NEVER, which prevents Log Master from including any DB2 catalog information in the generated output.

    If your job operates on objects stored in the DB2 catalog, Log Master must perform completion processing on DB2 catalog log records. Because of the large number of log records related to the catalog, this processing can cause your job to run longer and require more resources than a job that does not read the DB2 catalog. If you frequently operate on objects stored in the catalog, you can improve performance by creating more frequent image copies of the catalog or by defining tables in the DB2 catalog with Data Capture Changes (DCC).

    DB2DIRECTORY

    Determines whether Log Master includes DB2 directory information from log records in the generated output.

    DB2 directory information is limited to Quiet Point and Summary All Activity report outputs.

    NO

    Directs Log Master to exclude DB2 directory information as it generates output.
    YES

    Directs Log Master to include DB2 directory information as it generates output.

    The DB2DIR installation option affects the action of the DB2DIRECTORY keyword. The person installing Log Master can set DB2DIR to NEVER, which prevents Log Master from including any DB2 directory information in the generated output.

    Scan range definition

    Specifies the part of the DB2 log that Log Master scans for information. The scan range definition can also specify a start point where Log Master begins generating REDO information (often called a recovery point). The scan range definition is not required when you specify the REPOS DELETE keyword.

    For more information about specifying the range of the log scan, see LOGSCAN scan range definition.

    FILTER

    Specifies an existing filter that Log Master uses to select log records from within the log scan range. Normally, you use the online interface to create filters. Log Master stores them in the Repository. If you use the task-oriented dialogs of the online interface, the filter name defaults to be the same as the work ID name.

    userID.filterName

    The filterName can be up to 18 characters long. The first character cannot be numeric, and any additional characters must be alphanumeric characters or national characters ($, #, or @).

    Use the FILTER keyword to refer to a stored filter that already exists. To define new selection criteria within the current job, use the WHERE clause of the LOGSCAN statement.

    You can qualify filterName with a user ID. If you do not specify a user ID, the default ID is the TSO prefix or TSO user ID that submitted the job. For more information about creating filters, see the section about defining the log scan step in the Log Master for DB2 documentation.
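    For illustration, a log scan that selects records through a stored filter might begin as follows (the user ID DBA01 and filter name PAYROLL1 are hypothetical; the filter must already exist in the Repository):

    ```
    LOGSCAN
      FILTER DBA01.PAYROLL1
    ```

    Omitting the DBA01 qualifier would cause Log Master to default to the TSO prefix or TSO user ID that submitted the job.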

    WHERE Search condition definition

    Defines the search conditions that Log Master uses to select log records from within the log scan range. The conditions specified in a WHERE clause relate to database structures or characteristics, as opposed to a scan range that relates to part of the log.

    Use the WHERE clause to define new selection criteria within the current job. To use a stored filter that already exists, use the FILTER keyword of the LOGSCAN statement. For more information about specifying a search condition, see LOGSCAN search condition definition.

    ONGOING HANDLE handleID

    Defines a unique handle ID that Log Master uses to identify this log scan as an ongoing process (a log scan that is run repeatedly, with each start point dependent on the end of the previous log scan). For more information, see Ongoing log scans.

    Log Master associates the different runs of an ongoing log scan by using the handle ID (along with the combination of the work ID and the user ID specified in a job’s WORKID statement). Use the online interface to create a work ID that contains an ongoing log scan. When you do, the online interface ensures that the Repository contains all necessary information and generates a unique combination of handle ID, work ID and user ID.

    To rerun a job that contains an ongoing log scan, use either the USE or RERUN keywords. Both keywords can use the run sequence number of a previous run of an ongoing job. Log Master displays the run sequence number for a job in message BMC097494.

    USE RUNSEQ runSeq | LASTRUN

    USE scans the log from the start point of a previous run that you specify with a run sequence number and continues to the end point that you specify in your Range definition (such as TO CURRENT). Use either a run sequence number or the LASTRUN keyword to specify a previous run. Log Master includes any transactions that are open at the beginning of the log scan.

    When you specify USE LASTRUN, Log Master scans the log from the start point of the most recent run that completed successfully and continues to the end point or limit that you specify in your Range definition. The following diagram shows that by entering USE with the runSeq value from RUN 2, the scan range includes the open transactions represented by Open Transactions (1) and Open Transactions (2).


    USE RUNSEQ with the runSeq flow diagram

    RERUN RUNSEQ runSeq | LASTRUN

    RERUN scans the log from the start point of a previous run that you specify with a run sequence number and continues to the end point of that run. Specify the previous run by using either a run sequence number or the LASTRUN keyword. Log Master includes any transactions open at the start of the range, but excludes transactions that are open at the end of the range.

    The following figure shows that by entering RERUN RUNSEQ with the runSeq value from RUN 2, the scan range includes the open transactions represented by Open Transactions (1), but not the open transactions represented by Open Transactions (2).

    RERUN RUNSEQ with the runSeq flow diagram

    When you specify RERUN LASTRUN, Log Master scans the log from the start point of the most recent run that completed successfully and continues to the end point of that run.
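    As an illustrative sketch (the handle ID is hypothetical and the clause order should be confirmed against the syntax diagram), re-scanning from the start point of run 02 of an ongoing log scan might be coded as:

    ```
    LOGSCAN
      FROM 050 TO CURRENT
      ONGOING HANDLE MIGRATE1
        USE RUNSEQ 02
    ```

    With USE, the scan ends at the Range definition end point (TO CURRENT); with RERUN RUNSEQ 02, it would instead end at the end point of run 02.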

    RESET

    Directs Log Master to establish a new start point for an ongoing log scan. Normally, Log Master adjusts the start point of an ongoing run based on the end point of the previous run (for more information, see Ongoing log scans). Use this keyword to prevent Log Master from adjusting the start point (for example, if DB2 log files for the period between the previous run and the current time are corrupted or unavailable).

    When you specify RESET:

    • Log Master uses the start point that you specify in the Range Definition (the FROM keyword)

    • Log Master ignores any information stored in the Repository about open transactions from previous runs of the specified ongoing log scan (information remains in the Repository unless you specify the PURGE keyword)

    • Log Master does not reset the run sequence number (the run sequence number of the current log scan is one more than the number of the previous run)

    • You cannot specify the USE or RERUN keywords for the same run

    The PURGE keyword causes Log Master to delete information related to the current ongoing log scan from the Log Master Repository (based on the combination of the user ID, work ID, and ongoing handle ID). Log Master does not reset the run sequence number. You must specify PURGE if your new end point is before the end point of the previous ongoing run.
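    For illustration (the handle ID is hypothetical, and whether PURGE directly follows RESET should be confirmed against the syntax diagram), establishing a fresh start point for an ongoing log scan might look like:

    ```
    LOGSCAN
      FROM 300 TO CURRENT
      ONGOING HANDLE MIGRATE1
        RESET PURGE
    ```

    RESET makes Log Master honor the FROM value instead of adjusting it, and PURGE deletes the recorded open-transaction information for this handle ID from the Repository.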

    Be aware of the following points regarding USE RUNSEQ or RERUN RUNSEQ:

    • If you specify the RERUN keyword, do not specify the QUIESCE value of the MARKSCAN keyword. This value sets the end point to the current log location. With the RERUN keyword, your end point is normally not at the current time.

    • If you specify the RERUN keyword, Log Master ignores the specified limit.

    • If you specify the USE keyword to re-use the start point of a previous log scan, the end point defined by the OR LIMIT keyword must be after the end point of the previous log scan.

    • To honor the USE and RERUN keywords, Log Master obtains the start point of your specified run from the Repository by reading the end point of the run before your specified run. Because of this behavior, use caution when you specify these keywords again for subsequent log scans. In rare situations, Log Master can use a range for the log scan that you do not expect, or encounter errors. The following table shows an example:

    Example of possible errors with USE RUNSEQ or RERUN RUNSEQ

    Run seq | Ongoing syntax                      | Starting RBA/LRSN | Adjusted scan range (RBA/LRSN) | Result
    --------|-------------------------------------|-------------------|--------------------------------|----------------------------------
    01      | FROM 050 TO CURRENT                 | 100               | 050 - 100                      | Normal
    02      | FROM 050 TO CURRENT                 | 200               | 100 - 200                      | Normal
    03      | FROM 050 TO CURRENT                 | 300               | 200 - 300                      | Normal
    04      | FROM 050 TO CURRENT USE RUNSEQ 02   | 400               | 100 - 400                      | Normal (re-scan of previous runs)
    05      | FROM 050 TO CURRENT RERUN RUNSEQ 03 | 500               | 200 - 300                      | Normal (re-scan of run 03)
    06      | FROM 050 TO CURRENT RERUN RUNSEQ 05 | 600               | 400 - 300                      | Error (invalid scan range)

    Ongoing log scans

    Log Master provides the ability to define an ongoing log scan (one that is run repeatedly, with each start point dependent on the end of the previous log scan).

    With this feature, you can scan the logs multiple times, changing the range of the log scan for each run, but not changing the SYSIN syntax of your job step. In many environments, ongoing log scans extract data for migration to other databases or platforms.

    For an ongoing log scan, Log Master keeps track of each run. Log Master also keeps track of any transactions (units of recovery) that start within the current log scan, but are still open at the end point. At the start of each run, Log Master automatically changes the start point so that it begins at the lowest RBA/LRSN address in the set of recorded open transactions from the previous run of the log scan. Log Master issues message BMC097778 to display any transactions (URIDs) that were open at the end of the previous run.

    For ongoing log scans, Log Master ensures that

    • any transactions that are open at the end of the current log scan will be included in a subsequent run of the log scan

    • any transactions that were completed within a previous log scan are not processed twice, even though Log Master might scan part of the same log range again

    When you work with ongoing log scans, remember the following points:

    • Use the online interface to create a work ID that contains an ongoing log scan. The online interface ensures that an ongoing job’s combination of handle ID, work ID name, and user ID is unique. If you define your own handle ID outside of the online interface, ensure that you define a unique combination of these three items. (Log Master associates the different runs of an ongoing log scan by using the handle ID, along with the combination of the work ID name and the user ID specified in a job’s WORKID statement).

    • When possible, schedule ongoing log scans to run before running any utilities that can change the location of rows within a table space. (For example, a DB2 Reorg utility can change the location of a row, so that the row associated with a record ID (RID) value is different than the row associated with that RID value in previous log records). Events that change the location of rows limit the Log Master options for row completion processing. When the log scan does not include such an event, Log Master is more likely to select an option for row completion processing that results in better product performance.

    • When possible, schedule ongoing log scans to run before executing any data definition language (DDL) statements that change the structure of a table (for example, an ALTER COLUMN SET DATA TYPE statement) and any subsequent database reorganizations. This action enables Log Master to avoid scanning log records for multiple versions of a DB2 table. When a log scan includes multiple versions of a table, Log Master performs additional processing to keep track of the versions and convert log records from previous versions to the current version of the table. Avoiding this additional processing can improve product performance.

    • You can reset the start point of an ongoing log scan (for more information, see ONGOING HANDLE handleID ).

    • Log Master does not support ongoing log scans with an input source of logical log files.

    • Log Master performs row completion processing differently for a work ID that contains an ongoing log scan. When an ongoing log scan requires access to the current table space, but updates to the table space have not yet been written to DASD storage, Log Master can defer completing some log records during the current log scan so that they can be completed in a subsequent log scan.

    • Log Master responds to return codes greater than 8 (error) differently for a work ID that contains an ongoing log scan. For more information, see Log Master termination processing.

    • Consider using ongoing log scans to periodically update the Log Master Repository with compression dictionaries (to avoid mounting image copies), or with structure definitions for old objects (to enable processing in overtime mode).
