SQL type/output definition

This topic describes the SQL type/output definition syntax of the LOGSCAN statement.

This syntax specifies the type of SQL statements included in and the location of the output SQL file that Log Master creates using the log records selected in your log scan.

The following figure shows the SQL type/output definition syntax.



MIGRATE

Directs Log Master to generate SQL used for migration, duplicating the SQL that was originally executed. The generated SQL reflects the database changes contained in the log records specified by your WHERE clause or filter. For example, if the original SQL contains an INSERT statement, the MIGRATE SQL contains the same statement.

UNDO

Directs Log Master to generate UNDO SQL, reversing the statement type of the SQL that was originally executed. The generated SQL reverses the database changes contained in the log records specified by your WHERE clause or filter. For example, if the original SQL contains an INSERT statement, the UNDO SQL contains a corresponding DELETE statement.

Log Master cannot reverse the database changes that result from a mass delete action performed on a table defined with Data Capture None (DCN) if the table is stored in a segmented or universal table space. (For more information, see Considerations for output files and SQL.)

  • EXECUTE: Directs Log Master to execute the generated UNDO SQL immediately after it is generated, without any review or intervention. This keyword is most frequently used in test environments where any unintended consequences do not affect important production data.
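
For illustration, a minimal UNDO definition that executes the generated SQL immediately might be sketched as follows. The keyword layout is an assumption based on the descriptions in this topic; the syntax diagram above is authoritative, and all bracketed operands are placeholders.

  LOGSCAN
     FROM <start of range> TO <end of range>
     WHERE <selection criteria naming the affected tables>
     SQL TYPE UNDO EXECUTE
        DATASET <data set name and allocation parameters>

Because EXECUTE applies the UNDO SQL without review, a definition like this is appropriate only in a test environment, as noted above.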

REDO

Directs Log Master to generate REDO SQL, so that you can reapply a set of transactions after a recovery action, while omitting a set of 'problem' transactions defined by your Range definition and WHERE clause. For example, assume some changes have been made to a set of table spaces in error. You can

  1. Define the problem changes with a Range definition and WHERE clause

  2. Generate REDO SQL

  3. Recover the set of table spaces to a point in time before the problem changes

  4. Execute the REDO SQL and reapply all of the changes after that point in time, except the problem changes.

The following diagram shows how Log Master uses the Range definition and WHERE clause to generate REDO SQL.

For REDO SQL, Log Master starts by selecting all of the log records that are

  • Within the table spaces defined (directly or indirectly) by your WHERE clause

    To generate REDO SQL, your WHERE clause must refer to at least one specific DB2 object, such as a table space, table, or column.

  • Within the period between the REDO recovery point and the current time

    To generate REDO SQL, you must define a REDO recovery point. For more information, see REDO FROM.


From this set of log records, Log Master excludes all of the log records that are

  • Defined by your WHERE clause

  • Defined in your Range definition (the FROM and TO values)

Note

You should understand the difference between MIGRATE and REDO SQL. Both types duplicate the original SQL. If the original SQL contains an INSERT statement, both MIGRATE and REDO SQL contain duplicate INSERT statements. The differences are as follows:

  • MIGRATE SQL contains all of the changes within the applicable table spaces that satisfy the WHERE clause; REDO SQL contains all of the changes that do not satisfy the WHERE clause.

  • MIGRATE SQL contains all changes within your Range definition; REDO SQL contains all of the changes from the REDO recovery point to the current time that are not within your Range definition.

BMC Software strongly recommends that you generate REDO SQL before you perform the recovery. You might be able to generate the SQL after the recovery by using the PROCESS PITS keyword of the OPTION statement, but this practice is not recommended. For more information, see PROCESS PITS.

Also remember that when you generate REDO SQL, Log Master extends the end point of the log scan to the current date and time. This action causes Log Master to process more log records than you might expect.
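
Putting the pieces together, a REDO definition might be sketched as follows. The layout is an assumption based on this topic; the bracketed operands are placeholders, and the REDO recovery point is specified with the REDO FROM keyword.

  LOGSCAN
     FROM <start of problem changes> TO <end of problem changes>
     WHERE <selection criteria naming the affected table spaces or tables>
     SQL TYPE REDO FROM <REDO recovery point>
        DATASET <data set name and allocation parameters>

After you recover the table spaces to a point in time before the problem changes, executing the generated SQL reapplies every change from the REDO recovery point to the current time except the changes selected by the Range definition and WHERE clause.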

DATASET Output definition

Specifies the characteristics of the output data set that contains the generated SQL. For more information, see LOGSCAN output definition.

TEMPLATE Output definition

Specifies the characteristics for the output data set that contains the template information for the generated SQL. For more information, see LOGSCAN output definition.

The template file is optional, but BMC Software recommends that you create it if you intend to execute the generated SQL with Log Master (either by using an EXECSQL statement or by using a separate job or job step to run the High-speed Apply Engine). The template file

  • Improves performance when the High-speed Apply executes the generated SQL

  • Contains one entry for each distinct statement type in the generated SQL

  • Provides information that High-speed Apply uses to generate a static plan for the SQL statements

  • Can be used with a version of the High-speed Apply Engine against a DB2 for LUW target on distributed systems platforms (download both the SQL file and the template file, edit the first line of the SQL file to replace the previous name with the fully qualified path name of the template file, and execute the SQL file with High-speed Apply)

For more information about the High-speed Apply Engine that is distributed with Log Master, see the Log Master for DB2 documentation.
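
A sketch of an SQL output definition that creates both data sets follows; the layout is an assumption, the bracketed operands are placeholders, and the full operand syntax is described in LOGSCAN output definition.

  SQL TYPE MIGRATE
     DATASET  <SQL data set name and allocation parameters>
     TEMPLATE <template data set name and allocation parameters>

Creating the template data set alongside the SQL data set allows a later EXECSQL statement, or a separate High-speed Apply Engine job, to use the template information when it applies the generated SQL.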

RECFM

Determines the record format of the output SQL file.

  • FB: (default) Indicates that records are fixed length and blocked.

  • VB: Indicates that records are variable length and blocked. This value allows individual SQL statements to be written on a single record, regardless of their length. This type of file can be easier to process on other platforms. When you specify this value, the following items apply:

    • Log Master ignores the presence of the VERBOSE keyword.

    • Dynamic SQL processors (SPUFI or DSNTEP2) might not process the file.

    • Log Master sets the logical record length of the output file (LRECL) to the system-determined block size (SDB) of the storage device where the output file is written (minus 4 bytes).
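
For example, to request a variable-length, blocked output file, you would add RECFM VB to the SQL output definition, roughly as in the following sketch (the layout and bracketed operand are assumptions):

  SQL TYPE MIGRATE
     DATASET <SQL data set name and allocation parameters>
     RECFM VB

With RECFM VB, each generated SQL statement can be written as a single record, subject to the considerations listed above.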

INCLUDE DDL

Indicates whether Log Master generates data definition language (DDL) statements and includes them in the same output as the generated SQL. Log Master generates DDL to match the SQL type (MIGRATE or UNDO) that you requested. Log Master does not generate REDO DDL.

  • YES: Log Master includes DDL statements in the generated SQL.

  • NO: Log Master does not include DDL statements.

If you omit the INCLUDE DDL keyword, the default value is NO (Log Master does not include DDL statements). If you specify the INCLUDE DDL keyword, but omit the YES or NO keywords, the default value is YES (DDL statements are included).

INCLUDE RI

Indicates whether Log Master is to include log records that reflect changes resulting from referential integrity (RI) constraints when generating SQL output. This keyword does not cause Log Master to add tables to the filter; it only determines whether Log Master includes RI changes (in addition to normal changes) for the tables that are selected by the filter. To include a complete set of RI changes, you must specify all of the tables involved in the RI constraints in your WHERE clause or filter.

Note

Correctly setting this parameter can be complicated. For more information, see the discussion of working with changes resulting from referential integrity in the Log Master for DB2 documentation.

The default value for this keyword depends on the type of SQL that you are generating:

  • REDO: The default value is NO. Normally, you do not need RI updates when you use REDO SQL to reapply transactions after a recovery action.

  • MIGRATE: The default value is NO. Whether to include RI updates depends on how the RI constraints are defined on the target system (see the considerations for migration below).

  • UNDO: The default value is YES. Normally, you need RI updates when you use UNDO SQL to back out transactions.

  • YES: Log Master includes changes that result from referential integrity constraints.

  • NO: Log Master does not include referential integrity changes. If you have specified any additional tables in your log scan, as mentioned in the preceding paragraphs, Log Master still includes any changes to those tables that do not result from referential integrity constraints.

Considerations for migration:

This keyword can be important when you migrate data from a source database environment to a target environment. During migration, your decision to include RI changes depends on how the RI constraints are defined in the target environment, as follows:

  • If the RI constraints are the same in the target and source environments, specify INCLUDE RI NO. You probably do not want to include the additional RI changes because the constraints can cause duplicate changes.

  • If the RI constraints in the target environment are not the same as in the source environment, you might want to include the RI changes (INCLUDE RI YES). If so, include all of the affected tables in the WHERE clause or filter.

INCLUDE TRIGGER

Indicates whether Log Master includes any log records in the generated SQL that reflect changes that result from activity defined within a trigger. The activity can be performed directly within the trigger or by other processes nested beneath the trigger (for example, in a stored procedure called from within a trigger).

  • YES: (default) Log Master includes changes that result from trigger activity. This value is the default for consistency with earlier versions of the product.

  • NO: Log Master does not include trigger changes. If you have specified any additional tables in your log scan, Log Master still includes any changes to those tables that do not result from trigger activity.

This keyword does not cause Log Master to search additional tables for changes resulting from triggers; it only determines whether Log Master includes the trigger changes that exist in the log records selected by your log scan. To include a complete set of trigger changes, you must specify all of the tables affected by trigger activity in your WHERE clause or filter.

Use this keyword when a target database environment defines the same triggers and relationships as the source environment. In this situation, specify NO. Log Master excludes any trigger records from the output file. When the generated output is processed in the target environment, the triggers are activated as they were in the source environment, and you avoid any duplication of database changes. Before you generate and execute MIGRATE SQL, consider the triggers and other constraints that are defined in the target environment carefully.
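
For example, when the target environment defines the same RI constraints and triggers as the source, the guidance above suggests excluding both kinds of derived changes so that the target's own constraints and triggers reproduce them. A sketch follows (keyword placement is an assumption, and the bracketed operand is a placeholder):

  SQL TYPE MIGRATE
     DATASET <SQL data set name and allocation parameters>
     INCLUDE RI NO
     INCLUDE TRIGGER NO

When the generated SQL is applied in the target environment, the target's constraints and triggers fire as they did in the source, and the derived changes are not applied a second time.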

To select log records resulting from trigger activity, a log scan’s input files must be either

  • Generated by DB2 (for DB2 log files, from the BSDS or specified by INPUT DB2LOG)

  • Generated by Log Master version 7.3.00 or later running against DB2 (for input logical log files)

INCLUDE ROLLBACK

Determines whether Log Master includes information from compensated log records in the output SQL file. Use the default value of NO for normal processing. In this context, compensated log records are the log records of database changes that DB2 subsequently 'compensates for' or 'reverses' (for example, log records of changes that are later reversed by a ROLLBACK statement). Depending on the value of this keyword, Log Master includes or excludes compensated log records. For more information, see INCLUDE ROLLBACK in LOGSCAN report definition.

  • NO: Directs Log Master to exclude compensated log records. The output SQL file does not reflect any compensated log records. Use this default value for normal processing.

  • YES: Directs Log Master to include compensated log records. The output SQL file reflects both compensated log records and noncompensated log records.

  • ONLY: Directs Log Master to include only compensated log records. The output SQL file does not reflect any noncompensated log records.

Warning

Exercise caution before specifying a value of YES or ONLY. In most circumstances, when you include compensated log records, your output SQL file includes data that has not been applied to the original DB2 database. For example, if you migrate data from one database to another and you use an output SQL file that reflects compensated log records, you can corrupt the data in your target database.

If you do not specify INCLUDE ROLLBACK, Log Master ignores compensated log records. If you specify INCLUDE ROLLBACK without specifying YES or NO, the default value is YES (Log Master includes both compensated and noncompensated log records).

INCLUDE LOBS

Determines whether Log Master includes the data from large object (LOB) columns in the output SQL file. When the value of this keyword is YES, Log Master writes temporary LOB VSAM files during processing, but does not save them permanently to disk unless the log scan includes an output logical log file.

Log Master generates one LOB VSAM file for each LOB column (or each partition of a LOB column) that occurs in the selected log records. Log Master allocates an additional LOB VSAM file only when it has filled the initial data set and all possible extents, but more LOB column data remains to be written. Depending on your environment and your LOB data, the additional disk I/O required to process LOB VSAM files can slow Log Master performance.

Log Master includes the LOB data as part of the generated SQL statements (by using character representation of hexadecimal values, for example x'407AC2'). If the length of a statement exceeds the limit imposed by your version of DB2, Log Master issues a warning message, omits the statement, and continues processing.

Log Master cannot include LOB column data in the UNDO SQL that is generated for delete and some insert actions. For more information, see Large volume columns and SQL.

For more information about how Log Master processes LOB columns and data, see the Log Master for DB2 documentation.

  • YES: Directs Log Master to include LOB data in the output SQL file. Log Master writes temporary LOB VSAM files and includes LOB data in the generated SQL statements.

  • NO: Directs Log Master to exclude LOB data from the output SQL file.

  • INLINE: Directs Log Master to include only inline LOB data in the output SQL file, ignoring the auxiliary LOB tables. Log Master writes permanent LOB VSAM files and includes LOB data in the generated SQL statements.

If you do not specify INCLUDE LOBS, Log Master does not include LOB data. If you specify INCLUDE LOBS without specifying YES, NO, or INLINE, the default value is YES (Log Master includes LOB data).

INCLUDE XML

Determines whether Log Master includes the data from XML columns in the output SQL file. When the value of this keyword is YES, Log Master writes temporary XML VSAM files during processing, but does not save them permanently to disk unless the log scan includes an output logical log file.

Log Master generates one XML VSAM file for each XML column (or each partition of an XML column) that occurs in the selected log records. Log Master allocates an additional XML VSAM file only when it has filled the initial data set and all possible extents, but more XML column data remains to be written. Depending on your environment and your XML data, the additional disk I/O required to process XML VSAM files can slow Log Master performance.

Log Master includes the XML data as part of the generated SQL statements. If the length of a statement exceeds the limit imposed by your version of DB2, Log Master issues a warning message, omits the statement, and continues processing.

  • YES: Directs Log Master to include XML data in the output SQL file. Log Master writes temporary XML VSAM files and includes XML data in the generated SQL statements.

  • NO: Directs Log Master to exclude XML data from the output SQL file.

If you do not specify INCLUDE XML, Log Master does not include XML data. If you specify INCLUDE XML without specifying YES or NO, the default value is YES (Log Master includes XML data).

INCLUDE HISTORY

Indicates whether Log Master includes changes to history tables that are associated with system-maintained temporal tables in the generated SQL. This keyword does not cause Log Master to search additional tables for changes; it only determines whether Log Master includes the history changes that exist in the log records that are selected by your log scan.

DB2 does not have a flag in the history log record to indicate whether the record was logged as a result of activity performed on the associated system-maintained table, or as a result of DML activities performed on the history table by a user. Therefore, Log Master includes all history records in the generated SQL when the value of this keyword is YES, and the history tables are explicitly or implicitly specified in the WHERE clause or filter.

When you generate MIGRATE or REDO SQL, the default value for this keyword is NO. When you generate UNDO SQL, the default value is YES.

  • YES: Log Master includes all changes to the history tables in the output SQL file. You must also specify the history tables in the WHERE clause or filter.

  • NO: Log Master does not include changes to history tables in the output SQL file, even if you specify the history tables in the WHERE clause or filter.

Considerations for including history changes

When you work with system-maintained temporal tables, be aware of the following points:

  • Because the SYSTEM_TIME start and end columns are defined as GENERATED ALWAYS, any SQL output that includes activities that were performed on the system-maintained table might not be executable. To exclude the GENERATED ALWAYS columns from the SQL output, use the EXCLUDE COLUMNS clause in the log scan statement. For more information, see LOGSCAN column include/exclude definition.

  • When you generate MIGRATE SQL, if you include history changes, the SQL might not be executable, or might not keep the system-maintained and the history tables in sync. For instance, when applying the generated SQL, the versioning relationship might result in duplicate INSERTS to the history table. In addition, DB2 generates new timestamp values for the GENERATED ALWAYS columns when the migrated SQL is applied.

  • When you perform an UNDO or drop recovery, unpredictable results can occur if you attempt to recover the base table and its history table as a set, and also attempt to maintain history information.

  • The INCLUDE HISTORY keyword can be important when you migrate data from a source database environment to a target environment. During migration, your decision to include history table changes depends on how the tables are defined in the target environment:

    • If the history tables are the same in the target environment as in the source environment, you probably do not want to include the additional history changes because the constraints can cause duplicate changes. Set INCLUDE HISTORY to NO.

    • If the history tables in the target environment are not set up the same as the source environment, you might want to include the history tables. If so, include all of the affected tables in the WHERE clause or filter, and set INCLUDE HISTORY to YES.

      Note

      You might need to disable the versioning relationship between the target system-maintained and history tables. Doing so ensures that, when updates are applied to the base table, the history table does not record the wrong history information. After the migration is complete, enable versioning again.

  • The INCLUDE HISTORY keyword is not valid for report and logical log outputs; however, if the filter includes a history table, Log Master includes all history records in the output.
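
As an example, to back out transactions against a system-maintained temporal table together with its history table, you would name both tables in your WHERE clause or filter and rely on the UNDO default of INCLUDE HISTORY YES, roughly as in this sketch (the layout and bracketed operands are assumptions):

  LOGSCAN
     FROM <start of range> TO <end of range>
     WHERE <selection criteria naming both the base table and its history table>
     SQL TYPE UNDO
        DATASET <data set name and allocation parameters>
        INCLUDE HISTORY YES

Keep the preceding considerations in mind: the GENERATED ALWAYS columns of the SYSTEM_TIME period can make the generated SQL non-executable unless you exclude them, and recovering the base table and its history table as a set while maintaining history information can produce unpredictable results.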




