Migrating databases


The database toolkit is a command line tool that performs operations specific to the BMC Client Management database.

As a database administrator, use the database toolkit to manage database operations. For instance, the toolkit’s copy command facilitates migrating a source database to a destination, such as transferring a Client Management database from an Oracle instance to a PostgreSQL instance.

Important

This tool should only be used for the BMC Client Management database, as it is designed to perform operations specific to the BCM database format and requirements. It should not be used to apply commands to unrelated databases.

The tool offers two commands: copy database and update values.

The copy command connects to the two databases, collects the required information, and performs a comparison to determine the operations to be executed on the destination database, including the following:

  • Cleaning up the destination database.
  • Initializing the destination database using the BMC Client Management SQL scripts.
  • Updating the destination database to create dynamic objects, that is, objects not created by the SQL scripts.
  • Populating the destination database by copying the row values.

Important

An event filter mechanism is designed to drive the execution of the copy command. This mechanism defines the expected behavior when specific events occur, such as MISSING_TABLE or MISSING_COLUMN. The mechanism also helps identify the correct names, as database systems often alter object names. Oracle, for example, defaults to uppercase for all names.
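
Each filter entry pairs an event with an action (for example, ACCEPT or IGNORE). As a preview of the definitions detailed in the Filters section below, a minimal entry that accepts a missing table looks like the following:

{
     "name" : "Create DatabaseInformation table.",
     "event" : {
        "name" : "MISSING_TABLE",
        "context" : {
           "table" : "DatabaseInformation"
        }
     },
     "action" : {
        "name" : "ACCEPT"
     }
  }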

The complete list of commands and parameters is available in the Command help output parameters section at the end of this topic.


To copy a database

The copy database command is designed to assist customers in migrating to a different database system. For example, customers using an Oracle database may want to switch to PostgreSQL. This command facilitates the transfer, allowing the environment to operate with the new database system.

The command first connects to the source database to gather relevant information and then connects to the destination database to collect data there as well. Using the gathered information, the command calculates and executes the necessary operations. The collected data includes:

  • The default tablespace for the database.
  • The tablespaces in the database.

    Important

    Tablespaces may be shared across multiple databases. Therefore, the command will only collect the tablespaces related to the BMC Client Management database.

  • The tables.
  • The columns for the collected tables, along with their data type.
  • The primary key constraints.
  • The foreign key constraints.
  • The indexes.
  • The functions.
  • The views.

The collected data is categorized into two datasets:

  • Static objects: Static objects are those created by the default BMC Client Management database creation script.
  • Dynamic objects: The system generates dynamic objects throughout the product lifecycle.

Example

The devices table is a static object created by the default SQL script, while various inventory tables are dynamically created based on the system configuration. As a result, all BMC Client Management environments will share the static objects, but there may be variations in the dynamically created objects.

After connecting to both databases and collecting information from each, the command first verifies that the source database includes all the static objects. The static objects, created by the default SQL script, must be present in the source database. The command then checks if the destination database needs initialization. Initialization is required if any static object is missing or incompatible with the expected format.

For example, a table might require a text-based column, but the column may be defined as numeric instead. 

If all static objects are missing, a simple initialization is enough. However, if any object is incompatible, a cleanup of the destination database is needed first. This cleanup removes all existing static objects to allow the initialization script to run properly.

For example, if the Devices table needs to be created but already exists, it will be removed during the cleanup.

Once the destination database is initialized (with or without a preliminary cleanup), the command identifies the dynamic objects to create, including tables, columns, constraints, and indexes. After the final objects are created in the destination database, the two databases will be compatible, allowing the rows to be copied from the source to the destination. This final step is the destination database population, where all rows for both static and dynamic tables are copied from the source to the destination.

By default, the command does nothing unless a specific command line switch is provided. To clean up, initialize, update, and populate the destination database, use the -clear, -init, -update, and -populate command line parameters. If these switches are not supplied, the command notifies you that operations need to be executed and specifies which command line parameter is required to proceed. Some operations may take time; their progress is displayed as percentages.
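
For example, a run that authorizes all four operations simply appends these switches to the copy database command. The following is a sketch only; the server names, credentials, and paths are placeholders that must be adapted to your environment:

mtxdbt.exe copy database -db-type Oracle -db-host my-oracle-server -db-port 1521 -db-name my-db-name -db-user my-user -db-pass my-password -dst-db-type Postgres -dst-db-host my-postgresql-server -dst-db-port 5432 -dst-db-name my-db-name -dst-db-user my-user -dst-db-pass my-password -sql-files-path "my_master_installation_dir\data\Vision64Database\sql" -input-filters-path ./filters.json -clear -init -update -populate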

Additionally, the row population is incremental, so you can stop the database copy during this step. The next execution of the command will resume from the table that was being copied when the process was stopped.


Tablespace

The database copy command may need special configuration regarding the tablespaces. By default, the SQL script used to initialize the destination database uses standard tablespace names with the BCM_ prefix.

There are different use cases:

  • The destination database can keep these default tablespace names.
  • The destination database must use different tablespace names.
  • The source database is using the default tablespace names.
  • The source database is using different tablespace names.

All these situations can be handled using the tablespace settings. The -output-tablespaces-path command line parameter can be used to write the default settings to disk. Once modified, the -input-tablespaces-path command line parameter can be used to instruct the command on how to handle the tablespaces during the database copy.
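
A typical workflow, sketched below, first exports the default settings, edits the file, and then supplies it on the actual copy run. This sketch assumes the export can be done on a preliminary run without the operation switches; the connection parameters are abbreviated and must be replaced with the values for your environment:

mtxdbt.exe copy database <source and destination parameters> -sql-files-path "my_master_installation_dir\data\Vision64Database\sql" -output-tablespaces-path ./tablespaces.json

After editing tablespaces.json, run the copy again with the settings as input:

mtxdbt.exe copy database <source and destination parameters> -sql-files-path "my_master_installation_dir\data\Vision64Database\sql" -input-tablespaces-path ./tablespaces.json -clear -init -update -populate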

The tablespace settings are defined in a JSON file format, which consists of two sections:

Custom

The custom section defines the tablespace names to be used when initializing the destination database. By default, the settings indicate that the destination database should use the default tablespace names:

"custom" : {
     "BCM_DATA" : "BCM_DATA",
     "BCM_INV" : "BCM_INV",
     "BCM_INDEX" : "BCM_INDEX",
     "BCM_WQ" : "BCM_WQ",
     "BCM_EL" : "BCM_EL",
     "BCM_INVINDEX" : "BCM_INVINDEX",
     "BCM_PATCH" : "BCM_PATCH",
     "BCM_PATCHINDEX" : "BCM_PATCHINDEX",
     "BCM_ELINDEX" : "BCM_ELINDEX",
     "BCM_WQINDEX" : "BCM_WQINDEX",
     "BCM_DELTAINV" : "BCM_DELTAINV",
     "BCM_DELTAINDEX" : "BCM_DELTAINDEX",
     "BCM_VM" : "BCM_VM",
     "BCM_VMINDEX" : "BCM_VMINDEX",
     "BCM_ESIDKB" : "BCM_ESIDKB",
     "BCM_ESIDKBINDEX" : "BCM_ESIDKBINDEX",
     "BCM_ESIDDATA" : "BCM_ESIDDATA",
     "BCM_ESIDDATAINDEX" : "BCM_ESIDDATAINDEX",
     "BCM_SCAP" : "BCM_SCAP",
     "BCM_SCAPINDEX" : "BCM_SCAPINDEX"
  }

With the configuration shown above, the default SQL script will retain the BCM_ prefixed tablespace names when initializing the destination database. If the tablespace names need to be changed, the custom section must be updated accordingly:

"custom" : {
     "BCM_DATA" : "AMP_DATA",
     "BCM_INV" : "AMP_INV",
     "BCM_INDEX" : "AMP_INDEX",
     "BCM_WQ" : "AMP_WQ",
     "BCM_EL" : "AMP_EL",
     "BCM_INVINDEX" : "AMP_INVINDEX",
     "BCM_PATCH" : "AMP_PATCH",
     "BCM_PATCHINDEX" : "AMP_PATCHINDEX",
     "BCM_ELINDEX" : "AMP_ELINDEX",
     "BCM_WQINDEX" : "AMP_WQINDEX",
     "BCM_DELTAINV" : "AMP_DELTAINV",
     "BCM_DELTAINDEX" : "AMP_DELTAINDEX",
     "BCM_VM" : "AMP_VM",
     "BCM_VMINDEX" : "AMP_VMINDEX",
     "BCM_ESIDKB" : "AMP_ESIDKB",
     "BCM_ESIDKBINDEX" : "AMP_ESIDKBINDEX",
     "BCM_ESIDDATA" : "AMP_ESIDDATA",
     "BCM_ESIDDATAINDEX" : "AMP_ESIDDATAINDEX",
     "BCM_SCAP" : "AMP_SCAP",
     "BCM_SCAPINDEX" : "AMP_SCAPINDEX"
  }

With the configuration above, all tablespaces are set up. If the default SQL script is used to initialize the destination database, tablespace names with the AMP_ prefix will be applied instead. Each entry can be customized independently.

Rename


The rename section applies to the source database and allows you to rename the source database tablespace names before the copy process. By default, the rename section is empty, meaning the destination database will use the same tablespace names as the source database. For example, if the source database has a BCM_DATA tablespace, the destination database will also use BCM_DATA. If you need to rename the source tablespaces for the destination database, you can configure the rename section to assign different names.

"rename" : {
     "BCM_DATA" : "AMP_DATA",
     "BCM_INV" : "AMP_INV",
     "BCM_INDEX" : "AMP_INDEX",
     "BCM_WQ" : "AMP_WQ",
     "BCM_EL" : "AMP_EL",
     "BCM_INVINDEX" : "AMP_INVINDEX",
     "BCM_PATCH" : "AMP_PATCH",
     "BCM_PATCHINDEX" : "AMP_PATCHINDEX",
     "BCM_ELINDEX" : "AMP_ELINDEX",
     "BCM_WQINDEX" : "AMP_WQINDEX",
     "BCM_DELTAINV" : "AMP_DELTAINV",
     "BCM_DELTAINDEX" : "AMP_DELTAINDEX",
     "BCM_VM" : "AMP_VM",
     "BCM_VMINDEX" : "AMP_VMINDEX",
     "BCM_ESIDKB" : "AMP_ESIDKB",
     "BCM_ESIDKBINDEX" : "AMP_ESIDKBINDEX",
     "BCM_ESIDDATA" : "AMP_ESIDDATA",
     "BCM_ESIDDATAINDEX" : "AMP_ESIDDATAINDEX",
     "BCM_SCAP" : "AMP_SCAP",
     "BCM_SCAPINDEX" : "AMP_SCAPINDEX"
  }

With the configuration above, the command will rename tablespaces. For example, the BCM_SCAP tablespace from the source database will be renamed to AMP_SCAP in the destination database. The rename section also allows you to restore the default tablespace names.

The following configuration assumes the source database uses legacy tablespace names, but the destination database will use the modern names instead:

"rename" : {
     "AMP_DATA" : "BCM_DATA",
     "AMP_INV" : "BCM_INV",
     "AMP_INDEX" : "BCM_INDEX",
     "AMP_WQ" : "BCM_WQ",
     "AMP_EL" : "BCM_EL",
     "AMP_INVINDEX" : "BCM_INVINDEX",
     "AMP_PATCH" : "BCM_PATCH",
     "AMP_PATCHINDEX" : "BCM_PATCHINDEX",
     "AMP_ELINDEX" : "BCM_ELINDEX",
     "AMP_WQINDEX" : "BCM_WQINDEX",
     "AMP_DELTAINV" : "BCM_DELTAINV",
     "AMP_DELTAINDEX" : "BCM_DELTAINDEX",
     "AMP_VM" : "BCM_VM",
     "AMP_VMINDEX" : "BCM_VMINDEX",
     "AMP_ESIDKB" : "BCM_ESIDKB",
     "AMP_ESIDKBINDEX" : "BCM_ESIDKBINDEX",
     "AMP_ESIDDATA" : "BCM_ESIDDATA",
     "AMP_ESIDDATAINDEX" : "BCM_ESIDDATAINDEX",
     "AMP_SCAP" : "BCM_SCAP",
     "AMP_SCAPINDEX" : "BCM_SCAPINDEX"
  }

By using these custom and rename sections, all the different use cases can be handled.

Filters

The database copy operation is driven by filters, specifically during the update operation. While creating objects from the default SQL script is straightforward, the update operation is responsible for creating the dynamic objects. Filters are used to make decisions during this operation and follow an event-driven model. The command triggers events during execution whenever specific conditions are met, and the events defined in the filters are used to resolve each situation.

For example, when dynamic objects are collected from the source database, the filters will determine whether these objects need to be created and copied to the destination database or if they can be ignored. If they need to be processed, the actual object names are retrieved from the filters. This is necessary because different database systems have different naming conventions. By default, SQL Server keeps the names as they are, while Oracle converts them to uppercase and PostgreSQL to lowercase. When copying objects between database systems, it is essential to retrieve their original names, and the filters help facilitate this process.

The filters support the following event types:

MISSING_TABLE

This event is triggered when a dynamic table (one not created by the default SQL script) is collected from the source database but not found in the destination database. The goal is to determine whether the table should be created and copied to the destination database or if it can be ignored.

The following event definition indicates that table AppListsAppDevAssignments must be ignored during the database copy:

{
     "name" : "Ignore legacy tables.",
     "event" : {
        "name" : "MISSING_TABLE",
        "context" : {
           "table" : [ "AppListsAppDevAssignments" ]
        }
     },
     "action" : {
        "name" : "IGNORE"
     }
  }

The following event definition indicates that table DatabaseInformation must be created and copied:

{
     "name" : "Create DatabaseInformation table.",
     "event" : {
        "name" : "MISSING_TABLE",
        "context" : {
           "table" : "DatabaseInformation"
        }
     },
     "action" : {
        "name" : "ACCEPT"
     }
  }

Table names can be defined as single string values or as an array of string values for the table key. All the collected tables must match at least one event. Events are processed in the order they are defined in the filters file. It is possible to use wildcards to group definitions:

{
     "name" : "Create inventory table.",
     "event" : {
        "name" : "MISSING_TABLE",
        "context" : {
           "table" : [
              "HwInv_ObjType_*",
              "SecInv_ObjType_*",
              "PowerManagement_ObjType_*",
              "SPMInv_ObjType_*",
              "CustInv_ObjType_*"
            ]
        }
     },
     "action" : {
        "name" : "ACCEPT"
     }
  }

MISSING_COLUMN


This event is triggered when a column from a source database table is not found in the corresponding table of the destination database. The purpose is to determine whether the column should be ignored or if it needs to be created and copied into the destination database. It’s important to note that columns can be detected in both static datasets (tables created by the default SQL script) and dynamic datasets (tables created dynamically). Both datasets must be checked, as columns may be dynamically created even in static tables.

The following event definition indicates some columns, if they exist in the source database, shall be ignored during the database copy:

{
     "name" : "Ignore Devices.LowerDeviceName and Devices.LowerDomainName columns.",
     "event" : {
        "name" : "MISSING_COLUMN",
        "context" : {
           "table" : "Devices",
           "column" : [ "LowerDeviceName", "LowerDomainName" ]
        }
     },
     "action" : {
        "name" : "IGNORE"
     }
  }

The following event definition indicates that some columns must be created and copied from the source database to the destination database. This definition includes grouped table names and column names. Therefore, any matching pair will work, such as HwInv_ObjType_6150.DeviceID or SPMInv_ObjType_3000.ATTR_300001.

{
     "name" : "Create inventory table columns.",
     "event" : {
        "name" : "MISSING_COLUMN",
        "context" : {
           "table" : [
              "HwInv_ObjType_*",
              "SecInv_ObjType_*",
              "PowerManagement_ObjType_*",
              "SPMInv_ObjType_*",
              "CustInv_ObjType_*"
            ],
           "column" : [
              "InventoryID:BIGINT",
              "DeviceID:INT",
              "ObjectTypeID:INT",
              "InstanceName",
              "Name",
              "SuppliedByDevice:SMALLINT",
              "CreateAdmin",
              "CreateTime",
              "ModifyAdmin",
              "ModifyTime",
              "ATTR_*"
            ]
        }
     },
     "action" : {
        "name" : "ACCEPT"
     }
  }

When the command accepts a MISSING_TABLE or MISSING_COLUMN event, it will retrieve the real name from the filter event definition. The matching operation is case-insensitive, so the table name HwInv_ObjType_* will match HwInv_ObjType_3000, but also HWINV_OBJTYPE_3000 and hwinv_objtype_3000. In this case, the filter definition provides the real name, allowing the tables to be created with the correct values. This is necessary because dynamic tables in an Oracle source database are listed with uppercase names, while in PostgreSQL they are returned in lowercase. The names must be converted to ensure the creation statements use the correct object names. This rule applies to all objects created outside the default SQL script, including tables, columns, constraints, and indexes.

For columns, the filter can also determine the data type and precision. The following hints are available:

  • COLUMNNAME:BIGINT - The column must be created using a big integer data type.
  • COLUMNNAME:INT - The column must be created using a standard integer data type.
  • COLUMNNAME:SMALLINT - The column must be created using a small integer data type.
  • COLUMNNAME:FLOAT - The column must be created using a floating point data type.
  • COLUMNNAME:DATE - The column must be created using a date data type.
  • COLUMNNAME:VARCHAR(PRECISION) - The column must be created using a variable text data type. The character count must be defined as PRECISION.

MISSING_PKEY


This event is triggered when a primary key is collected from the source database but does not exist in the destination database. The goal is to determine whether this constraint should be ignored or created in the destination database.

The following event definition indicates some primary keys must be created, providing the correct constraint name:

{
     "name" : "Create inventory table primary key.",
     "event" : {
        "name" : "MISSING_PKEY",
        "context" : {
           "table" : [
              "HwInv_ObjType_*",
              "SecInv_ObjType_*",
              "PowerManagement_ObjType_*",
              "SPMInv_ObjType_*",
              "CustInv_ObjType_*"
            ],
           "pkey" : [ "HW*_PK", "SEC*_PK", "PWM*_PK", "SPM*_PK", "CST*_PK" ]
        }
     },
     "action" : {
        "name" : "ACCEPT"
     }
  }

MISSING_FKEY


This event is triggered when a foreign key is collected from the source database but does not exist in the destination database. The goal is to determine whether this constraint should be ignored or created in the destination database.

The following event definition indicates some foreign keys must be created, providing the correct constraint name:

{
     "name" : "Create inventory table foreign key.",
     "event" : {
        "name" : "MISSING_FKEY",
        "context" : {
           "table" : [
              "HwInv_ObjType_*",
              "SecInv_ObjType_*",
              "PowerManagement_ObjType_*",
              "SPMInv_ObjType_*",
              "CustInv_ObjType_*"
            ],
           "fkey" : [ "HW*_FK", "SEC*_FK", "PWM*_FK", "SPM*_FK", "CST*_FK" ]
        }
     },
     "action" : {
        "name" : "ACCEPT"
     }
  }

MISSING_INDEX


This event is triggered when an index is collected from the source database but does not exist in the destination database. The goal is to determine whether this index should be ignored or created in the destination database.

The following event definition indicates some indexes must be created, providing the correct index name:

{
     "name" : "Create inventory table index.",
     "event" : {
        "name" : "MISSING_INDEX",
        "context" : {
           "table" : [
              "HwInv_ObjType_*",
              "SecInv_ObjType_*",
              "PowerManagement_ObjType_*",
              "SPMInv_ObjType_*",
              "CustInv_ObjType_*"
            ],
           "index" : [ "HW*_FK", "SEC*_FK", "PWM*_FK", "SPM*_FK", "CST*_FK" ]
        }
     },
     "action" : {
        "name" : "ACCEPT"
     }
  }

COPY_TABLE


This event is triggered before a table copy. The goal is to provide the option to ignore certain tables during the copy process. Even if a table is ignored using a MISSING_TABLE event definition, it must also be ignored with a COPY_TABLE event definition. Table creation and copying are two separate operations. Additionally, all collected tables must match at least one COPY_TABLE event definition, as there is no default behavior.
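
For example, if the AppListsAppDevAssignments table is ignored through a MISSING_TABLE definition, a matching COPY_TABLE definition is also required so that the table is skipped during population. The following sketch follows the same pattern as the other definitions; because events are tested in the order they are defined, it must appear before the catch-all definition shown below:

{
     "name" : "Do not copy legacy tables.",
     "event" : {
        "name" : "COPY_TABLE",
        "context" : {
           "table" : [ "AppListsAppDevAssignments" ]
        }
     },
     "action" : {
        "name" : "IGNORE"
     }
  }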

The following event definition indicates that all the tables must be copied by default:

{
     "name" : "Copy all tables.",
     "event" : {
        "name" : "COPY_TABLE",
        "context" : {
           "table" : "*"
        }
     },
     "action" : {
        "name" : "ACCEPT"
     }
  }

COPY_COLUMN


This event is triggered before a column copy. The goal is to provide the option to ignore certain columns during the copy process. The COPY_TABLE and COPY_COLUMN events allow for customization of the copy process at the table and column level. Note that all collected columns must match at least one COPY_COLUMN event definition, as there is no default behavior.

The following event definition indicates some columns must be ignored during the copy:

{
     "name" : "Ignore Devices.LowerDeviceName and Devices.LowerDomainName columns.",
     "event" : {
        "name" : "COPY_COLUMN",
        "context" : {
           "table" : "Devices",
           "column" : [ "LowerDeviceName", "LowerDomainName" ]
        }
     },
     "action" : {
        "name" : "IGNORE"
     }
  }

The following event definition indicates that all the columns must be copied by default. It should appear after the event definition above as the command will test the events in the order they are defined in the filters file:

{
     "name" : "Copy all columns.",
     "event" : {
        "name" : "COPY_COLUMN",
        "context" : {
           "table" : "*",
           "column" : "*"
        }
     },
     "action" : {
        "name" : "ACCEPT"
     }
  }

Example

Shown below are some example command line executions:

From SQL Server to PostgreSQL

mtxdbt.exe copy database -db-type ODBCDirect -db-host my-sql-server -db-name my-db-name -db-cstr "Driver={ODBC Driver 18 for SQL Server};Encrypt=yes;TrustServerCertificate=yes" -db-user my-user -db-pass my-password -dst-db-type Postgres -dst-db-host my-postgresql-server -dst-db-port 5432 -dst-db-name my-db-name -dst-db-user my-user -dst-db-pass my-password -sql-files-path "my_master_installation_dir\data\Vision64Database\sql" -input-filters-path ./filters.json

From Oracle to SQL Server

mtxdbt.exe copy database -db-type Oracle -db-host my-oracle-server -db-port 1521 -db-name my-db-name -db-user my-user -db-pass my-password -dst-db-type ODBCDirect -dst-db-host my-sql-server -dst-db-name my-db-name -dst-db-cstr "Driver={ODBC Driver 18 for SQL Server};Encrypt=yes;TrustServerCertificate=yes" -dst-db-user my-user -dst-db-pass my-password -sql-files-path "my_master_installation_dir\master\data\Vision64Database\sql" -input-filters-path ./filters.json -input-tablespaces-path ./tablespaces.json -debug


For more information, refer to the detailed documentation on command line parameters. Note that depending on the operating system running the command, not all database drivers may be available:

  • Windows: Supports SQL Server, Oracle and PostgreSQL
  • Linux: Supports Oracle and PostgreSQL
  • macOS: Supports PostgreSQL


To update database values

The update values command is designed to help customers update predefined column rows in their BMC Client Management database, specifically those containing file paths. While the copy database command helps customers migrate to a different database system, it can also be used to change the operating system running the BMC Client Management Master. In this case, file paths stored in the database may need to be updated. Additionally, it’s possible to move the Master from folder A to folder B without changing the underlying operating system. In this scenario, the file paths stored in the database must also be updated to reflect the change.

Important

This command simplifies updating rows that contain file paths; it does not copy any files from the source system to the destination if the file paths are changed.

The command connects to a database and collects a list of tables and columns. Modifications are then applied based on the provided filter settings. The command includes a default filter, but it is also possible to export this default configuration and import it back after making modifications.
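
For example, the default configuration can be exported, adjusted, and supplied back on the next run. The following is a sketch only; it assumes the generic -output-filters-path and -input-filters-path parameters are used with this command, the connection parameters are placeholders, and the old and new prefix values referenced by the filter are supplied on the command line as well:

mtxdbt.exe update values -db-type Postgres -db-host my-postgresql-server -db-port 5432 -db-name my-db-name -db-user my-user -db-pass my-password -output-filters-path ./update-filters.json

After editing update-filters.json, run the command again with -input-filters-path ./update-filters.json instead.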

The update filter supports the following event type:

UPDATE_COLUMN

This event is triggered when a column is processed. If a match is found, the values in the matching columns are updated.

The following event definition indicates that some columns must be updated:

{
      "name" : "Update Vision64Database tables.",
      "event" : {
         "name" : "UPDATE_COLUMN",
         "context" : {
            "database" : "${db-name}",
            "table" : {
               "SWInventFilter" : [ "InventPath" ],
               "ReportExports" : [ "ExportPath" ],
               "ReportResults" : [ "ResultPath" ],
               "FileStoreQueue" : [ "FilePath" ],
               "ExternalIntegrations" : [ "URL" ],
               "HWInventFilter" : [ "InventPath" ],
               "DiagnosticLogFiles" : [ "FilePath" ],
               "IMRecords" : [ "RecordPath" ]
            }
         }
      },
      "action" : {
         "name" : "UPDATE_PREFIX"
      },
      "params" : {
         "old_prefix" : "${old-prefix}",
         "new_prefix" : "${new-prefix}"
      }
   }

Currently, UPDATE_PREFIX is the only supported action. It updates row values by replacing any occurrence of old_prefix with new_prefix. Both old_prefix and new_prefix are supplied as parameters on the command line, though their values can also be provided directly in the JSON file. This event definition applies to the Master’s database, so the database name is typically obtained from the command line parameters, but it can also be provided directly.

The following event definition applies to an SQLite database, as this command can also be used to update such databases:

{
      "name" : "Update OSDeployment tables.",
      "event" : {
         "name" : "UPDATE_COLUMN",
         "context" : {
            "database" : "OSDeployment.sqlite3",
            "table" : {
               "Images" : [ "Img_DeployCab" ],
               "Drivers" : [ "Drv_Path" ],
               "Machines" : [ "M_PostInstallScript" ],
               "Environment" : [ "Env_TftpRoot", "Env_DriversRoot" ]
            }
         }
      },
      "action" : {
         "name" : "UPDATE_PREFIX"
      },
      "params" : {
         "old_prefix" : "${old-prefix}",
         "new_prefix" : "${new-prefix}"
      }
   }

For more information, refer to the detailed documentation on command-line parameters. The availability of database drivers may vary depending on the operating system running the command:

  • Windows: Supports SQL Server, Oracle, and PostgreSQL
  • Linux: Supports Oracle and PostgreSQL
  • macOS: Supports PostgreSQL


Command help output parameters


copy database

This command copies a source database content into a destination database. The source and destination databases can be SQL Server, Oracle or PostgreSQL. The command prepares the destination database structure to ensure that all the rows can be copied for each table. Upon execution, the destination database includes all the source database content.

update values

This command updates the text columns for rows with values that match a specified prefix, replacing them with modified values based on a new prefix. It is commonly used to change the prefix of identified columns, such as file paths.

-log STRING

This command defines the log file path. When configured, the command writes the log on the terminal and in the specified log file. When not configured, it will only write the log on the terminal.

-debug

This command enables the debug log messages. This mode also generates messages with an extended format, including the date and time for each record and an identification class for each logged line.

-shorten-values

This command copies each text column using a predefined buffer size.

For example, if the text column's maximum length is 2000 characters or less, that length is used as the buffer size. If it is greater than 2000 characters, the limit is set to 2000 to avoid high memory usage. The command will fail if a source row requires more characters. Use this parameter to shorten the source value and proceed.

-transaction-size INTEGER

This command defines how many rows per transaction can be inserted while populating destination tables. Use caution: depending on the defined parameter value, the command may be slow or use a lot of memory. The default size is 10,000 rows per transaction.

-sql-files-path STRING [mandatory]

This command defines the SQL scripts path. These scripts are required to create the default objects in the dst database. Depending on the database type, a different set of SQL scripts will be used.

-output-filters-path STRING

This command writes the default event filters to the defined file path. The content can then be updated and provided through -input-filters-path on a subsequent call.

-input-filters-path STRING

This command reads the event filters from the provided file path. The default event filters can be obtained using -output-filters-path.

-output-tablespaces-path STRING

This command writes the default tablespace mappings to the defined file path. These mappings define the list of custom tablespaces if the default BCM ones shall not be used, and the list of renaming rules to be applied if the source tablespace names shall be updated.

-input-tablespaces-path STRING

This command reads the default tablespace mappings from the provided file path. The default mappings can be obtained using -output-tablespaces-path.

Source Database Parameters

-db-type STRING [mandatory]

This command defines the type for the src database. Accepted strings are odbcdirect or odbc for SQL Server, postgres for PostgreSQL, oracle for Oracle.

-db-host STRING

This command defines the hostname or IP address for the src database. This value is optional for SQL Server and mandatory otherwise. When set together with a valid SQL Server connection string, the parameters are merged to build the final connection string.

-db-port INTEGER

This command defines the TCP port number to connect to the src database. This value is required for PostgreSQL only.

-db-name STRING [mandatory]

This command defines the name for the src database.

-db-user STRING [mandatory]

This command defines the user to connect to the src database.

-db-pass STRING [mandatory]

This command defines the password to connect to the src database. Caution as values supplied on the command line are visible when enumerating system tasks via dedicated tools.

-db-cstr STRING

This command defines a connection string to connect to the src database. This value can be merged with other database command line parameters to build the final connection string on SQL Server. Use the connection string value to configure specific settings, such as driver and encryption.

Destination Database Parameters

-dst-db-type STRING [mandatory]

This command defines the type for the dst database. Accepted strings are odbcdirect or odbc for SQL Server, postgres for PostgreSQL, oracle for Oracle.

-dst-db-host STRING

This command defines the hostname or IP address for the dst database. This value is optional for SQL Server and mandatory otherwise. When set together with a valid SQL Server connection string, the parameters are merged to build the final connection string.

-dst-db-port INTEGER

This command defines the TCP port number to connect to the dst database. This value is required for PostgreSQL only.

-dst-db-name STRING [mandatory]

This command defines the name for the dst database.

-dst-db-user STRING [mandatory]

This command defines the user to connect to the dst database.

-dst-db-pass STRING [mandatory]

This command defines the password to connect to the dst database. Caution as values supplied on the command line are visible when enumerating system tasks via dedicated tools.

-dst-db-cstr STRING

This command defines a connection string to connect to the dst database. This value can be merged with other database command line parameters to build the final connection string on SQL Server. Use the connection string value to configure specific settings, such as driver and encryption.

Operation Parameters

-clear

This command authorizes the command to perform clear operations on the dst database such as tables, functions or primary key constraints deletion. This is only required when the dst database must be emptied before the scripts can be executed.

-init

This command authorizes the command to initialize the dst database with the scripts dedicated to objects creation. This initialization operation cannot be executed unless the dst database was first cleared, if required.

-update

This command authorizes the command to update the dst database with dynamic objects identified in the src database. Dynamic objects are elements collected from the src database but not defined in the SQL scripts. The database can only be updated after a successful initialization.

-populate

This command authorizes the command to copy the src database rows in the dst tables when required. Only tables having a different row count will be copied from a src table into its equivalent dst table. The copy operation can only be executed if the src and dst tables include compatible columns, which is verified by the update operation.

