Migrating PostgreSQL data
If you are using TrueSight Capacity Optimization with a PostgreSQL database, use the instructions in this section to migrate your data from TrueSight Capacity Optimization to BMC Helix Capacity Optimization. BMC provides an out-of-the-box migration toolkit for this purpose. The toolkit includes scripts to perform the database dump and restore, command-line examples, and a custom ETL project to perform data gap recovery.
Before you start, review the key considerations. For details, see Migrating-from-TrueSight-Capacity-Optimization.
Prerequisites
Ensure that the following requirements are met:
- You have administrator privileges for accessing TrueSight Capacity Optimization.
- TrueSight Capacity Optimization is upgraded to version 11.3.01 or later. To migrate data from a version earlier than 11.3.01, you must first upgrade to 11.3.01 and then migrate the data to BMC Helix Capacity Optimization.
- TrueSight Capacity Optimization is using PostgreSQL 9.4 or later.
- You have downloaded the PostgreSQL migration toolkit from the EPD website.
Click the TrueSight Capacity Optimization 20.02 EPD link. To access the EPD website, you must provide your BMC Support credentials, and you must be licensed for the product to access the download page. You may be prompted to complete the Export Compliance Form.
- On the Products tab, download the Helix Optimize Migration Toolkit file.
Note: Use the same toolkit whether you are migrating from TrueSight Capacity Optimization version 11.3.01 or 11.5.
- Create a directory and copy the downloaded file to it.
To migrate PostgreSQL data
1. Create TrueSight Capacity Optimization database dump
- Stop all TrueSight Capacity Optimization services and ETL data collection. For details on stopping the services, see System-level administration overview.
- Based on the database and repository size, ensure that sufficient space is available at the dump location.
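A quick way to gauge this is to compare the free space at the dump location with the database size. The snippet below is a sketch: the dump directory is a placeholder for your environment, and the commented psql query (with example user and database names) must be run on the database server.

```shell
# Sketch: check free space at the intended dump location.
# DUMP_DIR is a hypothetical path -- replace it with your dump location.
DUMP_DIR="${DUMP_DIR:-/tmp}"
AVAIL_KB=$(df -P "$DUMP_DIR" | awk 'NR==2 {print $4}')
echo "Free space at ${DUMP_DIR}: ${AVAIL_KB} KB"

# On the database server, the database size can be checked with a query
# such as the following (uncomment and adjust the user/database names):
# psql -U BCO_OWN -d BCO_OWN -c "SELECT pg_size_pretty(pg_database_size(current_database()));"
```

As a rule of thumb, leave headroom beyond the reported database size, because the repository and key backups are written to the same location.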
- Create a database dump.
- Copy the migration toolkit to a directory on the TrueSight Capacity Optimization Database Server that runs the PostgreSQL database.
- Log in to the Database Server.
- Ensure that the database utilities are included in the user's PATH variable.
- If the database binaries are installed in the /usr/pgsql-11/bin/ folder, run the following command:
export PATH=/usr/pgsql-11/bin/:$PATH
- Verify the database version:
pg_dump --version
- Extract the contents of the toolkit file.
- At the shell prompt, change to the directory where you extracted the toolkit files and navigate to the database-specific folder. For example, cd database/PostgreSQL.
Run the following command:
bash dump/db_dump.sh -h <database FQDN hostname> -u <CO DB user name. For example, BCO_OWN> -p <CO DB user password> -s <DB name. For example, BCO_OWN> -P <DB port - optional - default: 5432> -l <path to store dump>
The db_dump.sh utility displays a message with the parameters. Note down these parameters; they are used to configure the Data Gap Recovery ETL. For details, see Configuring the Data Gap Recovery ETL.
You can also see this message in the db_dump.log file that is generated in the same folder.
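As an illustration, an invocation with hypothetical values could look like the following. All values below (host, user, password, and paths) are placeholders, not values from your environment; the snippet only assembles and prints the command so you can review it before running it.

```shell
# Hypothetical values -- substitute your own before running the real command.
DB_HOST="tsco-db.example.com"   # database FQDN (placeholder)
DB_USER="BCO_OWN"               # CO DB user name
DB_PASS="changeit"              # CO DB user password (placeholder)
DB_NAME="BCO_OWN"               # DB name
DB_PORT="5432"                  # optional; 5432 is the default
DUMP_PATH="/backup/tsco"        # path to store the dump (placeholder)

CMD="bash dump/db_dump.sh -h $DB_HOST -u $DB_USER -p $DB_PASS -s $DB_NAME -P $DB_PORT -l $DUMP_PATH"
echo "$CMD"   # review the command, then run it from the database/PostgreSQL folder
```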
- Create a dump for repository content and encryption keys.
- Copy the migration toolkit archive to the host computer on which the Application Server is running and log in to it.
- At the shell prompt, change to the directory where you extracted the toolkit files and navigate to the repository folder.
Run the following command:
bash repository/repository_and_keys_backup.sh -c <app server installation directory> -r <CO repository directory> -l <path to store dump>
When you run the db_dump.sh and repository_and_keys_backup.sh commands, the following dump files are generated at the specified path:
- db_dump.tar.gz
- repository_backup.tar.gz
- certificates_backup.tar.gz
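For illustration, a repository_and_keys_backup.sh invocation with placeholder paths might look like the following. The directories below are examples only, not product defaults; as before, the snippet only prints the assembled command for review.

```shell
# Placeholder paths -- adjust to your Application Server layout.
APP_DIR="/opt/bmc/TSCO"               # app server installation directory (example)
REPO_DIR="/opt/bmc/TSCO/repository"   # CO repository directory (example)
DUMP_PATH="/backup/tsco"              # path to store the dump (example)

CMD="bash repository/repository_and_keys_backup.sh -c $APP_DIR -r $REPO_DIR -l $DUMP_PATH"
echo "$CMD"   # review the command, then run it from the extracted toolkit folder
```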
- Restart all TrueSight Capacity Optimization services and ETL data collection.
Note: After you start the migration activity, avoid creating any new artifacts in TrueSight Capacity Optimization. Any artifacts that you create during the migration are not included in the dump and will be lost in BMC Helix Capacity Optimization.
2. Create TrueSight Capacity Optimization historical database dump
- Log in to the Database Server.
- Ensure that the database utilities are included in the user's PATH variable.
- If the database binaries are installed in the /usr/pgsql-11/bin/ folder, run the following command:
export PATH=/usr/pgsql-11/bin/:$PATH
- Verify the database version:
pg_dump --version
- Create a historical database dump.
- Extract the contents of the toolkit file.
- At the shell prompt, change to the directory where you extracted the toolkit files and navigate to the database-specific folder. For example, cd database/PostgreSQL.
Run the following command:
bash dump/history_db_dump.sh -h <db host> -u <CO DB user name. For example, BCO_OWN> -p <CO DB user password> -s <DB name. For example, BCO_OWN> -l <path to store historical dump> -P <DB port - optional> -a <max aging in days - optional>
The history_db_dump.sh command generates the history_db_dump.tar.gz file at the specified path.
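An illustrative invocation with placeholder values follows; the host, credentials, path, and aging window below are assumptions for the example, and the snippet only prints the assembled command for review.

```shell
# Placeholder values -- adjust for your environment before running.
DB_HOST="tsco-db.example.com"     # database host (placeholder)
DB_USER="BCO_OWN"                 # CO DB user name
DB_PASS="changeit"                # CO DB user password (placeholder)
DB_NAME="BCO_OWN"                 # DB name
HIST_PATH="/backup/tsco-history"  # path to store the historical dump (placeholder)
MAX_AGE_DAYS="365"                # optional: limit how far back the dump reaches (example)

CMD="bash dump/history_db_dump.sh -h $DB_HOST -u $DB_USER -p $DB_PASS -s $DB_NAME -l $HIST_PATH -a $MAX_AGE_DAYS"
echo "$CMD"   # review the command, then run it from the database/PostgreSQL folder
```

Limiting the aging window with -a can substantially reduce the dump size if you do not need the full history.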
3. Share the database dump with the BMC team
To request a database restore in BMC Helix Capacity Optimization, submit a request from your support portal. For details, see Database backups and restores.
Transfer the following dump files to the BMC team:
- db_dump.tar.gz
- repository_backup.tar.gz
- certificates_backup.tar.gz
- history_db_dump.tar.gz
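Before handing the files over, it can be useful to record checksums so that the receiving team can verify the transfer was not corrupted. A sketch, assuming the dumps were collected into a single directory (DUMP_DIR is a placeholder; missing files are reported rather than causing a failure):

```shell
# Sketch: checksum the expected dump set; report any file that is absent.
DUMP_DIR="${DUMP_DIR:-.}"
MISSING=0
for f in db_dump.tar.gz repository_backup.tar.gz certificates_backup.tar.gz history_db_dump.tar.gz; do
  if [ -f "$DUMP_DIR/$f" ]; then
    sha256sum "$DUMP_DIR/$f"
  else
    echo "missing: $f" >&2
    MISSING=$((MISSING + 1))
  fi
done
echo "$MISSING file(s) missing"
```

Share the resulting checksums along with the dump files so the restore team can confirm integrity before starting.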
4. Restore the database and perform activation tasks for BMC Helix Capacity Optimization
The BMC team will perform the database restore and activation process.
After you get the confirmation from the BMC team, proceed to the next step.
5. Configure users and roles
Some default user groups and roles are available out of the box when you subscribe to BMC Helix Capacity Optimization. You can start using the product with these users. Additionally, you can configure your identity provider to set up more users and user groups, and assign specific permissions to them. BMC Helix Capacity Optimization uses Helix Single Sign-On to provide single sign-on authentication for users. For details, see Managing users and access control.
6. Download and install Remote ETL Engines
Download and install the Remote ETL Engine and related components that are needed to collect data from your environment. For details, see Installing-remote-components-to-collect-on-premises-data.
7. Validate data in BMC Helix Capacity Optimization
After you get the confirmation from the BMC team about the successful data restore, validate the restored data. Check all ETL, data mart, and view customizations, and perform a sanity check of the environment.
8. (Optional) Recover the delta data
Delta data can be any data changes that occur in the TrueSight Capacity Optimization environment during the data migration and BMC Helix Capacity Optimization activation period. If required, you can recover this data gap between the TrueSight Capacity Optimization and BMC Helix Capacity Optimization instances. You can take one of the following actions:
- Ignore the delta data if you do not need it.
- Recover delta data for data sources that support historical recovery.
- For this, ensure that the last counters property is set in the ETLs. When you rerun such ETLs on the BMC Helix Remote ETL Engines, they start collecting data from the last data import before the dump was taken, ensuring that the data gap is recovered.
- Your data sources must have sufficient data retention for this approach to work.
- Fully recover delta data using the Data Gap Recovery ETL provided with the migration toolkit.
- This custom ETL is written in Java and provided as an EDK project in the migration toolkit. It is located in the <migration_toolkit_archive>/etl/dataGapRecoveryETL folder. If required, you can customize the code.
Using this ETL, you can:
- Extract all time series samples from the TrueSight Capacity Optimization database at an hourly resolution for all systems and business drivers, up to the Series per batch value that you configure in the ETL.
- Extract samples in batches, so that the TrueSight Capacity Optimization database is not overloaded.
- Limit data extraction per run, providing a way to control the data import into BMC Helix Capacity Optimization.
Activate and configure the Data Gap Recovery ETL:
- Activate this ETL on the Remote ETL Engine in BMC Helix Capacity Optimization using the ETL Development kit. For details, see Creating-and-managing-custom-ETLs.
After activation, this ETL module is displayed in the list of ETL modules that are available to administrators when creating a new ETL instance.
- Configure the ETL:
- Log in to the BMC Helix Capacity Optimization console.
- Navigate to Administration > ETL & System Tasks, and select ETL tasks.
- On the ETL tasks page, click Add > Add ETL.
- On the Run configuration tab, select the Data Gap Recovery ETL module that is displayed in the ETL Module list.
- On the Entity catalog tab, select Private Entity Catalog.
- On the Custom properties tab, specify the parameters that were generated when you ran the db_dump.sh utility in step 1. Also specify Series per batch and Max samples per run.
- On the Connection parameters tab, specify the TrueSight Capacity Optimization database details.
- Click Save. The ETL tasks page shows the details of the newly configured ETL.
- Click the ETL and click Run active configuration. A confirmation message about the ETL run job submission is displayed.
Note: BMC Helix Capacity Optimization is configured with a hard data import limit per day. Schedule the data recovery so that you do not reach this limit, or have the limit reconfigured based on your needs.