Adding and managing data movers

A Data Mover is a task responsible for moving data from a source that, in general, does not comply with the standard BMC Helix Capacity Optimization data formats (see Importing data from custom sources through data formats). Moved data is not collected into the BMC Helix Capacity Optimization Data Warehouse, but is saved on the file system in CSV or XML format.

For example, by using a Data Mover you can extract any type of data from a database and:

  • Move it to files stored in the /repository folder.
  • Move it to external locations (data feeds), for example via FTP or using Windows shares.

Examples of data collected by a Data Mover include:

  • Results from a custom query, whose number of columns might vary with time.
  • Textual information, such as system diagnostics.

You can add a new Data Mover by navigating to Helix Capacity Optimization Console > Administration > ETL & SYSTEM TASKS, and then selecting the Add data mover option from the Add drop-down menu at the top of the ETL tasks page. You can then configure it by manually adding properties.

Overview of Query Data Mover

A Query Data Mover is a type of Data Mover task used to:

  • Open a connection to a specified database.
  • Launch one or more queries.
  • Save the results into XML files (one file for each query).

To configure the database connection

By default, a connection to the BMC Helix Capacity Optimization internal database is opened, because the primary purpose of a Data Mover is to automatically export data from BMC Helix Capacity Optimization to an external location.

To connect to other data sources, modify the following properties:

Additional information

If the database you are connecting to is not Oracle, the JDBC driver .jar archive file must be copied to the /lib directory of the scheduler installation directory. Also, the format of the datasource.url parameter depends heavily on the JDBC driver. For more information, refer to the driver's official documentation.
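As an illustration only, typical datasource.url values follow the URL syntax documented by each JDBC driver; the host names, ports, and database identifiers below are placeholders, not values from this product:

```properties
# Oracle thin driver (placeholder host, port, and SID)
datasource.url = jdbc:oracle:thin:@dbhost:1521:ORCL

# PostgreSQL driver (placeholder host, port, and database name)
datasource.url = jdbc:postgresql://dbhost:5432/mydb
```

Check the connection URL chapter of your driver's documentation for the exact syntax it accepts.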

To configure queries

Each query is defined by a set of three properties:

  • <queryname>.query: SQL code of the query.
  • <queryname>.description: Description of the query.
  • <queryname>.filename: Name of the file in which results will be saved.

All <queryname> values must be specified as a semicolon-separated list in the property.

All files will be created in the base directory specified by the property. File names can contain macros, for example, %{DATE(yyyy-MM-dd)}.

Configuration example:

test_check_01.description=TEST - query 1
test_check_01.query=SELECT trunc(sysdate) as ts FROM dual
test_check_02.description=TEST - query 2
test_check_02.query=SELECT count(*) as task_number from task

The format.type property can be set to either xml or csv.
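Building on the configuration example above, the corresponding filename and format properties might look like the following sketch; the file names are illustrative, and the %{DATE(...)} macro is the one described earlier:

```properties
# Illustrative output file names; the %{DATE(yyyy-MM-dd)} macro
# expands to the current date at run time
test_check_01.filename = ts_%{DATE(yyyy-MM-dd)}.xml
test_check_02.filename = task_count_%{DATE(yyyy-MM-dd)}.xml

# Output format for the result files (xml or csv)
format.type = xml
```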

To send output files to external locations

After the execution of a Data Mover, you can choose to upload the resulting files to external locations via FTP, CIFS (Windows shares), SCP, SFTP, or WebDAV. Configuring all protocols causes the upload to be executed multiple times, once per protocol.

The following code is an example of how you can configure the upload using different services:

post.1.action.type = UPLOAD
post.1.uploader.number.files = 2
post.1.uploader.file.1.location = /opt/cpit/repository/test.xml
post.1.uploader.file.2.location = /opt/cpit/repository/file.dbd

post.1.uploader.ftp.username = user
post.1.uploader.ftp.password = pass
post.1.uploader.ftp.host = host
post.1.uploader.ftp.remote.folder = [optional] NEP_SHARE/

post.1.uploader.cifs.username = user
post.1.uploader.cifs.password = pass
post.1.uploader.cifs.workgroup = WORKGROUP
post.1.uploader.cifs.host = nepsrv22
post.1.uploader.cifs.remote.folder = storage/

post.1.uploader.scp.username = user
post.1.uploader.scp.password = pass
post.1.uploader.scp.remote.folder = /root/TEST

post.1.uploader.webdav.username = user
post.1.uploader.webdav.password = pass
post.1.uploader.webdav.remote.folder = webdav/

post.1.uploader.sftp.username = user
post.1.uploader.sftp.password = pass
post.1.uploader.sftp.remote.folder = /home/cpit
