Generic - CSV columnar file parser


To collect data by using the Generic - CSV Columnar File Parser ETL, do the following tasks:

I. Complete the preconfiguration tasks

II. Configure the ETL

III. Run the ETL

IV. Verify data collection

Step I. Complete the preconfiguration tasks

Ensure that you complete the tasks that are mentioned in Preparing-to-integrate-CSV-files-with-general-purpose-connectors.

Step II. Configure the ETL

You must configure the ETL to connect to the CSV file source for data collection. ETL configuration includes specifying the basic and optional advanced properties. While configuring the basic properties is sufficient, you can optionally configure the advanced properties for additional customization.

A. Configuring the basic properties

Some of the basic properties display default values. You can modify these values, if required.

To configure the basic properties with the CSV columnar file parser:

  1. In the TrueSight Capacity Optimization console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties.
  3. On the Run Configuration tab, select Generic - CSV Columnar File Parser from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.
    generic_columnar_csv_file_parser_add_etl_page.png
  4. Click the Entity catalog tab, and select one of the following options:
    • Shared Entity Catalog: Select if other ETLs access the same entities that this ETL uses.
      • From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
    • Private Entity Catalog: Select if you want to use this ETL independently.
  5. Click the CSV parser tab, and configure the following properties:

  6. Click the File location tab. Depending on the file location, select one of the following methods to retrieve the CSV file and configure the properties.

    Properties for this file retrieval method:

    • Directory: Path of the directory that contains the CSV file.
    • File list pattern: A regular expression that defines which data files the ETL must read. The default value is (?<!done)$, which indicates that the ETL reads all files except those whose names end with the string "done" (for example, my_file_source.done). An illustrative sketch of this pattern follows step 8 of this procedure.
    • Recurse into subdirs?: Select Yes to inspect the subdirectories of the target directories.
    • After parse operation: Select one of the following options to be performed on the imported CSV file:
      • Do nothing: Do nothing after import.
      • Append suffix to parsed file: Append a specified suffix to the imported CSV file. For example, _done or _imported.
      • Archive parsed file in directory: Archive the parsed file in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived parsed files.
      • Archive bad files in directory: Archive erroneous files in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived bad files.
      Note: You can configure automatic cleaning of parsed files by using the Filesystem cleaner task. For more information, see Configuring the FileSystem Cleaner task.
    • Parsed files suffix: The suffix that is appended to the parsed files. The default value is done.

    Properties for this file retrieval method:

    • Network Share Path: Path of the shared folder. For example, //hostname/sharedfolder.
    • Subdirectory: (Optional) Specify a subdirectory within the mount point.
    • File list pattern: A regular expression that defines which data files the ETL must read. The default value is (?<!done)$, which indicates that the ETL reads all files except those whose names end with the string "done" (for example, my_file_source.done).
    • Recurse into subdirs?: Select Yes to inspect the subdirectories of the target directories.
    • After parse operation: Depending on what to do after the CSV file is imported, select one of the following options:
      • Do nothing: Do nothing after import.
      • Append suffix to parsed file: Append a specified suffix to the imported CSV file. For example, _done or _imported.
      • Archive parsed file in directory: Archive the parsed file in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived parsed files.
      • Archive bad files in directory: Archive erroneous files in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived bad files.
      Note: You can configure automatic cleaning of parsed files by using the Filesystem cleaner task. For more information, see Configuring the FileSystem Cleaner task.
    • Username: Enter the user name to connect to the file location server.
    • Password required: Select Yes or No.
    • Password: Enter the password to connect to the file location server. Applicable only if you selected Yes for Password required.

    Properties for this file retrieval method:

    • Directory: Path of the directory that contains the CSV file.
    • File list pattern: A regular expression that defines which data files the ETL must read. The default value is (?<!done)$, which indicates that the ETL reads all files except those whose names end with the string "done" (for example, my_file_source.done).
    • Recurse into subdirs?: Select Yes to inspect the subdirectories of the target directories.
    • After parse operation: Depending on what to do after the CSV file is imported, select one of the following options:
      • Do nothing: Do nothing after import.
      • Append suffix to parsed file: Append a specified suffix to the imported CSV file. For example, _done or _imported.
      • Archive parsed file in directory: Archive the parsed file in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived parsed files.
      • Archive bad files in directory: Archive erroneous files in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived bad files.
      Note: You can configure automatic cleaning of parsed files by using the Filesystem cleaner task. For more information, see Configuring the FileSystem Cleaner task.
    • Remote host: Enter the host name or IP address of the remote host to connect to.
    • Username: Enter the user name to connect to the file location server.
    • Password required: Select Yes or No.
    • Password: Enter the password to connect to the file location server. Applicable only if you selected Yes for Password required.

    Properties for this file retrieval method:

    • Directory: Path of the directory that contains the CSV file.
    • Files to copy (with wildcards): Specify the files that you want to copy. Wildcards are supported.
    • File list pattern: A regular expression that defines which data files the ETL must read. The default value is (?<!done)$, which indicates that the ETL reads all files except those whose names end with the string "done" (for example, my_file_source.done).
    • Recurse into subdirs?: Select Yes to inspect the subdirectories of the target directories.
    • After parse operation: Depending on what to do after the CSV file is imported, select one of the following options:
      • Do nothing: Do nothing after import.
      • Append suffix to parsed file: Append a specified suffix to the imported CSV file. For example, _done or _imported.
      • Archive parsed file in directory: Archive the parsed file in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived parsed files.
      • Archive bad files in directory: Archive erroneous files in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived bad files.
      Note: You can configure automatic cleaning of parsed files by using the Filesystem cleaner task. For more information, see Configuring the FileSystem Cleaner task.
    • Remote host: Enter the host name or IP address of the remote host to connect to.
    • Username: Enter the user name to connect to the file location server.
    • Password required: Select Yes or No.
    • Password: Enter the password to connect to the file location server. Applicable only if you selected Yes for Password required.

    Properties for this file retrieval method:

    • Directory: Path of the directory that contains the CSV file.
    • File list pattern: A regular expression that defines which data files the ETL must read. The default value is (?<!done)$, which indicates that the ETL reads all files except those whose names end with the string "done" (for example, my_file_source.done).
    • Recurse into subdirs?: Select Yes to inspect the subdirectories of the target directories.
    • After parse operation: Depending on what to do after the CSV file is imported, select one of the following options:
      • Do nothing: Do nothing after import.
      • Append suffix to parsed file: Append a specified suffix to the imported CSV file. For example, _done or _imported.
      • Archive parsed file in directory: Archive the parsed file in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived parsed files.
      • Archive bad files in directory: Archive erroneous files in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived bad files.
      Note: You can configure automatic cleaning of parsed files by using the Filesystem cleaner task. For more information, see Configuring the FileSystem Cleaner task.
    • Remote host: Enter the host name or IP address of the remote host to connect to.
    • Username: Enter the user name to connect to the file location server.
    • Password required: Select Yes or No.
    • Password: Enter the password to connect to the file location server. Applicable only if you selected Yes for Password required.

    The following image shows the sample configuration values for the basic properties.

    generic_columnar_csv_file_parser_config_basic.png

  7. (Optional) Override the default values of properties in the following tabs:

    Run configuration
    • Module selection: Select one of the following options:
      • Based on datasource: This is the default selection.
      • Based on Open ETL template: Select only if you want to collect data that is not supported by TrueSight Capacity Optimization.
    • Module description: A short description of the ETL module.
    • Execute in simulation mode: By default, Yes is selected and the ETL runs in simulation mode, which validates connectivity with the data source and ensures that the ETL does not have any configuration issues. In simulation mode, the ETL does not load data into the database. This option is useful when you want to test a new ETL task. To run the ETL in production mode, select No. BMC recommends that you run the ETL in simulation mode after configuring it, and then run it in production mode.
    • Datasets: Specify the datasets that you want to add to the ETL run configuration. The ETL collects data for the metrics that are associated with these datasets.
      1. Click Edit.
      2. Select one (click) or more (Shift+click) datasets from the Available datasets list, and click >> to move them to the Selected datasets list.
      3. Click Apply.
      The ETL collects data for the metrics associated with the datasets in the Selected datasets list.
    Object relationships
    • After import: Specify the domain where you want to add the entities created by the ETL. You can select an existing domain or create a new one. Select one of the following options:
      • leave all new entities in 'Newly Discovered': This option is selected by default.
      • move all new entities in a new Domain:
        • New domain: Select a parent domain, and specify a name for your new domain.
      • move all new entities in an existing Domain: Select an existing domain from the Domain list. If the selected domain is already used by other hierarchy rules, select one of the following Domain conflict options:
        • Enrich domain tree: Select to create a new independent hierarchy rule for adding a new set of entities, relations, or both that are not defined by other ETLs.
        • ETL Migration: Select if the new ETL uses the same set of entities, relations, or both that are already defined by other ETLs.
    ETL task properties
    • Task group: Select a task group to classify the ETL.
    • Running on scheduler: Select one of the following schedulers for running the ETL:
      • Primary Scheduler: Runs on the Application Server.
      • Generic Scheduler: Runs on a separate computer.
      • Remote: Runs on remote computers.
    • Maximum execution time before warning: Indicates the duration (in hours, minutes, or days) for which the ETL must run before warnings or alerts, if any, are generated.
    • Frequency: Select one of the following frequencies to run the ETL:
      • Predefined: This is the default selection. Select a daily, weekly, or monthly frequency, and then select a time to start the ETL run.
      • Custom: Specify a custom frequency, select an appropriate unit of time, and then specify a day and a time to start the ETL run.

  8. Click Save.
    The ETL tasks page shows the details of the newly configured Generic - CSV columnar file parser.
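The following Python sketch illustrates how the File list pattern and Parsed files suffix properties from step 6 interact. It is only an illustration under assumed values (the directory path, file names, and the dot-separated suffix are hypothetical), not the ETL's actual implementation.

```python
import os
import re

# Hypothetical values mirroring the File location properties described in step 6.
DIRECTORY = "/data/csv_drop"         # Directory (example path)
FILE_LIST_PATTERN = r"(?<!done)$"    # File list pattern (default): skip names ending in "done"
PARSED_SUFFIX = "done"               # Parsed files suffix (default)

pattern = re.compile(FILE_LIST_PATTERN)

for name in sorted(os.listdir(DIRECTORY)):
    path = os.path.join(DIRECTORY, name)
    if not os.path.isfile(path):
        continue
    if pattern.search(name) is None:
        # The negative lookbehind fails for names that end in "done"
        # (for example, my_file_source.done), so such files are skipped.
        continue
    print("would parse:", path)
    # "Append suffix to parsed file" behavior: mark the file so that the
    # default pattern excludes it on the next run.
    os.rename(path, f"{path}.{PARSED_SUFFIX}")
```

Because a renamed file now ends with the string done, the default File list pattern excludes it from subsequent runs.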


(Optional) B. Configuring the advanced properties

You can configure the advanced properties to change the way the ETL works.

To configure the advanced properties:

  1. On the Add ETL page, click Advanced.
  2. Configure the following properties:

    Run configuration
    • Run configuration name: Specify the name that you want to assign to this ETL task configuration. The default configuration name is displayed. You can use this name to differentiate between the run configuration settings of ETL tasks.
    • Deploy status: Select the deploy status for the ETL task. For example, you can initially select Test and change it to Production after verifying that the ETL run results are as expected.
    • Log level: Specify the level of detail that you want to include in the ETL log file. Select one of the following options:
      • 1 - Light: Select to add the bare minimum activity logs to the log file.
      • 5 - Medium: Select to add medium-detailed activity logs to the log file.
      • 10 - Verbose: Select to add detailed activity logs to the log file.
      Use log level 5 as a general practice. You can select log level 10 for debugging and troubleshooting purposes.
    • Datasets: Specify the datasets that you want to add to the ETL run configuration. The ETL collects data for the metrics that are associated with these datasets.
      1. Click Edit.
      2. Select one (click) or more (Shift+click) datasets from the Available datasets list, and click >> to move them to the Selected datasets list.
      3. Click Apply.
      The ETL collects data for the metrics associated with the datasets in the Selected datasets list.
    • Default locale: Specify the default locale information.
    Collection level
    • Metric profile selection: Select the metric profile that the ETL must use. The ETL collects data for the group of metrics that is defined by the selected metric profile. Select one of the following options:
      • Use Global metric profile: This is selected by default. All the out-of-the-box ETLs use this profile.
      • Select a custom metric profile: Select the custom profile that you want to use from the Custom metric profile list. This list displays all the custom profiles that you have created.
      For more information about metric profiles, see Adding-and-managing-metric-profiles.
    • Levels up to: Specify the metric level that defines the number of metrics that can be imported into the database. The load on the database increases or decreases depending on the selected metric level. To learn more about metric levels, see Aging Class mapping.
    Additional properties
    • List of properties: Specify additional properties for the ETL that act as user inputs during the run. You can specify these values now, or you can do so later by accessing the "You can manually edit ETL properties from this page" link that is displayed for the ETL in the view mode.
      1. Click Add.
      2. In the etl.additional.prop.n field, specify an additional property.
      3. Click Apply.
      Repeat these steps to add more properties.
    Loader configuration
    • Empty dataset behavior: Specify the action for the loader if it encounters an empty dataset:
      • Warn: Generate a warning about loading an empty dataset.
      • Ignore: Ignore the empty dataset and continue parsing.
    • ETL log file name: The name of the file that contains the ETL run log. The default value is: %BASE/log/%AYEAR%AMONTH%ADAY%AHOUR%MINUTE%TASKID
    • Maximum number of rows for CSV output: A numeric value to limit the size of the output files.
    • CSV loader output file name: The name of the file that is generated by the CSV loader. The default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID
    • Capacity Optimization loader output file name: The name of the file that is generated by the TrueSight Capacity Optimization loader. The default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID
    • Detail mode: Specify whether you want to collect raw data in addition to the standard data. Select one of the following options:
      • Standard: Data is stored in the database in different tables at the following time granularities: Detail (configurable, by default 5 minutes), Hourly, Daily, and Monthly.
      • Raw also: Data is stored in the database in different tables at the following time granularities: Raw (as available from the original data source), Detail (configurable, by default 5 minutes), Hourly, Daily, and Monthly.
      • Raw only: Data is stored in the database in a table only at Raw granularity (as available from the original data source).
      For more information, see Accessing-data-using-public-views and Sizing-and-scalability-considerations.
    • Remove domain suffix from datasource name (Only for systems): Select True to remove the domain from the data source name. For example, server.domain.com is saved as server. The default selection is False.
    • Leave domain suffix to system name (Only for systems): Select True to keep the domain in the system name. For example, server.domain.com is saved as is. The default selection is False.
    • Update grouping object definition (Only for systems): Select True if you want the ETL to update the grouping object definition for a metric that is loaded by the ETL. The default selection is False.
    • Skip entity creation (Only for ETL tasks sharing lookup with other tasks): Select True if you do not want this ETL to create new entities; the ETL discards data from its data source for entities that are not found in Capacity Optimization and relies on one of the other ETLs that share the lookup to create them. The default selection is False.
    Scheduling options
    • Hour mask: Specify a value to run the task only during particular hours within a day. For example, 0 – 23 or 1, 3, 5 – 12. (A mask-expansion sketch follows this procedure.)
    • Day of week mask: Select the days so that the task can be run only on the selected days of the week. To avoid setting this filter, do not select any option for this field.
    • Day of month mask: Specify a value to run the task only on the selected days of a month. For example, 5, 9, 18, 27 – 31.
    • Apply mask validation: Select False to temporarily turn off the mask validation without removing any values. The default selection is True.
    • Execute after time: Specify a value in the hours:minutes format (for example, 05:00 or 16:00) to wait before running the task. The task run begins only after the specified time has elapsed.
    • Enqueueable: Specify whether you want to ignore the next run command or run it after the current task. Select one of the following options:
      • False: Ignores the next run command when a particular task is already running. This is the default selection.
      • True: Starts the next run command immediately after the current running task is completed.
  3. Click Save.
    The ETL tasks page shows the details of the newly configured CSV columnar file parser ETL.
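The Hour mask and Day of month mask values in the Scheduling options table accept comma-separated numbers and ranges, such as 1, 3, 5 – 12. The following Python sketch shows one way such a mask could be expanded into the set of allowed values; it is illustrative only and is not the product's parser.

```python
def expand_mask(mask: str) -> set[int]:
    """Expand a mask such as "0 - 23" or "1, 3, 5 - 12" into a set of integers."""
    values: set[int] = set()
    # Accept both a plain hyphen and the en dash used in this documentation.
    for part in mask.replace("–", "-").split(","):
        part = part.strip()
        if not part:
            continue
        if "-" in part:
            low, high = (int(p) for p in part.split("-", 1))
            values.update(range(low, high + 1))
        else:
            values.add(int(part))
    return values

# An hour mask of "1, 3, 5 - 12" allows the task to run only during those hours of the day.
print(sorted(expand_mask("1, 3, 5 - 12")))  # [1, 3, 5, 6, 7, 8, 9, 10, 11, 12]
print(len(expand_mask("0 - 23")))           # 24 (every hour allowed)
```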

Step III. Run the ETL

After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:

A. Simulation mode: Validates the connection to the data source but does not collect data. Use this mode when you run the ETL for the first time or after you make changes to the ETL configuration.

B. Production mode: Collects data from the data source.

A. Running the ETL in the simulation mode

To run the ETL in the simulation mode:

  1. In the TrueSight Capacity Optimization console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click the ETL. The ETL details are displayed.
  3. In the Run configurations table, click Edit to modify the ETL configuration settings.
  4. On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
  5. Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
  6. On the ETL tasks page, check the ETL run status in the Last exit column.
    A status of OK indicates that the ETL ran without any error, and you are ready to run the ETL in the production mode.
  7. If the ETL run status is Warning, Error, or Failed:
    1. On the ETL tasks page, click the edit icon in the last column of the ETL name row.
    2. Check the log and reconfigure the ETL if required.
    3. Run the ETL again.
    4. Repeat these steps until the ETL run status changes to OK.

B. Running the ETL in the production mode

You can run the ETL manually when required or schedule it to run at a specified time.

Running the ETL manually

  1. On the ETL tasks page, click the ETL. The ETL details are displayed.
  2. In the Run configurations table, click Edit to modify the ETL configuration settings. The Edit run configuration page is displayed.
  3. On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
  4. To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
    When the ETL is run, it collects data from the source and transfers it to the TrueSight Capacity Optimization database.

Scheduling the ETL run

By default, the ETL is scheduled to run daily. You can customize this schedule by changing the frequency and period of running the ETL.

To configure the ETL run schedule:

  1. On the ETL tasks page, click the ETL, and click Edit. The ETL details are displayed.
  2. On the Edit task page, do the following, and click Save:
    • Specify a unique name and description for the ETL task.
    • In the Maximum execution time before warning field, specify the duration for which the ETL must run before generating warnings or alerts, if any.
    • Select a predefined or custom frequency for starting the ETL run. The default selection is Predefined.
    • Select the task group and the scheduler to which you want to assign the ETL task.
  3. Click Schedule. A message confirming the scheduling job submission is displayed.
    When the ETL runs as scheduled, it collects data from the source and transfers it to the TrueSight Capacity Optimization database.

Step IV. Verify data collection

Verify that the ETL ran successfully and check whether the CSV file data is refreshed in the Workspace.

To verify whether the ETL ran successfully:

  1. In the TrueSight Capacity Optimization console, click Administration > ETL & System Tasks > ETL tasks.
  2. In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.

To verify that the CSV data is refreshed:

  1. In the TrueSight Capacity Optimization console, click Workspace.
  2. Expand CSV > Systems.
  3. In the left pane, verify that the hierarchy displays the CSV instances.
  4. Click an instance, and click the Metrics tab in the right pane.
  5. Check if the Last Activity column in the Configuration metrics and Performance metrics tables displays the current date.

The following image shows sample metrics data.

csv_generic_collected_data.png

Related topics

Using-ETL-datasets

Generic-CSV-file-parser

Developing-custom-ETLs

Dataset-reference-for-ETL-tasks

Horizontal and Vertical datasets

Viewing-datasets-and-metrics-by-dataset-and-ETL-module

Importing data from custom sources through data formats

Understanding-entity-identification-and-lookup

 
