Generic - Events CSV parser


Use the Generic - Events CSV parser to import events by creating a CSV file. With this ETL, you can import events without writing custom code.

Collecting data by using the Generic - Events CSV parser ETL

To collect data by using the Generic - Events CSV parser ETL, do the following tasks:

Step I. Configure the ETL

You must configure the ETL to connect to the file parser for data collection. ETL configuration includes specifying the basic and optional advanced properties. While configuring the basic properties is sufficient, you can optionally configure the advanced properties for additional customization.

To configure the basic properties of the ETL

Some of the basic properties display default values. You can modify these values if required.

To configure the basic properties:

  1. Navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties. You must configure properties in the following tabs: Run configuration, Entity catalog, and File location.
  3. On the Run Configuration tab, select Generic - Events CSV Parser from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.
  4. Click the Entity catalog tab, and select one of the following options:
    • Shared Entity Catalog: Select if the other ETLs access the same entities that are used by this ETL.
      • From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
    • Private Entity Catalog: Select if you want to use this ETL independently.
  5. Click the File location tab. Depending on the file location, select one of the following methods to retrieve the CSV file and configure the properties.

    Local directory

    Property

    Description

    Directory

    Path of the directory that contains the CSV file.

    File list pattern

    A regular expression that defines which data files the ETL must read. The default value is (?<!done)$, which indicates that the ETL reads all files except those whose names end with the string "done" (for example, my_file_source.done).

    Recurse into subdirs?

    Select Yes to inspect the subdirectories of the target directories.

    After parse operation

    Select one of the following actions to perform on the imported CSV file:

    • Do nothing: Do nothing after import.
    • Append suffix to parsed file: Append a specified suffix to the imported CSV file. For example, _done or _imported.
    • Archive parsed file in directory: Archive the parsed file in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived parsed files.
    • Archive bad files in directory: Archive erroneous files in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived bad files.

    Note: You can configure automatic cleaning of parsed files using the Filesystem cleaner task. For more information, see Configuring-the-FileSystem-Cleaner-task.

    Parsed files suffix

    The suffix that is appended to parsed files. The default is .done.
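The file list pattern and the parsed-files suffix work together: once a parsed file is renamed with the .done suffix, the default pattern no longer matches it, so the next ETL run skips it. A minimal Python sketch of that interplay (the function names are illustrative, not part of the product):

```python
import re
from pathlib import Path

# Default "File list pattern": a negative lookbehind that matches any
# filename that does NOT end with the string "done".
FILE_LIST_PATTERN = re.compile(r"(?<!done)$")

def should_parse(filename: str) -> bool:
    """True if the ETL would pick up this file under the default pattern."""
    return FILE_LIST_PATTERN.search(filename) is not None

def mark_parsed(csv_path: Path, suffix: str = ".done") -> Path:
    """Sketch of the 'Append suffix to parsed file' option: renames
    events.csv to events.csv.done so a later run skips it."""
    dst = csv_path.with_name(csv_path.name + suffix)
    csv_path.rename(dst)
    return dst
```

For example, should_parse("events_2022.csv") is True, while should_parse("my_file_source.done") is False.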

    Windows share

    Property

    Description

    Network Share Path

    Path of the shared folder. For example, //hostname/sharedfolder.

    Subdirectory

    (Optional) Specify a subdirectory within a mount point.

    File list pattern

    A regular expression that defines which data files the ETL must read. The default value is (?<!done)$, which indicates that the ETL reads all files except those whose names end with the string "done" (for example, my_file_source.done).

    Recurse into subdirs?

    Select Yes to inspect the subdirectories of the target directories.

    After parse operation

    Depending on what to do after the CSV file is imported, select one of the following options:

    • Do nothing: Do nothing after import.
    • Append suffix to parsed file: Append a specified suffix to the imported CSV file. For example, _done or _imported.
    • Archive parsed file in directory: Archive the parsed file in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived parsed files.
    • Archive bad files in directory: Archive erroneous files in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived bad files.

    Note: You can configure automatic cleaning of parsed files using the Filesystem cleaner task. For details, see Configuring-the-FileSystem-Cleaner-task.

    Parsed files suffix

    The suffix that is appended to parsed files. The default is .done.

    Username

    Enter the user name to connect to the file location server.

    Password required

    Select Yes or No.

    Password

    Enter the password to connect to the file location server. Applicable if you selected Yes for Password required.

    FTP

    Property

    Description

    Directory

    Path of the directory that contains the CSV file.

    File list pattern

    A regular expression that defines which data files the ETL must read. The default value is (?<!done)$, which indicates that the ETL reads all files except those whose names end with the string "done" (for example, my_file_source.done).

    Recurse into subdirs?

    Select Yes to inspect the subdirectories of the target directories.

    After parse operation

    Depending on what to do after the CSV file is imported, select one of the following options:

    • Do nothing: Do nothing after import.
    • Append suffix to parsed file: Append a specified suffix to the imported CSV file. For example, _done or _imported.
    • Archive parsed file in directory: Archive the parsed file in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived parsed files.
    • Archive bad files in directory: Archive erroneous files in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived bad files.

    Note: You can configure automatic cleaning of parsed files using the Filesystem cleaner task. For more information, see Configuring the filesystem cleaner task.

    Parsed files suffix

    The suffix that is appended to parsed files. The default is .done.

    Remote host

    Enter the host name or IP address of the remote host to connect to.

    Username

    Enter the user name to connect to the file location server.

    Password required

    Select Yes or No.

    Password

    Enter the password to connect to the file location server. Applicable if you selected Yes for Password required.

    SCP

    Property

    Description

    Directory

    Path of the directory that contains the CSV file.

    Files to copy (with wildcards)

    Specify the files that you want to copy to the database.

    File list pattern

    A regular expression that defines which data files the ETL must read. The default value is (?<!done)$, which indicates that the ETL reads all files except those whose names end with the string "done" (for example, my_file_source.done).

    Recurse into subdirs?

    Select Yes to inspect the subdirectories of the target directories.

    After parse operation

    Depending on what to do after the CSV file is imported, select one of the following options:

    • Do nothing: Do nothing after import.
    • Append suffix to parsed file: Append a specified suffix to the imported CSV file. For example, _done or _imported.
    • Archive parsed file in directory: Archive the parsed file in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived parsed files.
    • Archive bad files in directory: Archive erroneous files in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived bad files.

    Note: You can configure automatic cleaning of parsed files using the Filesystem cleaner task. For more information, see Configuring the filesystem cleaner task.

    Parsed files suffix

    The suffix that is appended to parsed files. The default is .done.

    Remote host

    Enter the name or IP address of the remote host to connect to.

    Username

    Enter the user name to connect to the file location server.

    Password required

    Select Yes or No.

    Password

    Enter the password to connect to the file location server. Applicable if you selected Yes for Password required.

    SFTP 

    Property

    Description

    Directory

    Path of the directory that contains the CSV file.

    File list pattern

    A regular expression that defines which data files the ETL must read. The default value is (?<!done)$, which indicates that the ETL reads all files except those whose names end with the string "done" (for example, my_file_source.done).

    Recurse into subdirs?

    Select Yes to inspect the subdirectories of the target directories.

    After parse operation

    Depending on what to do after the CSV file is imported, select one of the following options:

    • Do nothing: Do nothing after import.
    • Append suffix to parsed file: Append a specified suffix to the imported CSV file. For example, _done or _imported.
    • Archive parsed file in directory: Archive the parsed file in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived parsed files.
    • Archive bad files in directory: Archive erroneous files in the specified directory. The default directory is %BASE/../repository/imprepository. Also, specify whether you want to compress archived bad files.

    Note: You can configure automatic cleaning of parsed files using the Filesystem cleaner task. For more information, see Configuring the filesystem cleaner task.

    Parsed files suffix

    The suffix that is appended to parsed files. The default is .done.

    Remote host

    Enter the host name or IP address of the remote host to connect to.

    Username

    Enter the user name to connect to the file location server.

    Password required

    Select Yes or No.

    Password

    Enter the password to connect to the file location server. Applicable if you selected Yes for Password required.

  6. (Optional) Override the default values of properties in the following tabs:

    Run configuration
    Object relationships
    ETL task properties
  7. Click Save.
    The ETL tasks page shows the details of the newly configured Generic - Events CSV parser ETL.

To configure the advanced properties of the ETL (optional)

You can configure the advanced properties to change the way the ETL works.

To configure the advanced properties:

  1. On the Add ETL page, click Advanced.
  2. Configure the following properties:

    Run configuration

    Property

    Description

    Run configuration name

    Specify the name that you want to assign to this ETL task configuration. The default configuration name is displayed. You can use this name to differentiate between the run configuration settings of ETL tasks.

    Deploy status

    Select the deploy status for the ETL task. For example, you can initially select Test and change it to Production after verifying that the ETL run results are as expected.

    Log level

    Specify the level of details that you want to include in the ETL log file. Select one of the following options:

    • 1 - Light: Select to add the bare minimum activity logs to the log file.
    • 5 - Medium: Select to add the medium-detailed activity logs to the log file.
    • 10 - Verbose: Select to add detailed activity logs to the log file.

    Use log level 5 as a general practice. You can select log level 10 for debugging and troubleshooting purposes.

    Datasets

    Specify the datasets that you want to add to the ETL run configuration. The ETL collects data of metrics that are associated with these datasets.

    1. Click Edit.
    2. Select one (click) or more (Shift+click) datasets from the Available datasets list, and click >> to move them to the Selected datasets list.
    3. Click Apply.

    The ETL collects data of metrics associated with the datasets that are available in the Selected datasets list. The EVDAT - Event data dataset is selected by default. 

    File location

    On the File location tab, depending on the file location method, configure the following advanced properties.

    Property

    Description

    Subdirectories to exclude (separated by ;) (Local directory)

    Names of subdirectories to exclude from parsing.

    Input file external validator (Local directory, Windows share, FTP)

    Select any one of the following options:

    • No external validation: Do not use external validation of the CSV file structure.
    • Use external validation script: Use the following script to validate the CSV file:
      • Script to execute: Specify the validation script to use to validate the input file.
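As an illustration of what an external validation script might do, the following sketch checks that the input file carries the expected EVDAT header before parsing. The required column set and the exit-code convention here are assumptions for illustration, not product requirements:

```python
import csv
import sys

# Hypothetical validator: accept the file only if its header row contains
# the core EVDAT columns (assumed set). Exits 0 on success, 1 on failure.
REQUIRED_COLUMNS = {"EVENTTS", "NAME", "DS_EVENTID", "DS_SYSNM"}

def is_valid(path: str) -> bool:
    """Read the first row of the semicolon-delimited file and check
    that every required column name is present."""
    with open(path, newline="") as f:
        header = next(csv.reader(f, delimiter=";"), [])
    return REQUIRED_COLUMNS.issubset(header)

if __name__ == "__main__":
    sys.exit(0 if is_valid(sys.argv[1]) else 1)
```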
    Additional properties

    Property

    Description

    List of properties

    Specify additional properties for the ETL that act as user inputs during the run. You can specify these values now, or later by accessing the "You can manually edit ETL properties from this page" link that is displayed for the ETL in the view mode.

    1. Click Add.
    2. In the etl.additional.prop.n field, specify an additional property.
    3. Click Apply.
      Repeat this task to add more properties.


  3. Click Save.
    The ETL tasks page shows the details of the newly configured Generic - Events CSV parser ETL.

Step II. Run the ETL

After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:

A. Simulation mode: Validates the connection to the data source but does not collect data. Use this mode when you run the ETL for the first time or after you change the ETL configuration.

B. Production mode: Collects data from the data source.

A. To run the ETL in the simulation mode

To run the ETL in the simulation mode:

  1. Navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click the ETL. The ETL details are displayed.

  3. In the Run configurations table, click Edit to modify the ETL configuration settings.
  4. On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
  5. Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
  6. On the ETL tasks page, check the ETL run status in the Last exit column.
    OK: Indicates that the ETL ran without any error. You are ready to run the ETL in the production mode.
  7.  If the ETL run status is Warning, Error, or Failed:
    1. On the ETL tasks page, click the view details icon in the last column of the ETL name row.
    2. Check the log and reconfigure the ETL if required.
    3. Run the ETL again.
    4. Repeat these steps until the ETL run status changes to OK.

B. To run the ETL in the production mode

You can run the ETL manually when required or schedule it to run at a specified time.

To run the ETL manually

  1. On the ETL tasks page, click the ETL. The ETL details are displayed.
  2. In the Run configurations table, click Edit to modify the ETL configuration settings. The Edit run configuration page is displayed.
  3. On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
  4. To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
    When the ETL runs, it collects data from the source and transfers it to the BMC Helix Continuous Optimization database.

To schedule the ETL run in the production mode

By default, the ETL is scheduled to run daily. You can customize this schedule by changing the frequency and period of running the ETL.

To configure the ETL run schedule:

  1. On the ETL tasks page, click the ETL, and click Edit task. The ETL details are displayed.
  2. On the Edit task page, do the following, and click Save:
    • Specify a unique name and description for the ETL task.
    • In the Maximum execution time before warning field, specify the duration for which the ETL must run before generating warnings or alerts, if any.
    • Select a predefined or custom frequency for starting the ETL run. The default selection is Predefined.
    • Select the task group to which you want to assign the ETL task.
  3. Click Schedule. A message confirming the scheduling job submission is displayed.
    When the ETL runs as scheduled, it collects data from the source and transfers it to the BMC Helix Continuous Optimization database.

Step III. Verify data collection

Verify that the ETL ran successfully and check whether the CSV file data is refreshed in the Workspace.

To verify whether the ETL ran successfully:

  1. Click Administration > ETL & System Tasks > ETL tasks.
  2. In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.

To view the events:

  1. Access the required domain by navigating to Workspace > Domains, Services & Applications.
  2. In the details page of the domain that appears on the right, click Navigate to > Events.

CSV file format

The following examples specify the format to be used for the CSV file. 

Note

Columns in the CSV file have the same names as the EVDAT dataset columns. All columns defined in the EVDAT dataset are supported. For information about the EVDAT dataset, see Dataset-reference-for-ETL-tasks.

EVENT_CSV_EXAMPLE_1
EVENTTS;NAME;DS_EVENTID;DS_SYSNM;DESCRIPTION;NOTE;ENTNM;ENTTYPENM
2022-04-12 00:00:00;"Network outage";server.yournet.com;"c3eac109-29e3-45b6-a063-0bdcac28ea53";"Network NIC failure";;server.yournet.com;gm:vmw
2022-05-12 00:00:00;"Disk outage";server2.yournet.com;"17385cae-c65f-4d0e-bf7e-d7d7c19813ef";"Disk controller failure";"Any note";server2.yournet.com;gm:vmw
EVENT_CSV_EXAMPLE_ML
EVENTTS;NAME;DS_EVENTID;DS_SYSNM;DESCRIPTION;NOTE;ENTNM;ENTTYPENM;STRONGLOOKUPFIELDS
2022-04-12 00:00:00 +0000;"CPU Upgrade";HOSTNAME#server.yournet.com;"c3eac109-xyuh-45b6-a063-0bdcac28ea53";"CPU Upgrade";;server.yournet.com;gm:vmw;HOSTNAME
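A file in this format can be produced with any CSV library, as long as the delimiter is a semicolon and the column names match the EVDAT columns. A minimal Python sketch that writes a row like the one in EVENT_CSV_EXAMPLE_1:

```python
import csv
import io

# Column names mirror the EVDAT dataset; the delimiter is a semicolon.
FIELDS = ["EVENTTS", "NAME", "DS_EVENTID", "DS_SYSNM",
          "DESCRIPTION", "NOTE", "ENTNM", "ENTTYPENM"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS, delimiter=";")
writer.writeheader()
writer.writerow({
    "EVENTTS": "2022-04-12 00:00:00",
    "NAME": "Network outage",
    "DS_EVENTID": "server.yournet.com",
    "DS_SYSNM": "c3eac109-29e3-45b6-a063-0bdcac28ea53",
    "DESCRIPTION": "Network NIC failure",
    "NOTE": "",
    "ENTNM": "server.yournet.com",
    "ENTTYPENM": "gm:vmw",
})
print(buf.getvalue())
```

The product examples quote some field values; per standard CSV rules, quoting is optional unless a value contains the delimiter itself.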

 
