Getting started with an example pipeline
After you create a sandbox integration environment by using the Custom Integration template, you can run the example pipeline that is available out of the box to get started quickly. The sandbox integration environment contains the example_pipeline directory.
The example_pipeline directory contains the nagios_events_pipeline.conf event pipeline configuration file. When you run this pipeline configuration file, event data is read from the nagios_input.json file, formatted as per the mapping details defined in the nagios_event_mapping.json file, and the transformed event data is sent to BMC Helix Platform.
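For illustration, based on the default mapping file shown later in this topic, a single input record is transformed roughly as follows. This is a sketch; the actual event that is created can contain additional fields.
#Input record (abridged from nagios_input.json)
{"HOSTNAME" : "www.bmc.com", "HOSTOUTPUT" : "PING OK - Packet loss = 0%, RTA = 222.15 ms", "SERVICESTATE" : "WARNING"}
#Transformed event sent to BMC Helix Platform
{"source_identifier" : "vl-aus-domdv019.bmc.com", "status" : "OPEN", "msg" : "PING OK - Packet loss = 0%, RTA = 222.15 ms", "severity" : "WARNING"}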
The following section helps you to run the nagios_events_pipeline.conf file by using the Custom Integration template.
Before you begin
Ensure the following:
To run the example pipeline file
- Log on to the host computer on which you have installed the BMC Helix Developer Tools Connector. For example, Test_host.
- Go to the /opt/bmc/connectors/<connector-name>/data/<integration ID>/pipeline/example_pipeline/config directory. An example connector name is Custom_integration_demo. To learn how to identify the integration ID, host name, and connector name of the sandbox integration, see Oct_2021.
- Run the following command to verify the current status of the Connector:
tail -f /opt/bmc/connectors/<connector-name>/logs/fluent.log
The output confirms that the Connector is up and running and is logging the heartbeat periodically.
- Select the Configured Integrations tab.
- Click the action menu of the sandbox integration that you have created, and select Edit.
- Select Events from the Entity Name drop-down list, and enter the nagios_events_pipeline.conf file name in the Entity Configuration File text box.
- Click Update.
- If the pipeline configuration is applied successfully, you start seeing the corresponding output in the fluent.log file.
- Do the following steps to view the count of ingested events:
- Go to the Configured Integrations tab on the BMC Helix Developer Tools console.
- Locate the configured integration tile that you created by using the Custom Integration template. For example, Custom_Integration_Demo. The count of ingested events is displayed on the tile.
- Log on to the BMC Helix Operations Management console, and go to Monitoring > Events to view the ingested events.
Pipeline customization examples
To better understand the pipeline configuration files and their functionality, explore these files in detail, modify them, and run the pipeline again.
Example customization for the nagios_input.json file
The nagios_input.json file contains sample input records with parameters such as the host name from which the input data is fetched. To experiment, you can modify any of these input parameters and save the file, as explained in the following steps:
- Go to the /opt/bmc/connectors/<connector name>/data/<integration ID>/pipeline/example_pipeline/config/ directory.
- Using a text editor, open the nagios_input.json file.
By default, the nagios_input.json file has the following content.
{"HOSTNAME" : "www.bmc.com","Custom_key_PP" : "PING_status=1","HOSTOUTPUT":"PING OK - Packet loss = 0%, RTA = 222.15 ms","SERVICEDESC" : "PING","SERVICESTATE" : "WARNING","SERVICEPERFDATA" : "rta=221.057999ms;100.000000;500.000000;0.000000 pl=0%;20;60;0","SERVICESTATETYPE" : "","SERVICEEXECUTIONTIME" : "","SERVICELATENCY" : "","SERVICEOUTPUT" : "","SERVICEPERFDATA":"www.bmc.com0"}
{"HOSTNAME" : "www.timesofindia.indiatimes.com","Custom_key_PP" : "PING_status=1","HOSTOUTPUT":"PING OK - Packet loss = 0%, RTA = 230.57 ms","SERVICEDESC" : "PING","SERVICESTATE" : "WARNING","SERVICEPERFDATA" : "rta=230.264008ms;100.000000;500.000000;0.000000 pl=0%;20;60;0","SERVICESTATETYPE" : "","SERVICEEXECUTIONTIME" : "","SERVICELATENCY" : "","SERVICEOUTPUT" : "","SERVICEPERFDATA":"www.timesofindia.indiatimes.com0"}
{"HOSTNAME" : "www.timesofindia.indiatimes.com","Custom_key_PP" : "PING_status=1","HOSTOUTPUT":"PING OK - Packet loss = 0%, RTA = 230.57 ms","SERVICEDESC" : "PING","SERVICESTATE" : "WARNING","SERVICEPERFDATA" : "rta=230.264008ms;100.000000;500.000000;0.000000 pl=0%;20;60;0","SERVICESTATETYPE" : "","SERVICEEXECUTIONTIME" : "","SERVICELATENCY" : "","SERVICEOUTPUT" : "","SERVICEPERFDATA":"www.timesofindia.indiatimes.com0"}Modify the content. A sample modification is shown in the following example. Here the PING OK label is changed to PING OK Demo:
- Save the file.
Example customization for the nagios_event_mapping.json file
The nagios_event_mapping.json file defines the mapping details between the input and output fields. To experiment, you can modify any of these mapping details or add new ones and save the file, as explained in the following steps:
- Go to the /opt/bmc/connectors/<connector name>/data/<integration ID>/pipeline/example_pipeline/config/ directory.
- Using a text editor, open the nagios_event_mapping.json file.
By default, the nagios_event_mapping.json file has the following content:
{
"NagiosMappingDetails": [
{
"inputkey": "vl-aus-domdv019.bmc.com",
"outputkey": "source_identifier",
"type": "constant"
},
{
"inputkey": "OPEN",
"outputkey": "status",
"type": "constant"
},
{
"inputkey": "HOSTOUTPUT",
"outputkey": "msg",
"type": "assignment"
},
{
"inputkey": "SERVICESTATE",
"outputkey": "severity",
"type": "assignment"
}
]
}
- Modify or add the mapping details. A sample modification is shown in the following example. Here, an additional mapping for the SERVICEDESC input key is added:
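For illustration, the added entry might look as follows. The outputkey value (details) is only a placeholder; map SERVICEDESC to whichever output field is valid for your event schema. Append the entry to the NagiosMappingDetails array in the file:
{
"inputkey": "SERVICEDESC",
"outputkey": "details",
"type": "assignment"
}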
- Save the file.
To reload the configurations to reflect the changes
Do the following steps to reload the configurations so that your configuration file customizations take effect.
Go to the /opt/bmc/connectors/<connector name>/data/<integration ID>/pipeline/example_pipeline/ directory and run the following command:
#Syntax
docker exec -it <Container ID> bash reload.sh
#Example
docker exec -it deed817ccb72 bash reload.sh
#In the preceding example, deed817ccb72 is the container ID.
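If you do not know the container ID, list the running containers and note the CONTAINER ID of the Connector container; for example:
#List running containers to find the Connector container ID
docker ps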
Understand the nagios_events_pipeline.conf file structure
The nagios_events_pipeline.conf file has the configurations to run the events pipeline, which gets the data from the specified input resource, modifies the data as per the mapping configurations, and sends the data to BMC Helix Platform.
The following code block displays the nagios_events_pipeline.conf file content. There are three important directives in this file: the <source> input directive, the <filter> directive, and the <match> output directive.
<source>
@type tail
tag Generic_demo_test
path /fluentd/etc/data/<integration ID>/pipeline/example_pipeline/config/nagios_input.json
pos_file /fluentd/etc/data/3da37401-bbb8-4ff8-aeee-ebbd2623b12a/pipeline/example_pipeline/config/nagios_input.json.pos
read_from_head true
refresh_interval 0
read_lines_limit 1
enable_stat_watcher false
format json
</source>
<filter Generic_demo_test>
@type bmc_ade_transformer
result_in_array true
mapping_file /fluentd/etc/data/<integration ID>/pipeline/example_pipeline/config/nagios_event_mapping.json
mapping_json_key NagiosMappingDetails
ignore_blank_values true
</filter>
<filter Generic_demo_test>
@type stdout
</filter>
<match Generic_demo_test>
@type bmc_ade_http
endpoint_url ${{CONNECTOR_ENV.ADE_BASE_URL}}/events-service/api/v1.0/events
ssl_no_verify true # default: false
use_ssl true
http_method post # default: post
serializer json # default: form
rate_limit_msec 0 # default: 0 = no rate limiting
raise_on_error true # default: true
recoverable_status_codes 503 # default: 503
config_json_file_path /fluentd/etc/data/config/ade_config.json
format json
audit_payload {"labels":{"metricName":"Events","hostname":"${{CONNECTOR_ENV.HOST_NAME}}","entityId":"${{CONNECTOR_ENV.AGENT_ID}}","entityType":"Connector","entityName":"${{CONNECTOR_ENV.AGENT_NAME}}","IntegrationID":"3da37401-bbb8-4ff8-aeee-ebbd2623b12a","IntegrationName":"Generic_demo_test","unit":"Nos.","source":"NAGIOS"},"samples":[{"value":0,"timestamp":0}]}
</match>
The following sections explain the different directives and plug-ins used in this configuration file and their functionality:
Input directive — <source> </source>
The source directive specifies the source details, such as the type of plug-in used, the path, and user credentials. In this example, the paths of the nagios_input.json and nagios_input.json.pos files are specified.
<source>
@type tail
tag Generic_demo_test
path /fluentd/etc/data/<integration ID>/pipeline/example_pipeline/config/nagios_input.json
pos_file /fluentd/etc/data/3da37401-bbb8-4ff8-aeee-ebbd2623b12a/pipeline/example_pipeline/config/nagios_input.json.pos
read_from_head true
refresh_interval 0
read_lines_limit 1
enable_stat_watcher false
format json
</source>
Filter directive — <filter> </filter>
The filter directive specifies the data transformation plug-in that is used to convert data from a third-party product to the BMC Helix Platform-compatible format. In this example, the nagios_event_mapping.json file is specified as the mapping file.
<filter Generic_demo_test>
@type bmc_ade_transformer
result_in_array true
mapping_file /fluentd/etc/data/<integration ID>/pipeline/example_pipeline/config/nagios_event_mapping.json
mapping_json_key NagiosMappingDetails
ignore_blank_values true
</filter>
Output directive — <match> </match>
The match directive is used to push the transformed data into BMC Helix Platform.
<match Generic_demo_test>
@type bmc_ade_http
endpoint_url ${{CONNECTOR_ENV.ADE_BASE_URL}}/events-service/api/v1.0/events
ssl_no_verify true # default: false
use_ssl true
http_method post # default: post
serializer json # default: form
rate_limit_msec 0 # default: 0 = no rate limiting
raise_on_error true # default: true
recoverable_status_codes 503 # default: 503
config_json_file_path /fluentd/etc/data/config/ade_config.json
format json
audit_payload {"labels":{"metricName":"Events","hostname":"${{CONNECTOR_ENV.HOST_NAME}}","entityId":"${{CONNECTOR_ENV.AGENT_ID}}","entityType":"Connector","entityName":"${{CONNECTOR_ENV.AGENT_NAME}}","IntegrationID":"3da37401-bbb8-4ff8-aeee-ebbd2623b12a","IntegrationName":"Generic_demo_test","unit":"Nos.","source":"NAGIOS"},"samples":[{"value":0,"timestamp":0}]}
</match>
For more information, see bmc-ade-http plug-in.
Where to go from here
After you have explored the example pipeline configuration files, create a new custom pipeline configuration file to ingest data from a product that you plan to integrate.
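As a starting point, a minimal pipeline skeleton might look as follows. This is a sketch derived from the example pipeline above; the tag (My_custom_integration), the pipeline directory, file names, and mapping key are placeholders that you must replace with values for your own integration:
<source>
@type tail
tag My_custom_integration
path /fluentd/etc/data/<integration ID>/pipeline/<pipeline directory>/config/<input file>
pos_file /fluentd/etc/data/<integration ID>/pipeline/<pipeline directory>/config/<input file>.pos
read_from_head true
format json
</source>
<filter My_custom_integration>
@type bmc_ade_transformer
result_in_array true
mapping_file /fluentd/etc/data/<integration ID>/pipeline/<pipeline directory>/config/<mapping file>
mapping_json_key <mapping key>
ignore_blank_values true
</filter>
<match My_custom_integration>
@type bmc_ade_http
endpoint_url ${{CONNECTOR_ENV.ADE_BASE_URL}}/events-service/api/v1.0/events
use_ssl true
http_method post
serializer json
config_json_file_path /fluentd/etc/data/config/ade_config.json
format json
</match>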