Collecting events from network devices and interfaces

Collect events from network devices or interfaces by using Logstash and send those events to BMC Helix Operations Management. 

Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash." For more information, see the Logstash documentation.

With BMC Helix Operations Management, you can achieve the following major goals:

  • Collect data to monitor your infrastructure environment
  • Monitor events and reduce event noise
  • Detect anomalies in the system
  • Manage maintenance windows
  • Gain insight into the system with logs
  • Monitor and investigate situations

For more information, see the BMC Helix Operations Management documentation.

Data collection mechanism

Events are collected from the source and sent to BMC Helix Operations Management by using various Logstash plugins, which run the event-processing pipelines.

Supported operating system

CentOS 7.x

Supported source types

The following source types are supported:

  • AMQP
  • JDBC
  • JMS
  • SNMP
  • CORBA

Collecting events and sending them to BMC Helix Operations Management

Various Logstash input, filter, and output plugins help you build event processing pipelines, which ingest data from a multitude of sources, transform it, and then send it to BMC Helix Operations Management. 

See the following sections for instructions on building and running pipelines:

Before you begin

Before you start with event collection, perform the following tasks:

  • Copy the API key of your BMC Helix Operations Management tenant and store it in a text file. 
  • Download and install Logstash. For instructions, see the Logstash documentation.

    Logstash is installed in the /usr/share/logstash directory. The configuration files are stored in the /etc/logstash directory.

  • Install the plugin required for each source type on the host where Logstash is installed (a verification sketch follows the table):

    Source type    Plugin type        Command
    AMQP           RabbitMQ input     <Logstash_INSTALL_DIR>/bin/logstash-plugin install logstash-input-rabbitmq
    JDBC           JDBC input         <Logstash_INSTALL_DIR>/bin/logstash-plugin install logstash-input-jdbc
    JMS            JMS input          <Logstash_INSTALL_DIR>/bin/logstash-plugin install logstash-input-jms
    SNMP           SNMP Trap input    <Logstash_INSTALL_DIR>/bin/logstash-plugin install logstash-input-snmptrap
    CORBA          RabbitMQ input     <Logstash_INSTALL_DIR>/bin/logstash-plugin install logstash-input-rabbitmq
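
    To confirm that a plugin is installed, you can list the installed plugins. This is only a quick check; the grep pattern below is an example:

    cd /usr/share/logstash
    bin/logstash-plugin list | grep -E 'rabbitmq|jdbc|jms|snmptrap'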

Building and running the AMQP pipeline

Perform the following tasks to build and run the AMQP pipeline:

  1. Download the files.
  2. Understand the logstash-amqp-pipeline.conf file structure.
  3. Run the AMQP pipeline.

Task 1: Download the files

File type                      File name                                    Location
Pipeline configuration file    logstash-amqp-pipeline.conf                  Contact BMC Customer Support.
Ruby scripts                   send-1-message.rb, send-1000-messages.rb     Contact BMC Customer Support.

Task 2: Understand the logstash-amqp-pipeline.conf file structure

The logstash-amqp-pipeline.conf file has the configurations to run the events pipeline, which gets data from the specified input resource, modifies the data according to the mapping configurations, and sends the data to BMC Helix Operations Management.

  • input section 
  • filter section
  • output section

input section

This section contains the source details such as the plugin used (rabbitmq), host, and user credentials required to connect to a RabbitMQ source. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-inputs-rabbitmq.html.

input section
input {
  rabbitmq {
    id => "rabbitmq_logs"
    host => "<hostname or IP Address of the rabbitmq host>"
    port => <rabbitmq port>
    user => "guest"
    password => "guest"
    vhost => "/"
    queue => "<queue name>"
    ack => false
  }
}

The plugin connects to the RabbitMQ source using the host name or IP address and the port, and listens to the incoming messages on the queue specified by the queue property. The following sample JSON is used as an input message:

Input JSON
{
    "alertId": "5f0nbmt26580-e678",
    "resourceId": "e478hya-ftyui4",
    "alertLevel": "WARNING",
    "type": "16",
    "subType": "20",
    "status": "ACTIVE",
    "startTimeUTC": 1663140113064,
    "cancelTimeUTC": 0,
    "updateTimeUTC": 1663140113064,
    "suspendUntilTimeUTC": 0,
    "controlState": "OPEN",
    "alertDefinitionId": "AlertDefinition-VMWARE-GuestOutOfDiskSpace",
    "alertDefinitionName": "One or more virtual machine guest file systems are running out of disk space",
    "alertImpact": "HEALTH",
    "links": [
        {
            "href": "/suite-api/api/alerts/e3e3f05f-e8b9-4cfb-8c1e-5f0e05826580",
            "rel": "SELF",
            "name": "linkToSelf"
        },
        {
            "href": "/suite-api/api/resources/ae061e70-f581-455b-8565-e4e8248a843a",
            "rel": "RELATED",
            "name": "alertOnResource"
        },
        {
            "href": "/suite-api/api/auth/users/",
            "rel": "RELATED",
            "name": "ownerOfAlert"
        },
        {
            "href": "/suite-api/api/alertdefinitions/AlertDefinition-VMWARE-GuestOutOfDiskSpace",
            "rel": "RELATED",
            "name": "problemDefinitionForAlert"
        }
    ]
}

filter section

This section receives inputs from the input section. It contains details to transform the data and map the data to the BMC Helix Operations Management slots. 

The section starts with the JSON parsing filter plugin. It takes an existing field which contains JSON and expands it into an actual data structure within the Logstash event. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html.

filter section - json plugin
  json {
    source => "message"
  }

Next, the mutate plugin adds the mandatory BMC Helix Operations Management slots, and renames some of the JSON fields to the BMC Helix Operations Management slots. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html.

filter section - mutate plugin
  mutate {
    # Add required BHOM fields
    add_field => {
      "class"             => "MonitorEvent"
      "source_identifier" => "logstash_source"
    }

    # Rename input fields to the corresponding BHOM slot names (the values are unchanged)
    rename => {
      "alertId"             => "source_eventid"
      "updateTimeUTC"       => "source_originTimestamp"
      "alertDefinitionId"   => "details"
      "alertDefinitionName" => "msg"
      "resourceId"          => "source_hostname"
    }
  }

The mutate plugin adds the severity slot to the output JSON based on the value of the alertLevel property of the input JSON.

filter section - mutate plugin
  # MAP alertLevel to severity
  if [alertLevel] == "CRITICAL" {
    mutate {
      add_field => {"severity" => "CRITICAL"}
    }
  } else if [alertLevel] == "IMMEDIATE" {
    mutate {
      add_field => {"severity" => "MAJOR"}
    }
  } else if [alertLevel] == "WARNING" {
    mutate {
      add_field => {"severity" => "WARNING"}
    }
  } else if [alertLevel] == "INFO" {
    mutate {
      add_field => {"severity" => "INFO"}
    }
  } else {
    mutate {
      add_field => {"severity" => "CRITICAL"}
    }
  }

The mutate plugin updates the value of the status field in the output JSON based on the values of the status and controlState properties in the input JSON.

filter section - mutate plugin
  # Map VROPS status and controlState to BHOM status
  if [status] == "ACTIVE" {
    if [controlState] == "OPEN" {
      mutate {
        update => {"status" => "OPEN"}
      }
    } else if [controlState] == "ASSIGNED" {
      mutate {
        update => {"status" => "ASSIGNED"}
      }
    } else if [controlState] == "SUSPENDED" {
      mutate {
        update => {"status" => "ACK"}
      }
    } else if [controlState] == "SUPPRESSED" {
      mutate {
        update => {"status" => "CLOSED"}
      }
    } else {
      mutate {
        update => {"status" => "OPEN"}
      }
    }
  } else if [status] == "CANCELED" {
    mutate {
      update => {"status" => "CLOSED"}
    }
  } else {
    mutate {
      update => {"status" => "OPEN"}
    }
  }

Finally, the mutate plugin removes the unwanted properties from the output JSON.

filter section - mutate plugin
mutate {
    remove_field => ["links", "original", "@version", "@timestamp", "event", "resourceId", "alertLevel", "controlState", "type", "subType", "startTimeUTC", "cancelTimeUTC", "suspendUntilTimeUTC", "alertImpact", "links"]
  }
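
As an illustration of the combined effect of the filter section, the sample input message shown earlier would leave the filters looking roughly like the following. This is derived from the mappings above, not captured from the tool; depending on your Logstash version, additional bookkeeping fields such as the raw message may also be present.

Transformed JSON (illustrative)
{
    "class"                  : "MonitorEvent",
    "source_identifier"      : "logstash_source",
    "source_eventid"         : "5f0nbmt26580-e678",
    "source_hostname"        : "e478hya-ftyui4",
    "source_originTimestamp" : 1663140113064,
    "details"                : "AlertDefinition-VMWARE-GuestOutOfDiskSpace",
    "msg"                    : "One or more virtual machine guest file systems are running out of disk space",
    "severity"               : "WARNING",
    "status"                 : "OPEN"
}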

output section

The output section contains details of the http plugin, which sends the transformed JSON to BMC Helix Operations Management. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-outputs-http.html.

output section
  http {
    url              => "https://hostA.abc.com/events-service/api/v1.0/events"
    http_method      => "post"
    content_type     => "application/json"
    format           => "json_batch"
    retry_failed     => false
    http_compression => true
    headers => {
      "Content-Type"  => "application/json"
      "Authorization" => "apiKey 3xct9-8p60"
      "Accept"        => "application/json"
    }
  }
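
To verify connectivity to the events API before running the pipeline, you can post a hand-built event with curl. This is only a sketch: the host name and API key are placeholders for your own values, and the payload is a minimal single-event array matching the json_batch format used above.

    curl -X POST "https://<BMC Helix Operations Management host>/events-service/api/v1.0/events" \
         -H "Content-Type: application/json" \
         -H "Authorization: apiKey <your API key>" \
         -d '[{"class":"MonitorEvent","severity":"INFO","msg":"Logstash connectivity test","source_identifier":"logstash_source","status":"OPEN"}]'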

Task 3: Run the AMQP pipeline

  1. Copy the pipeline configuration file (logstash-amqp-pipeline.conf) to the /etc/logstash/conf.d directory on the Logstash host.
  2. Open the file with a text editor.
  3. In the input section, update the values of the RabbitMQ hostname, port and queue properties.
  4. In the output section, update the connection parameters for the http plugin.
  5. Navigate to the /etc/logstash directory and add the following lines to the pipelines.yml file:

    - pipeline.id: amqp_pipeline
      path.config: "/etc/logstash/conf.d/logstash-amqp-pipeline.conf"
  6. Navigate to the /usr/share/logstash/bin directory and run Logstash using the following command: 
    ./logstash --path.settings /etc/logstash/
  7. Ingest a message into RabbitMQ by using the Ruby script send-1-message.rb:
    /usr/share/logstash/bin/ruby send-1-message.rb
  8. (Optional) To perform load testing, use the send-1000-messages.rb script.
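
To confirm that the pipeline started and is posting events, you can watch the Logstash log. The path below is the default for package installations; your location may differ:

    tail -f /var/log/logstash/logstash-plain.log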

Building and running the JDBC pipeline

Perform the following tasks to build and run the JDBC pipeline:

  1. Download the files.
  2. Understand the logstash-jdbc-pipeline.conf file structure.
  3. Run the JDBC pipeline.

Task 1: Download the files 

File type                      File name                      Location
Pipeline configuration file    logstash-jdbc-pipeline.conf    Contact BMC Customer Support.
jar file                       postgresql-9.3-1101.jdbc4      Contact your database administrator.

Task 2: Understand the logstash-jdbc-pipeline.conf file structure

The logstash-jdbc-pipeline.conf file has the configurations to run the events pipeline, which gets the data from the specified input resource, modifies the data according to the mapping configurations, and sends the data to BMC Helix Operations Management.

  • input section
  • filter section
  • output section

input section

This section contains the source details such as the JDBC plugin details and user credentials, which are required to connect to a JDBC source. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html.

input section
input {
  jdbc {
    jdbc_driver_library => "${jdbc_library:/tmp/postgresql-9.3-1101.jdbc4.jar}"
    jdbc_driver_class => "${jdbc_class:org.postgresql.Driver}"
    jdbc_connection_string => "${jdbc_url:jdbc:postgresql://<hostName>:5432/<databaseName>}"
    jdbc_user => "${jdbc_user:<userName>}"
    jdbc_password => "${jdbc_pass:<password>}"

    schedule => "* * * * *"
    statement => "SELECT schemaid as object, name as msg, '<hostName>' as source_identifier, viewname as details from <tableName> order by schemaid"
  }
}

The schedule uses a cron-like format with five fields: minute, hour, day of month, month, and day of week. An asterisk (*) means 'any', so a * in each position means the query runs every minute of every hour of every day. You can specify the value as required.
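
For example, to run the query every five minutes instead of every minute, you could use a value like the following (an illustrative setting):

    schedule => "*/5 * * * *"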

The SQL statement is executed on the defined schedule. Write it so that it pulls back only the information that is relevant for the event, with appropriate qualification. The selected values can be mapped directly in the query (for example, schemaid as object), or renamed and further transformed in the filter section to match the element names that BMC Helix Operations Management requires.
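
If the source table has a monotonically increasing column, the jdbc input plugin's tracking options can limit each run to rows that are new since the previous run. The following is a sketch only; the eventid column is hypothetical, and the connection settings are the ones shown in the input section above:

    jdbc {
      # connection settings as shown in the input section above
      use_column_value     => true
      tracking_column      => "eventid"      # hypothetical increasing column in <tableName>
      tracking_column_type => "numeric"
      schedule             => "* * * * *"
      statement            => "SELECT schemaid as object, name as msg, viewname as details FROM <tableName> WHERE eventid > :sql_last_value ORDER BY eventid"
    }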

filter section

This section receives inputs from the input section. It contains details to transform the data and map the data to the BMC Helix Operations Management slots. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html.

filter section
#this section sets default values for the event fields that are missing or empty
  if ![class] or [class] == 0 {
    mutate {add_field => {"class" => "EVENT"}}
  }
  if ![severity] or [severity] == 0 {
    mutate {add_field => {"severity" => "MINOR"}}
  }
  if ![object] or [object] == 0 {
    mutate {add_field => {"object" => "Unknown"}}
  }
  if ![msg] or [msg] == 0 {
    mutate {add_field => {"msg" => ""}}
  }
  if ![source_identifier] or [source_identifier] == 0 {
    mutate {add_field => {"source_identifier" => "Unknown Source"}}
  }
  if ![status] or [status] == 0 {
    mutate {add_field => {"status" => "OPEN"}}
  }
  if ![category] or [category] == 0 {
    mutate {add_field => {"category" => "OPERATIONS_MANAGEMENT"}}
  }
  if ![sub_category] or [sub_category] == 0 {
    mutate {add_field => {"sub_category" => ""}}
  }
  if ![priority] or [priority] == 0 {
    mutate {add_field => {"priority" => "PRIORITY_5"}}
  }
  if ![details] or [details] == 0 {
    mutate {add_field => {"details" => ""}}
  }
  if ![source_hostname] or [source_hostname] == 0 {
    mutate {add_field => {"source_hostname" => ""}}
  }
  if ![source_port] or [source_port] == 0 {
    mutate {add_field => {"source_port" => ""}}
  }
  if ![source_address] or [source_address] == 0 {
    mutate {add_field => {"source_address" => ""}}
  }
  if ![alias] or [alias] == 0 {
    mutate {add_field => {"alias" => ""}}
  }

#this section provides batch processing: instead of sending each event to the output one at a time, it builds batches and sends them, reducing the number of connections to the output and improving throughput.
  
  #the aggregate plugin requires a 'task_id'; because these events are not otherwise related, we create a constant task_id of 1 to aggregate on
  mutate { 
    add_field => { "[@metadata][task_id]" => 1 }
  }
  
#this aggregate section appends the current event to a 'data' array in the map, with a timeout of 5 seconds;
#if this task_id receives no new event within 5 seconds, the accumulated map is pushed as a new event so its contents can be processed
  aggregate {
    task_id => "%{[@metadata][task_id]}"
    code => '
      map["data"] ||= []
      map["data"] << event.to_hash
      event.set("data", map["data"])
    '
    push_map_as_event_on_timeout => true
    timeout => 5
  }
  
 #if the data hash exists, we run some ruby code to determine how many elements are in it and store that value in a field named 'event_length', if data doesn't exist, we create it and set the value to 0

  if [data] {
    ruby {
      code => 'event.set("event_length", event.get("data").length)'
    }
  } else {
    mutate {add_field => {"event_length" => 0}}
  }
  
  #if the batch contains 100 or fewer events, we 'cancel' this event, which means it never reaches the output stage, and we move on to the next event
  if [event_length] <= 100 {
    aggregate {
      task_id => "%{[@metadata][task_id]}"
      code => 'event.cancel'
    }
  } else { # if however we are over 100 events, we send this batch over to the output, marking the aggregation 'done', starting a new one
    aggregate {
      task_id => "%{[@metadata][task_id]}"
      code => ''
      map_action => "update"
      end_of_task => true
    }
  }

  # Here we clear the current event of everything other than the data array; technically unnecessary, but it keeps the event structure clean
  prune {
    whitelist_names => ["data"]
  }

  # this plugin converts the logstash formatted data hash into a json formatted string to be used when sending everything over to BHOM
  # if you don't have this plugin, you can run this command bin/logstash-plugin install logstash-filter-json_encode
  json_encode {
    source => "data"
  }

output section

This section contains details of the http plugin required to send the transformed data to BMC Helix Operations Management. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-outputs-http.html.

output section
 #the http plugin is used to send the data over to BMC Helix Operations Management. The host and api key are variables that can be set in the environment if required
 #the format is 'message' because we don't want Logstash to change what has already been set up; the message is the 'data' array that was converted to encoded JSON above.

  http {
    url              => "https://${bhom_host:helixdev.abc.com}/events-service/api/v1.0/events"
    http_method      => "post"
    content_type     => "application/json"
    format           => "message"
    message          => "%{data}"
    retry_failed     => false
    http_compression => true
    headers          => {
      "Content-Type"  => "application/json"
      "Authorization" => "apiKey ${api_key:456x789-ba8c}"
    }
  }

Task 3: Run the JDBC pipeline

  1. Copy the pipeline configuration file (logstash-jdbc-pipeline.conf) to the /etc/logstash/conf.d directory on the Logstash host.
  2. Copy the jdbc jar file (postgresql-9.3-1101.jdbc4) into a directory on the file system.
  3. In the input section, update jdbc_driver_library with the path to the JDBC jar file, and update the hostName in jdbc_connection_string, the user name, and the password.
    All of the 'variable' values in the configuration file can also be supplied through environment variables. In the configuration file, the format is ${variable:default}; for example, with ${jdbc_user:MyUser}, if the environment variable jdbc_user is set, its value is used, otherwise MyUser is used. You can either modify the configuration file directly or set the environment variables (see the sketch after these steps). The following configuration variables are available:
    • jdbc_library
    • jdbc_class
    • jdbc_url
    • jdbc_user
    • jdbc_pass
    • bhom_host
    • api_key
  4. In the output section, update the connection parameters of the http plugin.
  5. Navigate to the /etc/logstash directory and add the following lines to the pipelines.yml file:

    - pipeline.id: jdbc_pipeline
      path.config: "/etc/logstash/conf.d/logstash-jdbc-pipeline.conf"
  6. Navigate to the /usr/share/logstash/bin directory and run Logstash by using the following command:
    ./logstash --path.settings /etc/logstash/
    The plugin receives the message from JDBC source and sends events to BMC Helix Operations Management.
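
If you prefer environment variables over editing the configuration file, a minimal sketch follows. All values shown are placeholders; substitute your own before starting Logstash:

    export jdbc_library="/tmp/postgresql-9.3-1101.jdbc4.jar"
    export jdbc_class="org.postgresql.Driver"
    export jdbc_url="jdbc:postgresql://<hostName>:5432/<databaseName>"
    export jdbc_user="MyUser"
    export jdbc_pass="MyPassword"
    export bhom_host="<BMC Helix Operations Management host>"
    export api_key="<your API key>"
    cd /usr/share/logstash/bin
    ./logstash --path.settings /etc/logstash/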

Building and running the JMS pipeline

Perform the following tasks to build and run the JMS pipeline:

  1. Download the files.
  2. Configure ActiveMQ.
  3. Understand the logstash-jms-pipeline.conf file structure.
  4. Run the JMS pipeline.

Task 1: Download the files 

File type                      File name                     Location
Pipeline configuration file    logstash-jms-pipeline.conf    Contact BMC Customer Support.

Task 2: Configure ActiveMQ

  1. Open the <ActiveMQ_INSTALL_DIR>/conf/jetty.xml file with a text editor.
  2. To configure the ActiveMQ web console host, change the host value from 127.0.0.1 to 0.0.0.0, as shown below:

    <bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
                 <!-- the default port number for the web console -->
            <property name="host" value="0.0.0.0"/>
            <property name="port" value="8161"/>
    </bean>
  3. Start ActiveMQ:

    cd [activemq_install_dir]/bin
    ./activemq start

      

  4. Copy the ActiveMQ jar file (for example, activemq-all-5.16.4.jar) from the machine where the ActiveMQ binaries are installed to the /usr/share/jms directory on the machine where the JMS pipeline is running.

Task 3: Understand the logstash-jms-pipeline.conf file structure

The logstash-jms-pipeline.conf file has the configurations to run the events pipeline, which gets the data from the specified input resource, modifies the data according to the mapping configurations, and sends the data to BMC Helix Operations Management. The file contains the following sections:

  • input section
  • filter section
  • output section

input section

This section contains the ActiveMQ source details such as the plugin used (jms), broker URL, and user credentials to connect to the ActiveMQ source. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jms.html.

input section
input {
    jms {
        broker_url => 'failover:(tcp://hostA.bmc.com:61616)?initialReconnectDelay=100'
        destination => 'jmslog'
        factory => 'org.apache.activemq.ActiveMQConnectionFactory'
        username => 'admin'
        password => 'admin'
        include_headers => false
        include_properties => false
        include_body => true
        require_jars => ['/usr/share/jms/activemq-all-5.16.4.jar']
    }
}

filter section

This section receives inputs from the input section. It contains details to transform the data and map the data to the BMC Helix Operations Management slots. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html.

filter section
filter {

 mutate {
    # Add required BHOM fields
    add_field => {
          "class"             => "MonitorEvent"
          "source_identifier" => "logstash_jms_source"
          "source_eventid" => "e9620f353-6d95"
          "source_originTimestamp" => "1663486515375"
          "details" => "AlertDefinition-VMWARE-GuestOutOfDiskSpace"
          "msg" => "%{[message]}"
          "source_hostname" => "87865xybcd"
          "severity" => "CRITICAL"
          "status" => "OPEN"
    }
 }
 mutate {
    remove_field => [ "original", "@version", "@timestamp", "event", "tags" ]
 }

}

output section

This section contains details of the http plugin required to send the transformed data to BMC Helix Operations Management. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-outputs-http.html.

output section
output {
  http {
    url              => "https://hostA.bmc.com/events-service/api/v1.0/events"
    http_method      => "post"
    content_type     => "application/json"
    format           => "json_batch"
    retry_failed     => false
    http_compression => true
    headers => {
      "Content-Type"  => "application/json"
      "Authorization" => "apiKey 8a60-672a1"
      "Accept"        => "application/json"
    }
  }
}

Task 4: Run the JMS pipeline

  1. Copy the pipeline configuration file (logstash-jms-pipeline.conf) to the /etc/logstash/conf.d directory on the Logstash host.
  2. Open the logstash-jms-pipeline.conf file with a text editor.
  3. In the input section, update the values of the ActiveMQ hostname (in broker_url), destination (queue name), require_jars, username, and password.
  4. In the output section, update the connection parameters for the http plugin.
  5. Navigate to the /usr/share/logstash/bin directory and start the JMS pipeline using the following command:
    ./logstash -f /etc/logstash/conf.d/logstash-jms-pipeline.conf --config.reload.automatic
  6. Log on to the ActiveMQ web console (http://<hostname>:8161/admin/queues.jsp).
  7. Enter admin as the user name and password.
  8. Send a message to ActiveMQ from the Send tab on the ActiveMQ web console. For example, sending the text test produces a Logstash event like the following:

    {
          "@version" => "1",
        "@timestamp" => 2022-10-14T19:12:53.516327580Z,
           "message" => "test",
             "event" => {
            "original" => "test"
        }
    }

    You can send one or multiple messages from the web console. For the sample procedure, see https://learn-it-with-examples.com/middleware/other/activemq/send-message-activemq-queue.html.

Building and running the SNMP pipeline

Perform the following tasks to build and run the SNMP pipeline:

  1. Download the files.
  2. Understand the logstash-snmp-pipeline.conf file structure.
  3. Run the SNMP pipeline.

Task 1: Download the files 

File type                           File name                           Location
Pipeline configuration pull file    logstash-snmp-pull-pipeline.conf    Contact BMC Customer Support.
Pipeline configuration push file    logstash-snmp-push-pipeline.conf    Contact BMC Customer Support.

Task 2: Understand the logstash-snmp-pipeline.conf file structure

The logstash-snmp-pipeline.conf file has the configurations to run the events pipeline, which gets data from the specified input resource, modifies the data according to the mapping configurations, and sends the data to BMC Helix Operations Management.

  • input section
  • filter section
  • output section

input section

This section contains the SNMP source details such as the plugin used (snmptrap or snmp) and the community string, which are required to connect to the SNMP source. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-inputs-snmptrap.html.

input section - SNMP Trap (pull)
input {
  snmptrap {
        id => "<plugin_id>" #strongly recommended
        host => "<host_for_listening_snmp_trap>"
        port => <port_for_listening_snmp_trap>
        community => [<it_comprises_user_credential>]
        codec => plain
  }
}
input section - SNMP Trap (push)
input {
  snmp {
    get => ["1.3.6.1.2.1.1.3.0"]
    hosts => [{host => "udp:0.0.0.0/161" community => "public"}]
  }
}

filter section

This section receives inputs from the input section, transforms the data, and maps it to the BMC Helix Operations Management slots. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html.

filter section
filter { 
  mutate {
    # Add required BHOM fields
    add_field => {
      "class"             => "MonitorEvent"
      "source_identifier" => "logstash_source"
    }
  }
 
  if ![msg] or [msg] == 0 {
    mutate {add_field => {"msg" => '%{message}'}}
  }
}

output section

This section contains details of the http plugin required to send the transformed data to BMC Helix Operations Management. For more information about the plugin, see https://www.elastic.co/guide/en/logstash/current/plugins-outputs-http.html.

output section
output {
  stdout {}
  http {
    url              => "https://hostA.bmc.com/events-service/api/v1.0/events"
    http_method      => "post"
    content_type     => "application/json"
    format           => "json_batch"
    retry_failed     => false
    http_compression => true
    headers => {
      "Content-Type"  => "application/json"
      "Authorization" => "apiKey a87b6e0xyu881a3-56try9-"
      "Accept"        => "application/json"
    }
  }
}

Task 3: Run the SNMP pipeline

  1. Configure the SNMP agent on the remote server.
  2. Copy the pipeline configuration file (logstash-snmp-pull-pipeline.conf or logstash-snmp-push-pipeline.conf) to the /etc/logstash/conf.d directory on the Logstash host.
  3. Open the file with a text editor.
  4. In the input section, update the values of the plugin ID, hostname, port, and community properties.
  5. In the output section, update the connection parameters for the http plugin.
  6. Navigate to the /usr/share/logstash/bin directory and run Logstash by using the following command:
    ./logstash -f /etc/logstash/conf.d/<pipeline configuration file> --config.reload.automatic
    The plugin receives the message from the SNMP source and sends events to BMC Helix Operations Management.
  7. Send traps by using a MIB browser.
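
    If a MIB browser is not available, you can send a test trap with the net-snmp snmptrap utility instead. This is only a sketch; the listener host and port are placeholders, and the coldStart trap OID is used purely as an example:

    snmptrap -v 2c -c public <host_for_listening_snmp_trap>:<port_for_listening_snmp_trap> '' 1.3.6.1.6.3.1.1.5.1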

Building and running the CORBA pipeline

You can develop this pipeline by using one of the following approaches:

  • CORBA > rabbitmq > Logstash > BMC Helix Operations Management
  • CORBA > Logstash > BMC Helix Operations Management

Perform the following tasks to build and run the CORBA pipeline:

  1. Download the files.
  2. Understand the pipeline.
  3. Run the pipeline.

Approach 1

Task 1: Download the files 

File type                       File name                                     Location
Java CORBA client and server    Helloclient_Rabbitmq.jar, HelloServer.jar     Contact BMC Customer Support.
Pipeline configuration file     logstash-amqp-pipeline.conf                   Contact BMC Customer Support.

Task 2: Understand the pipeline

  1. The Java CORBA server runs and listens on a port.
  2. The Java CORBA client connects to the server on the port and calls a method implemented by the server.
  3. In response, the server sends a JSON string back to the client.
  4. The client then connects to the rabbitmq plugin and sends the JSON string to a queue in rabbitmq.
  5. The AMQP pipeline described earlier, which is subscribed to the same queue, picks up the message, transforms it, and sends it to BMC Helix Operations Management.

Task 3: Run the pipeline

  1. Run the naming service using the following command:

    orbd -ORBInitialPort 1050

     
    The orbd executable is located in the <JDK_INSTALL_DIR>/bin directory.

  2. Run the CORBA server using the following command:

    java -jar HelloServer.jar -ORBInitialPort 1050 -ORBInitialHost localhost
     
    HelloServer ready and waiting ...
  3. Run the AMQP pipeline.
  4. Start the CORBA client using the following command:

    java -jar Helloclient_Rabbitmq.jar -ORBInitialPort 1050 -ORBInitialHost localhost

Approach 2

Task 1: Download the files 

File type                       File name                                    Location
Java CORBA client and server    HelloServer.jar, HelloClient_Logstash.jar    Contact BMC Customer Support.
Pipeline configuration file     logstash-corba-input-http-pipeline.conf      Contact BMC Customer Support.

Task 2: Understand the pipeline

  1. A Java CORBA client calls a method on the CORBA server and gets a JSON string in response.
  2. The client sends this message to a Logstash pipeline that has the http input plugin listening on a port for HTTP requests.
  3. The pipeline receives the message, transforms it, and sends it to BMC Helix Operations Management.
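
The exact contents of logstash-corba-input-http-pipeline.conf come from BMC Customer Support, but its input section is likely to resemble the following sketch of the standard Logstash http input plugin. The host and port values are placeholders:

    input {
      http {
        host => "0.0.0.0"    # listen on all interfaces
        port => 8085         # placeholder port for the HTTP requests sent by the CORBA client
      }
    }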

Task 3: Run the pipeline

  1. Copy the pipeline configuration file (logstash-corba-input-http-pipeline.conf) to the /etc/logstash/conf.d directory on the Logstash host.
  2. Run the naming service using the following command:

    orbd -ORBInitialPort 1050

     
    The orbd executable is located in the <JDK_INSTALL_DIR>/bin directory.

  3. Run the CORBA server using the following command:

    java -jar HelloServer.jar -ORBInitialPort 1050 -ORBInitialHost localhost
     
    HelloServer ready and waiting ...
  4. Navigate to the /etc/logstash directory and add the following lines to the pipelines.yml file:

    - pipeline.id: corba_pipeline
      path.config: "/etc/logstash/conf.d/logstash-corba-input-http-pipeline.conf"
  5. Navigate to the /usr/share/logstash/bin directory and run Logstash using the following command: 
    ./logstash --path.settings /etc/logstash/
  6. Start the CORBA client using the following command:

    java -jar HelloClient_Logstash.jar -ORBInitialPort 1050 -ORBInitialHost localhost