Sending data to external consumers

BMC Discovery can send data to an external consumer such as Apache Kafka, so that you can perform additional bespoke processing according to your business requirements.

Currently, BMC Discovery sends Directly Discovered Data (DDD) to Apache Kafka. DDD is the raw output of the commands and patterns that BMC Discovery runs on discovery targets. DDD is primarily intended for consumption in BMC Discovery, where it is used to build the inferred model of your IT environment. The inferred model itself (for example, Software Instance nodes) is not sent.

After you configure the Apache Kafka connection, DDD from all scans, including that from appliances consolidating to the local appliance, is automatically sent to Kafka after the scan of the endpoint is completed.

Related topics

Model summary

Directly discovered data

Inferred model

Apache Kafka

To send data to Apache Kafka

To stream data to Apache Kafka (the consumer), you must specify the following:

  • Kafka brokers, which are the connection points to the Apache Kafka system.
  • A topic, to which the discovery data is published.
  • The type of authentication to use.

To send data to Apache Kafka:

  1. From the main menu, click the Administration icon, and then select External Consumers.

  2. Click + Add to add a consumer, and then enter the following details:

    1. Specify the broker or brokers, as a comma-separated list of host:port pairs.
    2. (Optional) Provide a description of the consumer. 
    3. Specify the topic under which to publish data. For example, discovery. If the topic is not defined in Kafka, and your Kafka system has auto.create.topics.enable set to true, the topic is created automatically when the scan starts.
    4. Specify one of the following authentication and security (SASL) options:
      • No Authentication—this option is insecure and is recommended only for lab testing.
      • PLAIN—this option requires a username and password.
      • SSL—this option requires a certificate and, optionally, a supplied certificate authority.
      • SSL and PLAIN—this option requires a certificate (and, optionally, a supplied certificate authority), plus a username and password.
    5. For PLAIN authentication, enter a username and the corresponding password for access to Kafka.
    6. (Optional) To use SSL connections, select the SSL check box, and provide a Certificate file. You might also have to provide a CA file.
    7. Click Choose File and use the file browser to find the file.
    8. Select the file, and click Open to upload it to BMC Discovery.
  3. When you have entered the connection parameters, click Test Connection to test the connection to Kafka. 
  4. Click Save to save the configuration.
    The External Consumers page is refreshed to show a summary of the new connection.

After the connection to Apache Kafka has been set up, data is automatically sent to Kafka after the scan of an endpoint is complete. You can view the data as JSON messages on the configured topic.

The default maximum Kafka message size is 1 MB. The discovered data is therefore split into chunks, which you must recombine into a single file, as described in To combine Apache Kafka messages.
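The following is a minimal consumer sketch, again using the kafka-python client, that prints each chunk as it arrives. The broker host and topic name are placeholders, and the chunk structure is described under Message chunk structure below:

  # Sketch: read DDD chunks from the configured topic.
  import json
  from kafka import KafkaConsumer

  consumer = KafkaConsumer(
      "discovery",                          # placeholder topic name
      bootstrap_servers=["kafka1.example.com:9092"],
      auto_offset_reset="earliest",
      value_deserializer=lambda v: json.loads(v.decode("utf-8")),
  )

  for message in consumer:
      chunk = message.value   # {"index": ..., "total_msg_count": ..., "data": ...}
      print(chunk["index"], chunk["total_msg_count"], len(chunk["data"]))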

To manage connections

The External Consumers window provides a list of configured connections to which discovery data is sent. Each one has an Action menu with the following options:

  • Test Connection—tests the connection. If the connection is working, it displays a Connection successful banner. If the connection is not working, it displays a Connection failed banner.
  • Delete—deletes the connection.
  • Edit—displays the Edit window from which you can edit any of the connection parameters.

To combine Apache Kafka messages

After BMC Discovery completes the scan of an endpoint, it generates one or more messages. BMC Discovery splits each message into multiple chunks to keep within the maximum message size permitted by Apache Kafka. When you receive messages from Apache Kafka, you must recombine the chunks to form the original discovery data. Each message has a message ID, and each chunk in that message contains the same message ID.

Message chunk structure

The message chunk structure is JSON of the form:

{"index": 0, "total_msg_count": 2, "data": "chunk data"}

Where:

  • index—the zero-based index of the chunk within the message
  • total_msg_count—the total number of chunks in the message
  • data—the chunk's fragment of the discovered data
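
For example, a message that is split into two chunks arrives as two Kafka messages of the following form (the data values here are placeholders, not real discovery output):

{"index": 0, "total_msg_count": 2, "data": "first half of the JSON string"}
{"index": 1, "total_msg_count": 2, "data": "second half of the JSON string"}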

To combine message chunks

The following procedure assumes that you have subscribed to the Apache Kafka topic you are using for BMC Discovery and have received all of the chunks that make up a message.

  1. Collect all the chunks with the same message ID.
  2. Sort the chunks by index, from smallest to largest.
  3. Concatenate the data property of each chunk, in index order, into a single string.
  4. Deserialize the JSON string into a dictionary/map object.
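
The following is a minimal sketch of steps 1 to 4 in Python. It assumes that the chunks sharing one message ID have already been collected into a list (for example, by grouping consumed messages on their message ID); the function name and the sample chunks are illustrative only:

  # Sketch: recombine the chunks of a single BMC Discovery message.
  import json

  def combine_chunks(chunks):
      # Step 2: sort the chunks by index, smallest to largest.
      ordered = sorted(chunks, key=lambda c: c["index"])

      # Check that every chunk of the message is present.
      expected = ordered[0]["total_msg_count"]
      if len(ordered) != expected:
          raise ValueError(f"expected {expected} chunks, got {len(ordered)}")

      # Step 3: concatenate the data property of each chunk, in index
      # order, into one string.
      payload = "".join(c["data"] for c in ordered)

      # Step 4: deserialize the JSON string into a dictionary.
      return json.loads(payload)

  # Illustrative usage with placeholder chunks:
  chunks = [
      {"index": 1, "total_msg_count": 2, "data": 'scovered data"}'},
      {"index": 0, "total_msg_count": 2, "data": '{"example": "di'},
  ]
  print(combine_chunks(chunks))   # {'example': 'discovered data'}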