Configuring monitoring of the Apache Kafka environment


Monitor and gather performance and metrics information about your environment, and then use this information to resolve issues proactively. After installing the PATROL Agent and knowledge modules (KMs), create a monitor policy. After the policy is enabled, data collection starts. For more information about monitor policies, see Defining monitor policies.

 

To configure monitoring of the Apache Kafka environment

  1. Perform one of the following actions:
    • In BMC Helix Operations Management, select Configuration > Monitor Policies > Create.
    • In TrueSight Operations Management, select Configuration > Infrastructure Policies > Create Policy.
  2. Click Add Monitoring Configuration.
  3. From the Monitor Solution list, select Kafka.
    The system automatically sets the values in the Version (latest), Monitor Profile, and Monitor type fields.
  4. In the Environment Configurations section, click Add.
  5. Add the monitoring configuration by completing the fields described below:
Environment name

Enter a unique name for the environment where you discover Apache Kafka instances.

Use alphanumeric characters in the name (A-Z, a-z, 0-9).

Do not use any of the following characters:
![@#$%?{}^\/|+=&*();']

Bootstrap ServiceName / Server(s) (Server1:port,Server2:port)

Enter the bootstrap server names in the format: servername:port.

Separate multiple server names by a comma.

For example, server1:9092,server2:9092.
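
A minimal connectivity sketch, assuming the kafka-clients Java library is available on the classpath; the server names, the class name, and the use of the AdminClient API are illustrative placeholders and are not part of the KM:

    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class BootstrapCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Comma-separated list in the same servername:port format as this field
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "server1:9092,server2:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // Listing the cluster nodes confirms that the bootstrap servers are reachable
                admin.describeCluster().nodes().get().forEach(node ->
                        System.out.println("Broker: " + node.host() + ":" + node.port()));
            }
        }
    }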

User-defined Cluster Name - Device

Enter the name that you want to assign to the Apache Kafka cluster. This name appears in the monitor policy as the device name.

For example, if you enter ApexKafka-Europe as the cluster name, all events are associated with the ApexKafka-Europe device.

Cluster Device Mapping

Expand the Cluster Device Mapping field to define the mapping between a cluster name and a device name for monitoring Apache Kafka:

  • Use Bootstrap Servicename—Set the cluster device name based on the server name that you enter in the Bootstrap ServiceName / Server(s) (Server1:port,Server2:port) field. The name is taken from the host name of the first server in the list, excluding the port number.
    For example, if the Bootstrap ServiceName / Server(s) (Server1:port,Server2:port) field contains kafka-prod-01.service.cloud:9092, the device is created by using the kafka-prod-01.service.cloud device name.
  • User-defined Cluster Name—Enter a cluster name in the User-defined Cluster Name - Device field. The name that you enter becomes the cluster name, and the device is created by using this name.
  • DNS Lookup—Use this option when you enter a short host name in the User-defined Cluster Name - Device field. A DNS query resolves the short name to its Fully Qualified Domain Name (FQDN), and the device is created by using the resolved FQDN (a resolution sketch follows this entry).
    For example, if you enter kafka-node1 in the User-defined Cluster Name - Device field, it resolves to kafka-node1.prod.company.com, and the device is created by using the kafka-node1.prod.company.com name.
  • None—The device is not created. This is the default option.
Important: After modifying the cluster device mapping configuration, restart the PATROL Agent to apply the changes.
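
A minimal sketch of how a short host name can resolve to an FQDN, using the standard Java DNS resolution API; the host name is a placeholder, and the KM's internal lookup mechanism is not shown here:

    import java.net.InetAddress;

    public class FqdnLookup {
        public static void main(String[] args) throws Exception {
            // Placeholder short host name, as entered in the User-defined Cluster Name - Device field
            String shortName = "kafka-node1";
            InetAddress address = InetAddress.getByName(shortName);
            // getCanonicalHostName() queries DNS for the fully qualified name,
            // for example kafka-node1.prod.company.com
            System.out.println(address.getCanonicalHostName());
        }
    }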

Broker Device

Expand the Broker Device field to enable the device mapping of the nodes:

  • Host name: The device is created by using the host name returned by the API.
  • DNS lookup: The device is created by resolving the DNS name.
  • None: The device is not created.

When you select one of the device mapping options, all the broker instances are discovered under the broker host device.

Endpoint Details

Select one of the following methods for metric collection:

  • JMX Endpoint
  • Aiven Endpoint
  • Metric Endpoint
JMX port to collect metrics

Enter the JMX port number. The JMX port is used to collect the performance metrics.

For example, 9999.

Important: Make sure that the JMX port number you use is open, available, and the same for all brokers.

Remote JMX authentication

Expand the Remote JMX authentication field to configure authentication details to connect to JMX:

  • User name: Enter the user name to connect to JMX.
  • Password: Enter the password to connect to JMX.
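
A minimal remote JMX connection sketch using the standard javax.management API; the broker host name, port, and credentials are placeholders corresponding to the JMX port and Remote JMX authentication fields above, and the KM's own collection logic is not represented:

    import java.util.HashMap;
    import java.util.Map;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder broker host and JMX port; use the port entered in the policy, for example 9999
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://kafka-broker-01:9999/jmxrmi");

            // Credentials matching the Remote JMX authentication fields
            Map<String, Object> env = new HashMap<>();
            env.put(JMXConnector.CREDENTIALS, new String[] {"jmxUser", "jmxPassword"});

            try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
                MBeanServerConnection connection = connector.getMBeanServerConnection();
                // A successful connection with a non-zero MBean count confirms the port and credentials
                System.out.println("MBean count: " + connection.getMBeanCount());
            }
        }
    }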

Metric Endpoint port

Enter the metric endpoint port number. The default port number is 11001.

User Name

Enter the user name to connect to the metric endpoint.

Password

Enter the password to connect to the metric endpoint.
CA Cert file (pem)

The CA Cert file (pem) field specifies the path to the Certificate Authority (CA) certificate file, in PEM format, that is used to establish a trusted TLS connection between the Apache Kafka KM and the AWS Managed Streaming for Apache Kafka (MSK) metric endpoint. The CA certificate validates the identity of the metric endpoint when broker metrics are collected through metric endpoints.

To configure secure access to metric endpoints, perform the following steps:

  1. Download the Apache Kafka Certificate Authority (CA) certificate provided by AWS Managed Streaming for Apache Kafka (MSK) and save it in PEM format. In the Apache Kafka KM configuration, specify the path to the CA certificate file for the metric endpoints option.

  2. Using the AWS MSK access key, access certificate, and the downloaded CA certificate, generate the following files:

    • client.truststore.jks (trust store file)
    • client.keystore.p12 (key store file)
  3. After generating the files, specify the path to the client.truststore.jks file in the metric endpoints configuration.
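
The following sketch shows one way to verify trusted TLS access to a broker metric endpoint from Java by pointing the default trust settings at the generated trust store; the broker host name, the /metrics path, the trust store path, and the password are assumptions for illustration only:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class MetricEndpointCheck {
        public static void main(String[] args) throws Exception {
            // Trust store generated from the downloaded CA certificate (paths and password are placeholders)
            System.setProperty("javax.net.ssl.trustStore", "/opt/apachekafka/jks/client.truststore.jks");
            System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

            // Placeholder broker name, the default metric endpoint port 11001, and an assumed /metrics path
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://b-1.mycluster.kafka.eu-west-1.amazonaws.com:11001/metrics"))
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // HTTP 200 indicates that the CA certificate and trust store allow a trusted TLS connection
            System.out.println("Status: " + response.statusCode());
        }
    }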

Aiven Endpoint with port

Enter the Aiven endpoint port number. Ensure that Jolokia integration is enabled on Aiven to expose JMX metrics.

For more information, refer to Aiven Documentation: https://aiven.io/docs/platform/howto/integrations/access-jmx-metrics-jolokia.

Aiven User name

Enter the Jolokia endpoint username to connect to Aiven.

Aiven Password

Enter the Jolokia endpoint password to connect to Aiven.
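
A hedged sketch of reading a single JMX attribute through the standard Jolokia REST interface; the service host name, port, credentials, and queried MBean are placeholders, and a trust store containing the Aiven CA certificate (see the next field) may also be required for the HTTPS connection:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class JolokiaCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder Aiven host, Jolokia port, and credentials from the Aiven fields above
            String endpoint = "https://my-kafka.aivencloud.com:6733/jolokia/read/java.lang:type=Runtime/Uptime";
            String auth = Base64.getEncoder().encodeToString(
                    "jolokiaUser:jolokiaPassword".getBytes(StandardCharsets.UTF_8));

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(endpoint))
                    .header("Authorization", "Basic " + auth)
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // The JSON response contains the requested attribute if the Jolokia integration is enabled
            System.out.println(response.body());
        }
    }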

CA Cert file (pem)

Download the Kafka CA certificate from Aiven and specify the path to the certificate file in this field.

For Aiven Kafka, generate the trust store file (client.truststore.jks) and key store file (client.keystore.p12) by using the access key, access certificate, and CA certificate, and then specify the path to the trust store file.

For more information, refer to Aiven Documentation: https://aiven.io/docs/products/kafka/howto/keystore-truststore.

SASL authentication

Expand the SASL authentication field to configure SASL authentication:

  • Use SASL—Select the checkbox to enable SASL authentication.
  • SASL authentication type—Select one of the following SASL authentication types:
    • SASL_PLAINTEXT—Use this option to authenticate with Apache Kafka by using a user name and password sent in plain text.
    • SASL_SSL—Use this option to authenticate with Apache Kafka by using a user name and password. When you select SASL_SSL, you must provide the credentials and configure SSL Certificate Details to establish a secure, encrypted connection (a combined configuration sketch follows the SSL Certificate Details entry).
  • SASL User Name—Enter the user name to configure the SASL connection.
  • SASL Password—Enter the password for SASL authentication.
SSL Certificate Details

Expand the SSL Certificate Details field to configure SSL certificate details:

  • Use SSL—Select this checkbox to enable an SSL connection.
  • Trust store file—Enter the path of the trust store file. For example, for Windows, enter C:\apachekafka\jks\truststore.jks and for Linux, enter /opt/apachekafka/jks/truststore.jks.
  • Trust store password—Enter the password for the trust store.
  • Key store file—Enter the path of the key store file. This field is optional for the client and can be used for two-way authentication. For example, for Windows, enter C:\apachekafka\jks\keystore.jks and for Linux, enter /opt/apachekafka/jks/keystore.jks.
  • Key store password—Enter the password for the key store file. This field is optional for the client and is required only if ssl.keystore.location is configured.
  • Key password—Enter the password for the private key in the key store. This field is optional for the client.
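
A combined sketch of the SASL and SSL settings expressed as standard Apache Kafka client properties, assuming the kafka-clients Java library and the PLAIN SASL mechanism; the user names, passwords, and file paths are placeholders, and the KM's internal connection handling may differ:

    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class SaslSslClient {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "server1:9093");

            // SASL authentication fields (user name and password are placeholders)
            props.put("security.protocol", "SASL_SSL");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"saslUser\" password=\"saslPassword\";");

            // SSL Certificate Details fields (paths and passwords are placeholders)
            props.put("ssl.truststore.location", "/opt/apachekafka/jks/truststore.jks");
            props.put("ssl.truststore.password", "truststorePassword");
            props.put("ssl.keystore.location", "/opt/apachekafka/jks/keystore.jks"); // optional, two-way TLS
            props.put("ssl.keystore.password", "keystorePassword");                  // required only with a key store
            props.put("ssl.key.password", "keyPassword");                            // optional

            try (AdminClient admin = AdminClient.create(props)) {
                // A successful describeCluster() call confirms the SASL and SSL settings
                System.out.println("Cluster ID: " + admin.describeCluster().clusterId().get());
            }
        }
    }
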
Filtering Options

Expand the Filtering Options field to configure the Kafka topics and consumer groups that you want to include in or exclude from monitoring:

  • Filter Kafka Topics—Enter the Kafka topics you want to include or exclude from the monitoring. You can enter the exact Kafka topic name or a regular expression matching one or more topics. For example, Metric.*, Events.*|Device.*.
  • Kafka Topics Filter Type—Select the include or exclude option to filter the Kafka topics.
  • Filter Kafka Consumer Groups—Enter the Kafka consumer groups you want to include or exclude from the monitoring. You can enter the exact consumer group name or a regular expression matching one or more consumer groups. For example, Notif.*, Config.*|Deploy.*.
  • Kafka Consumer Groups Filter Type—Select the include or exclude option to filter the Kafka consumer groups.
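
A small sketch of how include and exclude regular expressions of this form can be evaluated, using standard Java regular expressions; the topic names and the full-match semantics are assumptions for illustration only:

    import java.util.List;
    import java.util.regex.Pattern;

    public class TopicFilter {
        public static void main(String[] args) {
            // Filter expression in the same style as the policy field, for example Events.*|Device.*
            Pattern filter = Pattern.compile("Events.*|Device.*");
            boolean include = true;   // corresponds to the include/exclude filter type

            List<String> topics = List.of("Events.audit", "Device.telemetry", "Billing.invoices");
            for (String topic : topics) {
                boolean matches = filter.matcher(topic).matches();
                // With include, only matching topics are kept; with exclude, matching topics are dropped
                if (matches == include) {
                    System.out.println("Monitored topic: " + topic);
                }
            }
        }
    }
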
KM Administration

Expand the KM Administration field to configure Java details:

  • Optional JVM args—Enter JVM arguments for the Java collector.
  • Logging—Select this option to enable PSL and Java debug logging.
Java home

Enter the path of the Java home directory.

For example, if your Java executable exists in the /usr/java/jdk1.8.0_45/jre/bin/java path, enter /usr/java/jdk1.8.0_45/jre as the value in this field.

  6. Click Save.

 

