Deriving insights from logs

Get to the root cause of an issue and ensure system uptime by using BMC Helix Log Analytics. Take advantage of out-of-the-box options such as queries, time ranges, and fields to quickly find information about an issue in the log details.

BMC Helix Log Analytics uses the OpenSearch platform for processing and analyzing logs. For more information, see the OpenSearch documentation.

The following video (2:53) illustrates how to analyze and visualize logs:


https://youtu.be/fggAxALVs0w

Search and analyze logs on the Explorer > Discover tab. The following figure highlights the options on the Discover page that help you get to the root cause of an issue:


Example

You observe multiple log entries with the 501 status in your Single Sign-On application logs, which means that the service cannot process requests, and you want to keep the service available at all times. Use the search options to narrow down the results and find the root cause. For quick reference, add the filtered logs to dashboards.

Permission to view logs

Users can view the log records according to their permissions. As an administrator, use user groups while creating or editing collection policies to restrict access to log records.

The following scenario describes the log collection behavior if you change the user group association in collection policies.

Scenario: What happens if associated user groups are changed in collection policies?

Sarah is an administrator in Apex Global, which uses BMC Helix Log Analytics to collect and analyze logs. Sarah has created the Operators and Administrators user groups to implement role-based access in the system. She has created the following collection policies to collect log data:

  • Common collection policy: The Operators and Administrators groups are associated with this policy. The data collected from this policy is visible to users belonging to both groups.
  • Restricted collection policy: The Administrators group is associated with this policy. The data collected from this policy is visible only to the Administrators group.

Sarah decides to remove the association of the Operators group from the Common collection policy. In this scenario, these actions happen:

  • After the next data ingestion following the group association change, the Operators group can no longer see new data collected by the Common collection policy. However, they can still see the data that was collected up to that ingestion.
  • The Administrators group continues to see the data; there is no change in behavior for this group.

For more information on collection policies, see Creating collection policies.

Anomalous log severity

For anomalous logs, the log records also display the severity of the anomaly.

BMC Helix Log Analytics automatically assigns severity to anomalous log messages depending on the keywords that the message contains.

For example, if a message contains the words error or critical, the anomaly is assigned the High severity.

The following list shows each severity level and its associated keywords:

  • High: error, critical, crash, failure, fatal, exception, outage, deadlock, segfault, corrupt, unreachable
  • Medium: warn, warning, timeout, degraded, inconsistency, stale, inefficiency, bottleneck, throttle
  • Low: all anomalous log messages that do not contain the keywords listed for the High and Medium severities
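The keyword-based assignment described above can be sketched in a few lines of Python. This is an illustrative helper, not the product's actual implementation; the keyword sets are taken from the list above.

```python
# Keywords associated with each severity level (from the list above).
HIGH = {"error", "critical", "crash", "failure", "fatal", "exception",
        "outage", "deadlock", "segfault", "corrupt", "unreachable"}
MEDIUM = {"warn", "warning", "timeout", "degraded", "inconsistency",
          "stale", "inefficiency", "bottleneck", "throttle"}

def classify_severity(message: str) -> str:
    """Return High, Medium, or Low for an anomalous log message,
    based on the keywords that the message contains."""
    words = set(message.lower().replace(".", " ").split())
    if words & HIGH:
        return "High"
    if words & MEDIUM:
        return "Medium"
    return "Low"
```

For example, `classify_severity("Fatal exception in worker thread")` returns High because the message contains two High-severity keywords.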
For more information on anomalous logs, see Detecting anomalies from logs.


Index pattern overview

You use an index pattern to select the data to analyze and to define the properties of the log fields. In BMC Helix Log Analytics, an index pattern is created for you by default, and all logs are collected under it. You can neither delete this index pattern nor create a new one.

After archiving is enabled, a new index pattern in the format logarc_* is added to the Discover page. All logs collected after archiving is enabled for your tenant are shown in the new index pattern, while data collected earlier continues to appear in the previous index pattern. Archived and restored data is available only in the new index pattern. Therefore, to analyze logs collected after archiving is enabled, use the logarc_* index pattern.

After an anomaly or rare pattern is detected in logs, it is reported in a new index pattern whose format is logml-*.

For more information, see Detecting anomalies from logs.


Options to search for specific information

Use the following options to search for a specific alphanumeric string:

  • Search field: Enter the string that you are looking for in a field. The format is: field_name:"search string". For example, to search for all logs where status 501 is reported, enter status:501.
  • Filter: Click Add Filter and select a field. Operators are available as per the data type of the selected field. Enter the string and save the filter. For example, loglevel.keyword is error.
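Because BMC Helix Log Analytics runs on OpenSearch, a search-field query such as status:501 corresponds to an OpenSearch query_string clause, and the time picker corresponds to a range filter. The following sketch builds such a request body; the index and field names are illustrative, and this is an approximation of what the Discover page sends, not its exact request.

```python
import json

def build_discover_search(query: str, last_minutes: int = 15) -> dict:
    """Build an OpenSearch request body that combines a query-string
    search with a relative time-range filter on @timestamp."""
    return {
        "query": {
            "bool": {
                "must": [{"query_string": {"query": query}}],
                "filter": [
                    {"range": {"@timestamp": {"gte": f"now-{last_minutes}m"}}}
                ],
            }
        }
    }

# Find all logs from the last 15 minutes where status 501 is reported.
body = build_discover_search("status:501")
print(json.dumps(body, indent=2))
```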


Important

  • To obtain the search results, search with the complete keywords. If you search with partial keywords, search results are not displayed. For example, if you are searching for the Apache Logs bmc_tag, add the following search criteria: bmc_tag contains Apache Logs.
  • The Search field is case-sensitive. To obtain the search results, ensure that you search with the right case.

Sample logs:

Thread-MainThread - Starting log processing service...
Thread-MainThread - Initializing tenant configurations
Thread-MainThread - Fetch properties for tenant=bmc
Thread-MainThread - Initialized tenant configurations for tenant=bmc
Thread-MainThread - Fetch properties for tenant=hp
Thread-MainThread - Initialized tenant configurations for tenant=hp
Thread-MainThread - Consumed CPU utilization exceeded threshold.
Thread-MainThread - Tenant configurations Initialization done.
Thread-KafkaConsumer - Starting kafka consumer alert_kakfa_consumer for topic=alert
Thread-KafkaConsumer - Initializing kafka consumer...
Thread-KafkaConsumer - Kafka consumer alert_kakfa_consumer started.
Thread-MainThread - [ProcessStart=The process of log processing service started.
Thread-MainThread - [ProcessStart=The job process of log processing service has started.
Thread-MainThread - [ProcessStart=The job process of log processing service has finished.
Thread-MainThread - [ProcessStart=The job process of log processing service has terminated.

The following examples show queries against the sample logs, with a description of each query and its search result:

Query: message:Thread-MainThread
Description: All records where the message field contains "Thread-MainThread".
Search result:

Thread-MainThread - Starting log processing service...
Thread-MainThread - Initializing tenant configurations
Thread-MainThread - Fetch properties for tenant=bmc
Thread-MainThread - Initialized tenant configurations for tenant=bmc
Thread-MainThread - Fetch properties for tenant=hp
Thread-MainThread - Consumed CPU utilization exceeded threshold.
Thread-MainThread - Initialized tenant configurations for tenant=hp
Thread-MainThread - Tenant configurations Initialization done.
Thread-MainThread - [ProcessStart=The process of log processing service started.
Thread-MainThread - [ProcessStart=The job process of log processing service has started.
Thread-MainThread - [ProcessStart=The job process of log processing service has finished.
Thread-MainThread - [ProcessStart=The job process of log processing service has terminated.

Query: message:Thread
Description: All records where the message field contains "Thread".
Search result:

Thread-MainThread - Starting log processing service...
Thread-MainThread - Initializing tenant configurations
Thread-MainThread - Fetch properties for tenant=bmc
Thread-MainThread - Initialized tenant configurations for tenant=bmc
Thread-MainThread - Fetch properties for tenant=hp
Thread-MainThread - Initialized tenant configurations for tenant=hp
Thread-MainThread - Consumed CPU utilization exceeded threshold.
Thread-MainThread - Tenant configurations Initialization done.
Thread-KafkaConsumer - Starting kafka consumer alert_kakfa_consumer for topic=alert
Thread-KafkaConsumer - Initializing kafka consumer...
Thread-KafkaConsumer - Kafka consumer alert_kakfa_consumer started.
Thread-MainThread - [ProcessStart=The process of log processing service started.
Thread-MainThread - [ProcessStart=The job process of log processing service has started.
Thread-MainThread - [ProcessStart=The job process of log processing service has finished.
Thread-MainThread - [ProcessStart=The job process of log processing service has terminated.

Query: message:Thread*
Description: All records where the message field contains the word Thread followed by other characters (excluding spaces).
Search result:

Thread-MainThread - Starting log processing service...
Thread-MainThread - Initializing tenant configurations
Thread-MainThread - Fetch properties for tenant=bmc
Thread-MainThread - Initialized tenant configurations for tenant=bmc
Thread-MainThread - Fetch properties for tenant=hp
Thread-MainThread - Initialized tenant configurations for tenant=hp
Thread-MainThread - Consumed CPU utilization exceeded threshold.
Thread-MainThread - Tenant configurations Initialization done.
Thread-KafkaConsumer - Starting kafka consumer alert_kakfa_consumer for topic=alert
Thread-KafkaConsumer - Initializing kafka consumer...
Thread-KafkaConsumer - Kafka consumer alert_kakfa_consumer started.
Thread-MainThread - [ProcessStart=The process of log processing service started.
Thread-MainThread - [ProcessStart=The job process of log processing service has started.
Thread-MainThread - [ProcessStart=The job process of log processing service has finished.
Thread-MainThread - [ProcessStart=The job process of log processing service has terminated.

Query: message:Starting kafka consumer
Description: All records where the message field contains any of the following words: Starting, kafka, or consumer. Here, a space is treated as the OR operator in the search.
Search result:

Thread-MainThread - Starting log processing service.

Thread-KafkaConsumer - Starting kafka consumer alert_kakfa_consumer for topic=alert.

Thread-KafkaConsumer - Initializing kafka consumer...

Thread-KafkaConsumer - Kafka consumer alert_kakfa_consumer started.

Query: message:Starting Kafka con*
Description: All records where the message field contains any of the following: Starting, Kafka, or a word that starts with con.
Search result:

Thread-MainThread - Starting log processing service.

Thread-KafkaConsumer - Starting kafka consumer alert_kakfa_consumer for topic=alert.

Thread-KafkaConsumer - Initializing kafka consumer...

Thread-KafkaConsumer - Kafka consumer alert_kakfa_consumer started.

Thread-MainThread - Consumed CPU utilization exceeded threshold.

Query: message:"Starting kafka consumer"
Description: All records where the message field contains the complete phrase "Starting kafka consumer".
Search result:

Thread-KafkaConsumer - Starting kafka consumer alert_kakfa_consumer for topic=alert.

Query: message:"Starting kafka con*"
Description: All records where the message field contains the complete string "Starting kafka con*". Within quotation marks, the * character is not treated as a wildcard.
Search result: No results.

Query: message:*ProcessStart=The AND message:process AND message:started*
Description: All records where the message field contains all the following strings in a log message:

  • *ProcessStart=The
  • process
  • started*

Here, the AND operator filters log messages that contain every string in the query.
Search result:

Thread-MainThread - [ProcessStart=The process of log processing service started.
Thread-MainThread - [ProcessStart=The job process of log processing service has started.
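The space-as-OR, wildcard, and quoted-phrase behaviors shown in the examples above can be approximated in a few lines of Python for experimentation. This only loosely mimics the OpenSearch query parser (for example, it lowercases everything, whereas keyword fields are case-sensitive) and is not a substitute for it.

```python
import re

SAMPLE_LOGS = [
    "Thread-MainThread - Starting log processing service...",
    "Thread-KafkaConsumer - Starting kafka consumer alert_kakfa_consumer for topic=alert",
    "Thread-KafkaConsumer - Kafka consumer alert_kakfa_consumer started.",
    "Thread-MainThread - Consumed CPU utilization exceeded threshold.",
]

def matches(message: str, query: str) -> bool:
    """Approximate Discover search semantics: a quoted query is an exact
    phrase; unquoted words are ORed; * matches any run of non-space
    characters. Matching is case-insensitive in this approximation."""
    msg = message.lower()
    phrase = re.fullmatch(r'"(.*)"', query.strip())
    if phrase:
        return phrase.group(1).lower() in msg
    for word in query.lower().split():
        pattern = re.escape(word).replace(r"\*", r"\S*")
        if re.search(pattern, msg):
            return True
    return False

# The quoted phrase matches only one record in the sample logs,
# while the unquoted form would match several via the OR semantics.
hits = [m for m in SAMPLE_LOGS if matches(m, '"Starting kafka consumer"')]
```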


Options to filter search results by time range and date

You can narrow down your search results by setting the time range in the following ways:

  • Relative: Specify the last minutes, hours, or days for which you want search results; for example, the last 15 minutes or the last 7 days.
  • Absolute: Set a specific start and end date and time; for example, from Jul 18, 2022 18:00 hours to Jul 19, 2022 18:00 hours.

Supported time formats

The log generation time is saved in the @timestamp field and must be in the ISO 8601 Zulu format; for example, 2022-02-20T12:21:32.756Z. If the log generation time is specified in any other format, it is saved in the @@timestamp field, and the log collection time is saved in the @timestamp field instead. The log collection time is in the Greenwich Mean Time (GMT) time zone.

If you collect logs by using an external agent such as Logstash or Filebeat, the Epoch time format is supported. However, if you collect logs by using the Docker, Windows, or Linux connector, the Epoch time format is not supported.
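If you control the log shipper, you can normalize epoch timestamps to the ISO 8601 Zulu format before sending. The following is a generic sketch, not specific to any connector:

```python
from datetime import datetime, timezone

def to_iso8601_zulu(epoch_seconds: float) -> str:
    """Convert an epoch timestamp (seconds) to ISO 8601 Zulu format
    with millisecond precision, e.g. 2022-02-20T12:21:32.756Z."""
    dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{dt.microsecond // 1000:03d}Z"

print(to_iso8601_zulu(1645359692.756))
```

Timestamps in this format are recognized as the log generation time and stored in the @timestamp field.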


Fields available to filter logs

The fields identified in the logs are displayed in the Available fields section. Click a field and select a value to filter logs by that field. For example, click the ipAddress field and select an IP address to find all logs where ipAddress has the selected value. To add a field as a column in the search results, hover over the field name and click the + symbol.

Tip

If you see a '?' sign in place of a field's data type icon, refresh the index on the index pattern page (Stack Management > Index pattern > index pattern name). Alternatively, wait about 5 minutes for the index to refresh automatically, and then refresh your browser.

After using these options to analyze logs, you see the filtered records that require immediate attention:


To save the search

Save the search query that you created by using the search field, available fields, and time period. Later, open the saved search to run it again and get the same results.

  1. Click Save.
  2. Enter a name.
  3. To access the saved search, click Open.


To add the saved search to a visualization

  1. Select Visualize > Create new visualization.
  2. Select the type of visualization that you want to use.
    For example, a line chart.
  3. Select the search that you have saved.
  4. Apply additional filters to the data and save the visualization.
  5. To add the visualization to a dashboard:
    1. Click Dashboard.
    2. Create a new dashboard or edit an existing one.
    3. Click Add and select the visualization.



Learn more

Read the following blog to learn how logs help you understand the health of your environment, identify issues, and track their root cause: Observability with logs to accelerate MTTR.
