Collecting logs by using the command line interface


Collect logs from various sources and search them to find relevant information. You can also apply a structure to your unstructured logs to make them easier to analyze.

While BMC Helix Log Analytics provides log collection policies and connectors to collect logs, you can also ingest logs by using other open-source connectors.

Use the command line interface (CLI) to configure these open-source connectors. BMC Helix Log Analytics provides REST APIs to ingest logs. For more information about the REST APIs, see Log-management-endpoints-in-the-REST-API.

You can also use the open-source collectors Filebeat and Logstash to collect logs. However, you must manage these connectors yourself because they are not supported in the BMC Helix Log Analytics connector framework.

The following video (3:32) illustrates the configurations required to collect logs.

https://youtu.be/5dXTTZcvth0

Before you begin

To prepare to configure log collection, perform the following steps:

  • Generate the API key from BMC Helix Portal or BMC Helix Operations Management by performing one of the following steps:
    • Open BMC Helix Portal and do the following:
      1. Go to User access > Users and keys.
      2. In the Access keys tab, click the Actions menu of a key and select Key details.
      3. On the Key details page, click Copy key details.
      4. Paste the key in a text file.
    • Open BMC Helix Operations Management and do the following:
      1. Go to Administration > Repository.
      2. On the Repository page, click Copy API key.
      3. Paste the API key in a text file.
  • Download and install Beats on the computers from which you want to collect logs. A sample Filebeat installation is shown after this list.
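
The following is a minimal sketch of downloading and extracting Filebeat on Linux; the version (7.7.0, matching the agent version in the metadata example later in this topic) and the Elastic download URL are assumptions, so use the version and package type that match your environment:

  # Download and extract Filebeat (version and URL are illustrative)
  curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.0-linux-x86_64.tar.gz
  tar xzvf filebeat-7.7.0-linux-x86_64.tar.gz
  cd filebeat-7.7.0-linux-x86_64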

To configure Logstash

For detailed information about the files used in the configurations, see the Logstash documentation.

  1. Configure Logstash to accept data from Beats
    1. From the Logstash installation folder, open the config\logstash-sample.conf file.
      If you are configuring Logstash by using RPM on Linux operating systems, copy the /etc/logstash/logstash-sample.conf file to the /etc/logstash/conf.d folder and then open it.
    2. In the input plugin, enter the port number that Beats uses to send data to Logstash.

      input {
        beats {
          port => <port number (for example, 5044)>
        }
      }

      Important

      Ensure that the port is open on the computer where Logstash is installed.
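
      For example, on a Linux computer running firewalld, you can open and verify the port as follows; the port number 5044 and the use of firewalld are assumptions:

      # Open the Beats port (illustrative; assumes firewalld)
      sudo firewall-cmd --add-port=5044/tcp --permanent
      sudo firewall-cmd --reload
      # After Logstash starts, confirm that it is listening on the port
      ss -ltn | grep 5044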

  2. Configure Logstash to send the collected logs to the REST endpoint by entering the following details in the output plugin of the config\logstash-sample.conf file.
    In Linux environments, after updating the logstash-sample.conf file, move it to the /etc/logstash/conf.d folder.

    Important
    Use the following format for the API key that is generated from BMC Helix Portal or BMC Helix Operations Management for successful authentication.
    Tenant ID::Access Key::Secret key
    1654181562::36X49Z3E0XPVWXKYUIMJ94HKN7XSKW::8uhbxcEScNKYz1eYhlrjZgcudlwSLJSYFdqZLwjglu9oVj6cQM

    output {
      http {
        url => "https://<Tenant URL provided by BMC>/log-service/api/v1.0/logs"
        http_method => "post"
        content_type => "application/json"
        format => "json_batch"
        retry_failed => false
        http_compression => true
        headers => {
          "Content-Type" => "application/json"
          "Authorization" => "apiKey <API key of tenant>"
        }
      }
    }
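
    For reference, a filled-in output plugin might look like the following sketch; the tenant URL is a placeholder, and the Authorization value reuses the sample API key shown above:

    output {
      http {
        # Placeholder tenant URL; replace with the URL provided by BMC
        url => "https://<mytenant>.bmc.com/log-service/api/v1.0/logs"
        http_method => "post"
        content_type => "application/json"
        format => "json_batch"
        retry_failed => false
        http_compression => true
        headers => {
          "Content-Type" => "application/json"
          # Tenant ID::Access Key::Secret key, as described above
          "Authorization" => "apiKey 1654181562::36X49Z3E0XPVWXKYUIMJ94HKN7XSKW::8uhbxcEScNKYz1eYhlrjZgcudlwSLJSYFdqZLwjglu9oVj6cQM"
        }
      }
    }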
  3. (Optional) Add a structure (a field:value pattern) to the logs by using the grok plugin in the config\logstash-sample.conf file.

    Example:
    input {
      file {
        type => "apachelog"
        path => ["C:/logs/apachelogs.log"]
        start_position => "beginning"
      }
    }
    filter {
      if [type] == "apachelog" {
        grok {
          match => { "message" => '%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)' }
        }
      }
    }
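
    As an illustration, the grok pattern above parses a standard Apache access-log line (the sample line below is invented) into named fields:

    # Sample input line (hypothetical):
    #   10.0.0.1 - frank [10/Oct/2022:13:55:36 -0700] "GET /index.html HTTP/1.1" 200 2326
    # Fields extracted by the pattern (illustrative):
    #   clientip => 10.0.0.1, auth => frank, verb => GET, request => /index.html,
    #   httpversion => 1.1, response => 200, bytes => 2326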
  4. (Optional) If you want to convert the time zone of the collected logs, use the date filter.

    Example:
    date {
      match => ["tmpstamp", "MMM dd, yyyy hh:mm:ss a"]
      target => "@timestamp"
      locale => "en_US"
      timezone => "UTC"
    }
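
    With this filter, a tmpstamp value such as Aug 10, 2022 11:37:14 AM (the timestamp format that appears in the sample application log later in this topic) is parsed and written to the @timestamp field; because timezone is set to UTC, the value is interpreted as UTC:

    # Input (hypothetical):  "tmpstamp"   => "Aug 10, 2022 11:37:14 AM"
    # Result:                "@timestamp" => "2022-08-10T11:37:14.000Z"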
  5. Start Logstash by running the following command:

    bin/logstash

    For example, on Windows, the command is:

    bin/logstash -f config/logstash-sample.conf

    Important

    If you have enabled a firewall in your environment, open the outbound HTTPS port 443.
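
    Before starting Logstash, you can optionally check the pipeline file for syntax errors; --config.test_and_exit is a standard Logstash flag:

    # Validate the configuration without starting the pipeline
    bin/logstash -f config/logstash-sample.conf --config.test_and_exit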

To configure Beats

  1. Configure Beats to communicate with Logstash by updating the filebeat.yml and winlogbeat.yml files, available in the Beats installation folder. Comment out the output.elasticsearch plugin and uncomment the output.logstash plugin.
  2. To send data to Logstash, add the Logstash communication port:
    output.logstash:
      hosts: ["<logstash computer>:<port to communicate with Logstash>"]
  3. In the inputs section of the file, change the value of enabled to true.
  4. Configure log sources by adding their paths in the filebeat.yml and winlogbeat.yml files, and then start Beats.
    - type: log
      enabled: true
      paths:
        - <path of log source. For example, C:\Program Files\Apache\Logs or /var/log/messages>
  5. To ensure that you collect only meaningful logs, use the include_lines setting, as shown in the example below.
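
    For example, Filebeat's standard include_lines setting keeps only the lines that match the listed patterns; the path and patterns below are illustrative:

    - type: log
      enabled: true
      paths:
        - /var/log/messages
      # Keep only error and warning lines (illustrative patterns)
      include_lines: ['^ERR', '^WARN']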

    Note

    The following settings in the .yml files have no effect:

    • Elasticsearch template setting
    • Dashboards (these settings are for Kibana Dashboards)
    • Kibana (these settings are for dashboards loaded via the Kibana API)

To reduce the size of the collected logs

While collecting logs, Logstash and Filebeat add metadata that is saved with the collected logs, which increases the size of the collected logs. Here is an example of an application log collected by using Logstash and Filebeat.

Example
Actual log:  "message": "Aug 10, 2022 11:37:14 AM com.bmc.ola.collection.collector.BaseCollector execute():410 \nINFO: Collector=<application name>Metrics_Collection Metrics_<hostname>.bmc.com, CollectionPollId=48,  2 events read and sent for indexing.",

Collected logs with metadata:

[{
  "@timestamp": "2022-11-21T09:40:23.183Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.7.0"
  },
  "log": {
    "file": {
      "path": "C:\\Program Files\\BMC Software\\<application>\\<foldername>\\station\\collection\\logs\\collection_7.log"
    },
    "flags": [
      "multiline"
    ],
    "offset": 64668
  },
  "message": "Aug 10, 2022 11:37:14 AM com.bmc.ola.collection.collector.BaseCollector execute():410 \nINFO: Collector=ITDA Collection Metrics_Collection Metrics_<hostname>.bmc.com, CollectionPollId=48,  2 events read and sent for indexing.",
  "event.original": "Aug 10, 2022 11:37:14 AM com.bmc.ola.collection.collector.BaseCollector execute():410 \nINFO: Collector=ITDA Collection Metrics_Collection Metrics_<hostname>.bmc.com, CollectionPollId=48,  2 events read and sent for indexing.",
  
  "input": {
    "type": "log"
  },
  "ecs": {
    "version": "1.5.0"
  },
  "host": {
    "mac": [
      "00:50:56:8f:99:5f",
      "00:50:56:8f:cb:11"
    ],
    "hostname": "xxx-xxx-xxx1xx",
    "name": "xxx-xxx-xxx1xx",
    "architecture": "x86_64",
    "os": {
      "platform": "windows",
      "version": "10.0",
      "family": "windows",
      "name": "Windows Server 2019 Standard",
      "kernel": "10.0.17763.3406 (WinBuild.160101.0800)",
      "build": "17763.3406"
    },
    "id": "0000a0a0-0000-00aa-0000-0a0aa0aa00a0",
    "ip": [
      "10.111.11.111",
      "10.000.00.000"
    ]
  },
  "agent": {
    "type": "filebeat",
    "ephemeral_id": "0a000000-a0a0-0a00-000a-aa0a000a000a",
    "hostname": "xxx-xxx-xxx0xx",
    "id": "0000a0aa-a0aa-0a00-0000-0a0000000000",
    "version": "7.7.0"
  }
}]

In this example, the metadata added by Logstash is:

"event.original": "Aug 10, 2022 11:37:14 AM com.bmc.ola.collection.collector.BaseCollector execute():410 \nINFO: Collector=ITDA Collection Metrics_Collection Metrics_<hostname>.bmc.com, CollectionPollId=48,  2 events read and sent for indexing.",

The remaining metadata is added by Filebeat. This metadata increases the size of the collected logs and consumes more storage space than necessary.

Perform the following steps to reduce the size of the collected logs:

  1. In the filebeat.yml file, add # before the metadata fields that you want to prevent from being added to the logs.
  2. To remove the metadata that Logstash adds, in the pipeline.xxx file, enter the following:

    mutate {
      remove_field => ["[event][original]"]
    }
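
    Alternatively, for step 1, instead of commenting out fields in filebeat.yml, you can drop Filebeat metadata with Filebeat's standard drop_fields processor; the field names below are taken from the metadata example above:

    # In filebeat.yml: drop metadata fields before events are sent
    processors:
      - drop_fields:
          fields: ["agent", "ecs", "host.mac"]
          ignore_missing: true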

To verify the log collection

Navigate to BMC Helix Log Analytics > Discover.

Important

A default index pattern is created. You cannot create a new index pattern or delete the existing one.

Troubleshooting log collection

The following table lists possible scenarios that you might run into while collecting logs and the steps that you can perform to troubleshoot the issue:

Scenario: No data in the Discover tab

Possible actions: This issue can occur because of any of the following reasons:

  • Logstash has not received data.
    To verify whether Logstash has received data, add the following to the output plugin in the Logstash logstash-sample.conf file:
    stdout { codec => json }
    If no data is received, check the Beats configuration and the network connectivity of the Logstash computer.
  • Logstash is unable to connect to BMC Helix Log Analytics.
    To verify, check whether the following error is present in the Logstash logs: Could not fetch URL.
    To resolve this issue, in the output plugin of the logstash-sample.conf file, verify the following:
    • The API key is correct.
    • The URL to connect to BMC Helix Log Analytics is correct.

Scenario: UI taking time to load

Possible actions: There is a known issue in Kibana where the UI sometimes takes a few seconds to load.

Scenario: Error code 422

Possible actions: If you get error code 422 in the Logstash logs, you have exceeded the daily limit for ingesting data. For more information, see BMC Helix Subscriber Information.

Scenario: Unable to view ingested data, and the field type is not shown in the Discover tab

Possible actions: Go to Stack Management > Index Pattern > default index pattern > Refresh.
