Collecting logs by using the command line interface
Before you begin
To prepare to configure log collection
- Copy the API key of your BMC Helix Operations Management tenant and paste it in a text file. In BMC Helix Operations Management, go to Administration > Repository and click Copy API key.
- Download and install Beats on the computers from which you want to collect logs.
To configure Logstash
For detailed information about the files used in the configurations, see Logstash documentation.
- Configure Logstash to accept data from Beats
- From the Logstash installation folder, open the config\logstash-sample.conf file.
  If you are configuring Logstash by using RPM on a Linux operating system, copy the /etc/logstash/logstash-sample.conf file to the /etc/logstash/conf.d folder and then open it.
- In the input plugin, enter the port number on which Logstash listens for data from Beats:
input {
  beats {
    port => <port number (for example, 5044)>
  }
}
- Configure Logstash to send the collected logs to the REST endpoint by entering the following details in the output plugin of the config\logstash-sample.conf file.
  In Linux environments, after updating the logstash-sample.conf file, move it to the /etc/logstash/conf.d folder.
output {
  http {
    url => "https://<Tenant URL provided by BMC>/log-service/api/v1.0/logs"
    http_method => "post"
    content_type => "application/json"
    format => "json_batch"
    retry_failed => false
    http_compression => true
    headers => {
      "Content-Type" => "application/json"
      "Authorization" => "apiKey <API key of tenant>"
    }
  }
}
- (Optional) Add a structure to the logs (field:value pattern) by using the grok plugin in the config\logstash-sample.conf file.
- (Optional) If you want to convert the time zone of the collected logs, use the date filter.
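As a sketch of these optional filters, the following fragment parses a hypothetical syslog-style log line into fields with the grok plugin and normalizes its timestamp with the date filter; the match pattern and time formats are illustrative and must be adapted to your log format:

```
filter {
  grok {
    # Parse a syslog-style line into structured fields (pattern is illustrative)
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:host} %{GREEDYDATA:log_message}" }
  }
  date {
    # Convert the extracted timestamp, assuming the source logs are in UTC
    match => ["timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
    timezone => "UTC"
  }
}
```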
- Start Logstash by running the following command:
  bin/logstash
  For example, on Windows, the command would be:
  bin/logstash -f config/logstash-sample.conf
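Before starting Logstash, you can optionally validate the pipeline file; the --config.test_and_exit flag parses the configuration and reports errors without starting the pipeline (the file path shown assumes a default installation):

```
# Validate the pipeline configuration without starting Logstash
bin/logstash -f config/logstash-sample.conf --config.test_and_exit
```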
To configure Beats
- Configure Beats to communicate with Logstash by updating the filebeat.yml and winlogbeat.yml files, available in the Beats installation folder. Comment out the output.elasticsearch plugin and uncomment the output.logstash plugin.
- To send data to Logstash, add the Logstash communication port:
output.logstash:
  hosts: ["<logstash computer>:<port to communicate with Logstash>"]
- In the input section, change the value of enabled to true.
- Configure log sources by adding their paths to the filebeat.yml and winlogbeat.yml files, and then start Beats.
  type: log
  enabled: true
  paths:
    - <path of log source. For example, C:\Program Files\Apache\Logs or /var/log/messages>
  To ensure that you collect only meaningful logs, use the include_lines option.
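The Beats settings above can be sketched as a minimal filebeat.yml fragment; the host name, port, log path, and include_lines patterns below are placeholder values that you must replace for your environment:

```yaml
# Minimal filebeat.yml sketch (placeholder values; adjust for your environment)
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/messages
    # Optional: collect only lines that match these patterns
    include_lines: ['ERR', 'WARN']

# Send events to Logstash instead of Elasticsearch
output.logstash:
  hosts: ["logstash-host.example.com:5044"]
```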
To reduce the size of the collected logs
While collecting logs, Logstash and Filebeat add metadata that is saved with the collected logs, which increases their size. Here is an example of an application log collected by using Logstash and Filebeat.
In this example, the metadata added by Logstash is:
"event.original": "Aug 10, 2022 11:37:14 AM com.bmc.ola.collection.collector.BaseCollector execute():410 \nINFO: Collector=ITDA Collection Metrics_Collection Metrics_<hostname>.bmc.com, CollectionPollId=48, 2 events read and sent for indexing.",
The remaining metadata is added by Filebeat. This metadata increases the size of the collected logs and your storage space is utilized more than necessary.
Perform the following steps to reduce the size of the collected logs:
- In the filebeat.yml file, add # before the metadata fields that you want to prevent from being added to the logs.
- To remove the metadata added by Logstash, in the pipeline.xxx file, enter the following:
filter {
  mutate {
    remove_field => ["[event][original]"]
  }
}
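If you also want to drop Filebeat metadata as it passes through Logstash, a similar mutate filter can remove several fields at once; the field names below are common Beats metadata fields, so verify them against your own events in the Discover tab before removing them:

```
filter {
  mutate {
    # Drop common Beats metadata fields (verify these exist in your events)
    remove_field => ["[event][original]", "[agent]", "[ecs]", "[log][offset]", "[input][type]"]
  }
}
```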
To verify log collection
Navigate to BMC Helix Log Analytics > Discover.
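You can also confirm that the REST endpoint accepts data by posting a test record directly. The URL and API key below are the same placeholders used in the output plugin; the payload shape is an assumption for illustration (a JSON array, matching the json_batch format), not a documented schema:

```
# Hypothetical test POST to the log ingestion endpoint (replace placeholders)
curl -X POST "https://<Tenant URL provided by BMC>/log-service/api/v1.0/logs" \
  -H "Content-Type: application/json" \
  -H "Authorization: apiKey <API key of tenant>" \
  -d '[{"message": "test log line"}]'
```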
To troubleshoot log collection
The following table lists the possible scenarios that you might run into while collecting logs and the steps that you can perform to troubleshoot the issue:
| Scenario | Possible actions |
|---|---|
| No data in the Discover tab | This issue can occur because of any of the following reasons: |
| UI taking time to load | There is a known issue in Kibana that sometimes the UI takes a few seconds to load. |
| Error code 422 | If you get error code 422 in the Logstash logs, you have exceeded the daily limit to ingest data. For information, see BMC Helix Subscriber Information. |
| Unable to view ingested data and the field type is not shown in the Discover tab | Go to Stack Management > Index Pattern > default index pattern and click Refresh. |