Collecting logs by using Logstash and Filebeat
Collect logs from various sources and search them to find relevant information. You can also apply a structure to your unstructured logs to make them easier to analyze.
Filebeat and Winlogbeat are the supported Beats. BMC Helix Log Analytics provides a UI to search the logs. You can save your searches and view them in dashboards. Dashboards of all users in your organization are available to you.
The following figure shows how logs are collected and made available to you for analysis.
The following video (3:32) illustrates the configurations required to collect logs.
Before you begin
To prepare to configure log collection
- Copy the API key of your BMC Helix Operations Management tenant and paste it in a text file. In BMC Helix Operations Management, go to Administration > Repository and click Copy API key.
- Download and install Beats on the computers from where you want to collect logs.
- Download and install Logstash.
To configure Logstash
For detailed information about the files used in the configurations, see the Logstash documentation.
- Configure Logstash to accept data from Beats.
  From the Logstash installation folder, open the config\logstash-sample.conf file.
  If you are configuring Logstash by using RPM on Linux operating systems, copy the /etc/logstash/logstash-sample.conf file to the /etc/logstash/conf.d folder and then open it.
  In the input plugin, enter the port number through which Beats send data to Logstash:

    input {
      beats {
        port => <port number (for example, 5044)>
      }
    }

  Note: Ensure that the port is open on the computer where Logstash is installed.
- Configure Logstash to send the collected logs to the REST endpoint by entering the following details in the output plugin in the config\logstash-sample.conf file.
  In Linux environments, after updating the logstash-sample.conf file, move it to the /etc/logstash/conf.d folder.

    output {
      http {
        url => "https://<Tenant URL provided by BMC>/log-service/api/v1.0/logs"
        http_method => "post"
        content_type => "application/json"
        format => "json_batch"
        retry_failed => false
        http_compression => true
        headers => {
          "Content-Type" => "application/json"
          "Authorization" => "apiKey <API key of tenant>"
        }
      }
    }
- (Optional) Add a structure (a field:value pattern) to the logs by using the grok plugin in the config\logstash-sample.conf file.
- (Optional) If you want to convert the time zone of the collected logs, use the date filter:

    date {
      match => ["tmpstamp", "MMM dd, yyyy hh:mm:ss a"]
      target => "@timestamp"
      locale => "en_US"
      timezone => "UTC"
    }
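As an illustration, a minimal grok filter could look like the following sketch. The pattern shown is an assumption for logs that begin with an ISO 8601 timestamp and a log level; adjust it to match your actual log format.

```
filter {
  grok {
    # Illustrative pattern only: parses "<timestamp> <level> <message>" lines
    # into the tmpstamp, level, and msg fields. Adapt to your log format.
    match => { "message" => "%{TIMESTAMP_ISO8601:tmpstamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
```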
- Start Logstash by running the bin/logstash command.
  For example, on Windows: bin/logstash -f config/logstash-sample.conf
Note: If you have enabled a firewall in your environment, open the outbound HTTPS port 443.
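Putting the steps above together, a complete logstash-sample.conf might look like the following sketch. The port 5044, the tenant URL, and the API key are placeholders that you must replace with your own values.

```
input {
  beats {
    port => 5044
  }
}

output {
  http {
    url => "https://<Tenant URL provided by BMC>/log-service/api/v1.0/logs"
    http_method => "post"
    content_type => "application/json"
    format => "json_batch"
    retry_failed => false
    http_compression => true
    headers => {
      "Content-Type" => "application/json"
      "Authorization" => "apiKey <API key of tenant>"
    }
  }
}
```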
To configure Beats
Configure Beats to communicate with Logstash by updating the filebeat.yml and winlogbeat.yml files, available in the Beats installation folder. Comment out the output.elasticsearch plugin and uncomment the output.logstash plugin. To send data to Logstash, add the Logstash communication port:

    output.logstash:
      hosts: ["<logstash computer>:<port to communicate with Logstash>"]

In the type plugin, change the value of enabled to true. Configure log sources by adding their paths to the filebeat.yml and winlogbeat.yml files, and then start Beats:

    type: log
    enabled: true
    paths:
      - <path of log source. For example, C:\Program Files\Apache\Logs or /var/log/messages>
To ensure that you collect only meaningful logs, use the include_lines setting to filter the lines that Filebeat exports.
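As an illustration (the path, patterns, and port below are examples, not values from your environment), a minimal filebeat.yml input with line filtering might look like this:

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/messages
    # Export only lines matching these regular expressions (example patterns).
    include_lines: ['ERROR', 'WARN']

output.logstash:
  hosts: ["<logstash computer>:5044"]
```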
Note
The following settings in the .yml files will be ineffective:
- Elasticsearch template setting
- Dashboards (these settings are for Kibana Dashboards)
- Kibana (these settings are for dashboards loaded via the Kibana API)
To verify log collection
Navigate to BMC Helix Log Analytics > Discover.
Note: A default index pattern is created. You cannot create a new index pattern or delete the existing one.
To troubleshoot log collection
The following table lists scenarios that you might run into while collecting logs and the steps that you can perform to troubleshoot each issue:

| Scenario | Possible actions |
|---|---|
| No data in the Discover tab | This issue can occur for several configuration reasons; verify your Beats and Logstash configurations. |
| UI taking time to load | There is a known issue in Kibana where the UI sometimes takes a few seconds to load. |
| Error code 422 | If you get error code 422 in the Logstash logs, you have exceeded the daily limit to ingest data. For information, see BMC Helix Subscriber Information. |
| Ingested data is not visible and the field type is not shown in the Discover tab | Go to Stack Management > Index Pattern > default index pattern > Refresh. |