Collecting files from a host locally

To collect files locally from a host, you need to create the Monitor File on Collection Agent data collector.

Note

This data collector does not work for mapped drives.

The following information describes the process of creating this data collector.

The following video (4:09) illustrates the process of creating a data collector for collecting the itda.log file.

Note

The following video displays screens from an earlier version of the product; however, the information provided in the video is still relevant to the current version.

https://youtu.be/vB7StE8H-gM

To collect files from a host locally

  1. Navigate to Administration > Data Collectors > Add Data Collector.
  2. In the Name box, provide a unique name to identify this data collector.

  3. From the Type list, select Monitor File on Collection Agent.

  4. Provide the following information, as appropriate:

    Target/Collection Host
    Collection Host (Agent)

    Type or select the collection host depending on whether you want to use the Collection Station or the Collection Agent to perform data collection.

    The collection host is the computer on which the Collection Station or the Collection Agent is located.

    By default, the Collection Station is already selected. You can either retain the default selection or select the Collection Agent.

    Note: For this type of data collector, the target host and collection host are expected to have the same values.

    Collector Inputs
    Directory Path

    Specify the absolute path of the directory that contains the log file.

    In the path, you can specify wildcards or system environment variables. Wildcards can be used to match a partial path or to include the subdirectories of a directory.

    You can use the following wildcard characters:

    • Question mark (?)—Can be used to substitute exactly one character in the directory path.
    • Asterisk (*)—Can be used to substitute zero or more characters in the directory path.
    • Sequence of two asterisks (**)—Can be used to substitute a partial path or include subdirectories depending on where you place the wildcard in the path.

    To collect files from the following directories, specify /usr/**/*_log/:

    • /usr/local/cpanel/login_log/
    • /usr/local/stats_log/

    To collect subdirectories of the directory path /usr/local/, specify /usr/local/**/.

    For more information, see Using wildcards in the directory path.

    Tip: If you specify an environment variable in the path, keep in mind that after creating the environment variable on the collection host, you must restart the Collection Agent (or Collection Station) that you plan to use for this data collector. Until you do this, the environment variable is not applied, which can affect the auto-detect feature available for assigning a data pattern.

    Filename/Rollover Pattern

    Specify the file name only, or specify the file name with a rollover pattern to identify subsequent logs.

    You can use the following wildcard characters:

    • Asterisk (*)—Can be used to substitute zero or more characters in the file name.
    • Question mark (?)—Can be used to substitute exactly one character in the file name.

    Specifying a rollover pattern is useful for monitoring rolling log files, where the log files are saved with the same name but differentiated by a variable such as a time stamp or a number. Specifying a wildcard can also be useful when you know the file name only partially.

    Note: Ensure that you specify a rollover pattern for identifying log files that follow the same data format (which means they will be indexed with the same data pattern).

    Scenario 1

    Suppose you want to collect log files saved with succeeding numbers once they reach a certain size; for example:

    IAS0.log

    IAS1.log

    IAS2.log

    Rollover pattern: In this scenario, you can specify the rollover pattern as IAS?.log.

    Scenario 2

    Suppose you want to collect log files that roll over every hour and are saved with the same date but a different time stamp in the YYYY-MM-DD-HH format; for example:

    2013-10-01-11.log

    2013-10-01-12.log

    2013-10-01-13.log

    Rollover pattern: In this scenario, you can specify the rollover pattern as 2013-10-01-*.log or 2013-10-01-??.log.

    In this scenario, if you are sure that exactly two digits at the end of the time stamp are likely to change, you can specify the ?? wildcard sequence to capture exactly two changing digits. Otherwise, specifying a single asterisk is recommended.
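
    For illustration only, the following minimal Java sketch shows one way such a rollover pattern could be evaluated, by translating the two documented wildcards into a regular expression. The translation logic is an assumption for demonstration and is not the product's actual matcher.

    import java.util.List;
    import java.util.regex.Pattern;

    public class RolloverPatternDemo {

        // Translate the documented wildcards into a regex:
        // '?' matches exactly one character, '*' matches zero or more characters;
        // every other character is matched literally.
        static Pattern toRegex(String wildcardPattern) {
            StringBuilder regex = new StringBuilder();
            for (char c : wildcardPattern.toCharArray()) {
                switch (c) {
                    case '?': regex.append('.'); break;
                    case '*': regex.append(".*"); break;
                    default:  regex.append(Pattern.quote(String.valueOf(c)));
                }
            }
            return Pattern.compile(regex.toString());
        }

        public static void main(String[] args) {
            Pattern rollover = toRegex("IAS?.log"); // pattern from Scenario 1
            for (String name : List.of("IAS0.log", "IAS1.log", "IAS25.log")) {
                // IAS25.log does not match because ? substitutes exactly one character
                System.out.println(name + " -> " + rollover.matcher(name).matches());
            }
        }
    }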

    Time Zone

    (Optional) Accept the default Use file time zone option or select a time zone from the list.

    With the default option, data is indexed as per the time zone available in the data file. If the data file does not contain a time zone, then the time zone of the collection host (Collection Station or Collection Agent server) is used.

    Keep in mind that the selected time zone must match the time zone of the server from which you want to collect data. If you manually select a time zone even though the file contains one, the manually selected time zone overrides the file time zone.

    The Time Zone field takes into account changes due to Daylight Saving Time (DST) wherever applicable.
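
    Conceptually, the fallback behaves like the following Java sketch, which parses a timestamp that carries no zone information and then attaches the host's time zone. The format string and sample value are illustrative assumptions, not the product's parsing code.

    import java.time.LocalDateTime;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;
    import java.time.format.DateTimeFormatter;

    public class TimeZoneFallbackDemo {
        public static void main(String[] args) {
            // Hypothetical log timestamp that carries no time zone of its own
            DateTimeFormatter format =
                    DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss,SSS");
            LocalDateTime withoutZone =
                    LocalDateTime.parse("2014-08-08 15:15:43,777", format);

            // With "Use file time zone" and no zone in the file, the collection
            // host's zone applies, analogous to:
            ZonedDateTime indexed = withoutZone.atZone(ZoneId.systemDefault());
            System.out.println(indexed);
        }
    }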

    Data Pattern
    Pattern

    Assign the data pattern (and optionally date format) for indexing the data file.

    The data pattern and date format together decide the way in which the data is indexed. When you select a data pattern, the matching date format is automatically selected. However, you can override the date format by manually selecting another date format or by selecting the option to create a new date format. In that case, the date format is used to index the date and time string, while the rest of the data is indexed as per the selected data pattern.

    Instead of manually browsing through the list of available data patterns, you can click Auto-Detect to automatically find a list of matching data patterns. If no matching data patterns are found, a list of matching date formats is displayed. If you select a date format, the date and time string in the data is indexed with the selected date format, while the rest of the data is indexed as free text.

    If you can find neither a matching data pattern nor a matching date format, you can choose to index the data as free text. Depending on whether the data contains a date and time string, you can assign the data pattern as Free Text with Timestamp or Free Text without Timestamp. Each record processed by using the Free Text without Timestamp option is assumed to be a single line of data with a line terminator at the end of the event. To distinguish records in a custom way, you can specify a custom string or regular expression in the Event Delimiter box, which decides where a new event starts in the data.
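
    As a rough illustration of what an event delimiter does, the following Java sketch splits a two-event stream wherever a line begins with a timestamp. Both the delimiter regular expression and the sample data are assumptions for demonstration, not the product's implementation.

    import java.util.regex.Pattern;

    public class EventDelimiterDemo {
        public static void main(String[] args) {
            // Hypothetical delimiter: a new event starts wherever a line begins
            // with a "MMM dd, yyyy"-style timestamp (zero-width lookahead, so
            // the timestamp itself stays part of the event).
            Pattern delimiter =
                    Pattern.compile("(?m)(?=^[A-Z][a-z]{2} \\d{1,2}, \\d{4})");

            String data = "Sep 25, 2014 10:26:47 AM net.sf.ehcache.config.\n"
                        + "WARN: No configuration found.\n"
                        + "Sep 25, 2014 10:26:53 AM com.bmc.ola.metadataserver.\n"
                        + "INFO: Executing Query to check init property.";

            for (String event : delimiter.split(data)) {
                System.out.println("--- event ---");
                System.out.println(event.trim());
            }
        }
    }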

    If you are collecting JSON data, then depending on whether the data contains a date and time string, you can assign the data pattern as JSON with Timestamp or JSON without Timestamp.

    After assigning the data pattern (and optionally date format), you can preview the sample records.

    For more information, see Assigning the data pattern and date format to a data collector.

    Notes:

    • Before filtering the relevant data patterns by clicking Auto-Detect, ensure that the correct file encoding is set.
    • If you select both a pattern and a date format, the product uses the date format to index the time stamp and the pattern to index the rest of the event data.
    Date Format
    Date Locale

    (Optional) You can use this setting to enable reading the date and time string based on the selected language. Note that this setting applies only to those portions of the date and time string that consist of letters (digits are not considered).

    By default, this value is set to English.

    You can manually select a language to override the default locale. For a list of languages supported, see Language information.

    File Encoding

    If your data file uses a character set encoding other than UTF-8 (default), then do one of the following:

    • Filter the relevant character set encodings that match the file.
      To do this, click Filter relevant charset encoding next to this field.
    • Manually scan through the list available and select an appropriate option.
    • Allow TrueSight IT Data Analytics to choose a relevant character set encoding for your file by manually selecting the AUTO option.
    Poll Interval (mins)

    Enter a number to specify the poll interval (in minutes) for the log collection.

    By default, this value is set to 1.

    Start/Stop Collection

    (Optional) Select this check box if you want to start the data collection immediately.

    Ignore Data Matching Input

    (Optional) If you do not want to index certain lines in your data file, then you can ignore them by providing one of the following inputs:

    • Provide a line that consistently occurs in the event data that you want to ignore. This line will be used as the criterion to ignore data during indexing.
    • Provide a Java regular expression that will be used as the criterion for ignoring data matching the regular expression.

    Example: Using the following sample data, you can provide the following inputs to ignore particular lines (illustrated in the sketch after the sample data):

    • To ignore the line containing the string "WARN", you can specify WARN in this field.
    • To ignore lines containing either "WARN" or "INFO", you can specify the regular expression .*(WARN|INFO).* in this field.
    Sample data
    Sep 25, 2014 10:26:47 AM net.sf.ehcache.config.
    ConfigurationFactory parseConfiguration():134
    WARN: No configuration found. Configuring ehcache from 
    ehcache-failsafe.xml  found in the classpath:
    
    Sep 25, 2014 10:26:53 AM com.bmc.ola.metadataserver.
    MetadataServerHibernateImpl bootstrap():550
    INFO: Executing Query to check init property: select * 
    from CONFIGURATIONS where userName = 'admin' and 
    propertyName ='init'
    
    Sep 30, 2014 07:03:06 PM org.hibernate.engine.jdbc.spi.
    SqlExceptionHelper logExceptions():144
    ERROR: An SQLException was provoked by the following 
    failure: java.lang.InterruptedException
    
    Sep 30, 2014 04:39:27 PM com.bmc.ola.engine.query.
    ElasticSearchClient indexCleanupOperations():206
    INFO: IndexOptimizeTask: index: bw-2014-09-23-18-006 
    optimized of type: data
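
    To see how the regular expression input behaves, here is a minimal Java sketch that applies .*(WARN|INFO).* to lines taken from the sample data above. The per-line matching shown here only illustrates the criterion; it is not the product's indexing pipeline.

    import java.util.List;
    import java.util.regex.Pattern;

    public class IgnoreMatchingDemo {
        public static void main(String[] args) {
            // Regular expression from the field description above
            Pattern ignore = Pattern.compile(".*(WARN|INFO).*");

            List<String> lines = List.of(
                    "WARN: No configuration found. Configuring ehcache from",
                    "INFO: Executing Query to check init property: select *",
                    "ERROR: An SQLException was provoked by the following");

            for (String line : lines) {
                // matches() is anchored, so the surrounding .* let the
                // keywords appear anywhere in the line
                String verdict = ignore.matcher(line).matches() ? "ignored" : "indexed";
                System.out.println(verdict + ": " + line);
            }
        }
    }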

    Data Retention Period (in days)

    Indicates the number of days for which indexed data must be retained in the system.

    By default, this value is set to 7. The default value is based on the maximum data retention period specified at Administration > System Settings.

    You can change this limit to a maximum of 14 days. To increase the limit beyond 14 days, you need to modify the value of the following property:

    • Property name: max.data.collector.data.retention.limit
    • Property location: %BMC_ITDA_HOME%\custom\conf\server\searchserviceCustomConfig.properties

    After changing the property value, you need to restart the Search component to apply the change.
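
    For example, to raise the limit to 30 days (30 is an illustrative value, not a recommendation), the entry in the properties file would look like this:

    # In %BMC_ITDA_HOME%\custom\conf\server\searchserviceCustomConfig.properties
    max.data.collector.data.retention.limit=30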

    For more information, see Understanding data retention and deletion.

    Best Effort Collection

    (Optional) If you clear this check box, only those lines that match the data pattern are indexed; all other data is ignored. To index the non-matching lines in your data file, keep this check box selected.

    Note: Non-matching lines in the data file are indexed on the basis of the Free Text with Timestamp data pattern.

    Example: The following lines provide sample data that you can index by using the Hadoop data pattern. In this scenario, if you select this check box, all lines are indexed. But if you clear the check box, only the first two lines are indexed.

    Sample data
    2014-08-08 15:15:43,777 INFO org.apache.hadoop.hdfs.server.
    datanode.DataNode.clienttrace: src: /10.20.35.35:35983, dest: 
    /10.20.35.30:50010, bytes: 991612, op: HDFS_WRITE, cliID:
    
    2014-08-08 15:15:44,053 INFO org.apache.hadoop.hdfs.server.
    datanode.DataNode: Receiving block blk_-6260132620401037548_
    683435 src: /10.20.35.35:35983 dest: /10.20.35.30:50010
    
    2014-08-08 15:15:49,992 IDFSClient_-19587029, offset: 0, 
    srvID: DS-731595843-10.20.35.30-50010-1344428145675, 
    blockid: blk_-8867275036873170670_683436, duration: 5972783
    
    2014-08-08 15:15:50,992 IDFSClient_-19587029, offset: 0, 
    srvID: DS-731595843-10.20.35.30-50010-1344428145675, 
    blockid: blk_-8867275036873170670_683436, duration: 5972783
    Log File Contains Header

    (Optional) Providing this value is mandatory only if you are trying to collect a file that contains a constant header that must not be indexed.

    The value must be the actual header appearing in the data.

    Log File Contains Footer

    (Optional) Providing this value is mandatory only if you are trying to collect a file that contains a constant footer that must not be indexed.

    The value must be the actual footer appearing in the data.

    Inherit Host Level Tags From Target Host

    (Optional) Select this check box to inherit the tag selections associated with the target host that you selected earlier. This option is not applicable if you did not select a target host.

    Note: After selecting this check box, you can further manually assign additional tags. When you do so, both the inherited tags and the manually assigned tags are applied. To remove the inherited tags, clear this check box.
    Select Tag name and corresponding value

    (Optional) Select a tag name and specify the corresponding value by which you want to categorize the data collected. Later while searching data, you can use these tags to narrow down your search results.

    Example: If you are collecting data from hosts located at Houston, you can select the tag name "Location" and specify the value "Houston". While searching the data, you can use the tag Location="Houston" to filter data and see results associated with the Houston location.

    To be able to see tag names, you need to first add them by navigating to Administration > System Settings.

    To specify tag names and corresponding values, select a tag name in the left box, and then type the corresponding tag value in the right box. While you type the value, you might see type-ahead suggestions based on values specified in the past; to use one of the suggestions, click it. Click Add to add the tag name and corresponding value to the list of added tags that follows. Click Remove Tag to remove a tag.

    The tags saved while creating the data collector are displayed on the Search tab, in the Filters panel, under the Tags section.

    Note: You can specify only one value for a tag name at a time. To specify multiple values for the same tag name, you need to select the tag name, specify the corresponding value, and click Add for each value.

    For more information about tags, see Understanding tags.

    Inherit Host Level Access Groups From Target Host

    (Optional) Select this check box to inherit the group access configurations associated with the target host that you selected earlier. This option is not applicable if you did not select a target host.

    Note: After selecting this check box, you can further manually select additional user groups. When you do so, both the inherited permissions and the manually assigned permissions are applied. To remove the inherited permissions, clear this check box.
    Select All Groups

    (Optional) Select this option if you want to select all user groups. You can also manually select multiple user groups.

    Notes: You can access data retrieved by this data collector based on the following conditions.

    • If user groups are not selected and data access control is enabled: Only the creator of the data collector can access data retrieved by this data collector.
    • If user groups are not selected and if data access control is not enabled: All users can access data retrieved by this data collector. You can restrict access permissions by selecting the relevant user groups that must be given access permissions. To enable data access control, navigate to Administration > System Settings.

    For more information, see Managing user groups in IT Data Analytics.

  5. Click Create to save your changes.

Using wildcards in the directory path

A wildcard is a character that can be used to substitute one or more characters while selecting files for monitoring.

Using wildcards in the directory path can be useful in the following scenarios:

  • When you want to collect specific logs from different locations on the same server.
  • When you want to collect logs from the subdirectories of the specified directory.

Tip

Directory paths on Linux systems are case-sensitive.

The following list describes the wildcards that you can use while specifying directory paths, along with examples:
*

Substitute zero or more characters in the directory path.

/app/subapp*/log/access_log/ matches the following paths:

  • /app/subapp101/log/access_log/
  • /app/subapp201Common/log/access_log/
?

Substitute exactly one character in the directory path.

/app/subapp?/log/access_log/ matches the following paths:

  • /app/subapp1/log/access_log/
  • /app/subapp2/log/access_log/

/app/subapp??/log/ matches the following paths:

  • /app/subapp11/log/
  • /app/subapp12/log/
**

Match a partial path or include subdirectories of the directory path depending on where you place the wildcard in the path.

To collect data from subdirectories, you need to specify the ** wildcard sequence at the end of the directory path.

Note: This wildcard searches through directories and subdirectories at a maximum of five levels to find matches. 

Best practice: If you use this wildcard in place of extremely deep levels of directories, it can negatively impact performance. Therefore, it is recommended that you use this wildcard in appropriate places.

For example, suppose you want to collect the itda.log file. To do this, you can specify one of the following inputs:

  • (Recommended) C:/Program files/bmcsoftware/**/
    In this case, directories need to be searched only under C:/Program files/bmcsoftware/.
  • (Not recommended) C:/**/
    In this case, a long list of directories needs to be searched under C:/.

When you specify the wildcard toward the beginning of the directory path, the search for directories happens at a deeper level, which can negatively impact performance. Conversely, when you specify the wildcard toward the end of the directory path, the search happens on a limited set of directories, which can improve performance. Thus, in this scenario, specifying C:/Program files/bmcsoftware/**/ is better than specifying C:/**/.

Note: If you are using a Collection Agent earlier than version 2.5, then you can only specify this wildcard at the end of the directory path to include subdirectories.

For example, you can specify /usr/local/**/ to collect the following logs:

  • /usr/local/stats_log/
  • /usr/local/cpanel/logs/login_log/
  • /usr/local/mailman/log

/usr/**/*_log matches the following paths:

  • /usr/local/cpanel/login_log/
  • /usr/local/cpanel/error_log/
  • /usr/local/stats_log/
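
If you want to experiment with these semantics outside the product, Java's built-in glob matcher supports the same three wildcards with closely analogous behavior. The sketch below is an approximation only; for example, it does not enforce the five-level depth limit noted above, and it omits the trailing slashes used in the examples.

    import java.nio.file.FileSystems;
    import java.nio.file.Path;
    import java.nio.file.PathMatcher;
    import java.nio.file.Paths;
    import java.util.List;

    public class DirectoryWildcardDemo {
        public static void main(String[] args) {
            // In Java's glob syntax, '?' matches one character, '*' matches
            // within a single path segment, and '**' crosses directory
            // boundaries, similar to the wildcards described above.
            PathMatcher matcher =
                    FileSystems.getDefault().getPathMatcher("glob:/usr/**/*_log");

            List<Path> candidates = List.of(
                    Paths.get("/usr/local/cpanel/login_log"),   // matches
                    Paths.get("/usr/local/stats_log"),          // matches
                    Paths.get("/usr/local/mailman/log"));       // no _log suffix

            for (Path path : candidates) {
                System.out.println(path + " -> " + matcher.matches(path));
            }
        }
    }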