Contents of the logs
As each BMC Discovery component and script runs, it outputs logging information. Logs are all stored in
/usr/tideway/log. The files can be accessed directly from the appliance command line or in the log viewer in the user interface. Logs can be downloaded from the appliance through the Support Services administration page.
Logs are critical to understanding problems that might be encountered in the system. This section describes the most important log files, and how to understand the information contained in them.
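The document notes that logs can be read directly from the appliance command line. A minimal sketch of doing so, using a throwaway sample log so the commands are runnable anywhere; on a real appliance the directory is /usr/tideway/log (as stated above) and the file names are the ones described in this section. The message format shown is illustrative only.

```shell
# On a real appliance: LOGDIR=/usr/tideway/log (per this document).
# Here we build a tiny sample log so the commands run anywhere.
LOGDIR=$(mktemp -d)
printf '%s\n' \
  '2024-01-01 00:00:01: INFO: service started' \
  '2024-01-01 00:05:00: WARNING: slow response from target' \
  > "$LOGDIR/tw_svc_discovery.log"

# Show the most recent entries of a log.
tail -n 20 "$LOGDIR/tw_svc_discovery.log"

# List all logs, most recently written first.
ls -lt "$LOGDIR"
```

On the appliance, `tail -f` on the live `.log` file gives the same rolling view as the Watch link in the UI.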
Viewing log files
To view logs:
- From the main menu, click the Administration icon. The Administration page displays. From the Appliance section, click Logs.
- Select the Log files tab (it is displayed by default). A list of all the available logs is displayed on the Appliance Logs page.
- Click the Watch link next to any log to view the log in detail.
The log level associated with each message is displayed. For information on log levels see Changing log levels at runtime.
If the Watch link is not available, the log is not present.
- To delete a log, click the Delete link next to a log.
- To download a log, click the Download link next to a log. You can then choose to open or save the text document for that log.
- For larger files that you want to download, click the Compress link next to the log. The log is given a .gz extension and is described as Compressed in the Type column. The Watch link is also removed for that log.
You cannot uncompress a log once it has been compressed.
You can also delete old logs by clicking the Delete Old Logs button on the main log display page. An old log is one that is no longer being written to and that has a replacement currently being written.
Log messages are classified into the following levels:
- DEBUG: fine-grained informational events that are most useful when debugging the application.
- INFO: informational messages that highlight the progress of the application at a coarse-grained level.
- WARNING: potentially harmful situations.
- ERROR: error events that might still allow the application to continue running.
- CRITICAL: severe error events that might cause the application to abort.
- USEFUL: start-up messages.
Unless asked to do otherwise by Customer Support, it is recommended that you configure each service to log at INFO level.
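The level names above can be used to filter a log from the command line. A sketch, again using a sandboxed sample log; only the level keywords come from the list above, and the message layout is an assumption:

```shell
LOGDIR=$(mktemp -d)   # /usr/tideway/log on a real appliance
cat > "$LOGDIR/tw_svc_discovery.log" <<'EOF'
INFO: starting scan
WARNING: credential rejected
ERROR: no access method succeeded
INFO: scan complete
EOF

# Keep only potentially harmful messages and worse.
grep -E 'WARNING|ERROR|CRITICAL' "$LOGDIR/tw_svc_discovery.log"

# Count messages per level.
grep -oE 'DEBUG|INFO|WARNING|ERROR|CRITICAL' "$LOGDIR/tw_svc_discovery.log" \
  | sort | uniq -c
```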
Most important logs
Each subsystem and script writes its own log file, with a name derived from the service or script. The ones that most often contain relevant information are:
tw_svc_discovery.log contains log messages from the Discovery subsystem, which is responsible for connecting to discovery targets and retrieving information from them. At INFO level, the log contains information about each discovery method that is executed; at DEBUG level, it contains a great deal of detail about how the discovery methods attempt different access mechanisms.
tw_svc_integrations_sql.log contains the log messages related to database scans.
The Reasoning subsystem coordinates discovery and executes patterns, maintaining the data model. On multi-CPU machines, the Reasoning subsystem is split into multiple "ECA Engine" processes, and the log files are split correspondingly: the files are suffixed _nn.log, where nn is the engine number.
The Reasoning log files are split as follows:
tw_svc_reasoning.log contains information from the top-level Reasoning control process. It is periodically updated with information about the states of the active scanning ranges. If a pattern activation error occurs, this file helps you to find the root cause.
The per-engine log files (suffixed _nn.log) contain information about scanning activities, including when each IP address starts and completes scanning. Each separate ECA engine process has its own log. On single-CPU machines, the single ECA engine is part of the Reasoning control process, so this log output is in tw_svc_reasoning.log rather than a specific ECA engine log.
The pattern log files contain log messages from TPL patterns. If a pattern is misbehaving, its output will be here.
The event tracker log files contain debug information about the event trackers in the ECA engines. They contain useful output only when the Reasoning subsystem is set to log at DEBUG level. The information is intended not for human consumption, but for processing by Customer Support diagnostic tools.
The "model" subsystem contains the data store, search, taxonomy and audit services.
tw_svc_model.log contains log messages from the model subsystem. At INFO level, it contains details of all searches, and information about database checkpoints. At DEBUG level, it contains a great deal of detail about how data is stored and retrieved.
Application server log
The main web user interface is hosted by an application server.
tw_appserver.log contains log messages from the application server, including information about pages viewed and any errors that occur while rendering the UI.
REST API log
tw_svc_external_api.log contains log messages from the REST API subsystem. At INFO level it contains little, but at DEBUG level it contains details of authentication and permission errors from failed API requests.
tw_svc_security.log contains output from the Security subsystem, which is responsible for authenticating users and authorizing user actions. It contains information about authentication failures. If LDAP-based authentication is used, it also contains information about the state of the connection to the LDAP server.
performance.log.* contain information about the appliance load, obtained by running the
top command every 10 minutes. This is useful for diagnosing performance issues.
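Because performance.log.* holds periodic top snapshots, load trends can be pulled out with a simple grep. A sketch against a fabricated sample; real top output includes a "load average" line, but the exact layout varies by platform:

```shell
LOGDIR=$(mktemp -d)   # /usr/tideway/log on a real appliance
cat > "$LOGDIR/performance.log" <<'EOF'
top - 10:00:01 up 5 days,  2 users,  load average: 0.40, 0.35, 0.30
top - 10:10:01 up 5 days,  2 users,  load average: 2.10, 1.50, 0.90
EOF

# Extract one load sample per 10-minute top snapshot.
grep 'load average' "$LOGDIR/performance.log"
```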
In addition to the main subsystem logs, the logs from a number of scripts are often relevant:
tw_upgrade.log contains a log of actions performed during a BMC Discovery version upgrade.
tw_tax_import.log contains a log of the actions of the Taxonomy importer. If a Taxonomy extension fails to import, the details will be in this log.
tw_scan_control.log contains any messages output by the
tw_scan_control tool for creating scanning ranges.
tw_imp_csv.log contains messages output by the CSV import tool, which can be run from the command line as
tw_imp_csv, or via the web UI in Administration > CSV Data Import.
CMDB Synchronization logs
tw_exporter_connection_test.log contains the logs generated by the CMDB sync test connection.
tw_svc_cmdbsync_transformer.log contains the logs generated by the syncmapping files.
tw_cmdbsync_exporter.log contains the remaining CMDB synchronization log messages.
To prevent the log files growing without bound, they are rolled on a daily basis. The first time a log message is output after midnight (appliance time), the current
.log file is renamed to correspond to the date, in the form
.DD, and a new log file ending simply
.log is opened.
As it is the output of a log entry after midnight that causes the roll-over, if a subsystem has not output any log messages since midnight, the
.log file might contain entries from the previous day(s). This also means that unless execution of a script straddles midnight, script log files do not roll over, and the
.log file will contain information about every run of the script.
Logs are automatically compressed seven days after they are created. When a log is compressed it is no longer available in any of the log selector drop down lists.
Compressed logs are automatically deleted 30 days after the initial log file was created.
Old logs can be manually compressed or deleted in the log viewer, or directly using the command line.
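The command-line route mentioned above can be as simple as gzip and rm on rolled logs. A sketch in a sandbox (file names are illustrative; on an appliance, compress only rolled logs, not the live .log file a service is still writing):

```shell
LOGDIR=$(mktemp -d)   # /usr/tideway/log on a real appliance
echo 'old entries' > "$LOGDIR/tw_svc_discovery.log.01"

# Compress a rolled log; gzip replaces the file with a .gz version.
gzip "$LOGDIR/tw_svc_discovery.log.01"
ls "$LOGDIR"          # tw_svc_discovery.log.01.gz

# Delete a compressed log that is no longer needed.
rm "$LOGDIR/tw_svc_discovery.log.01.gz"
```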
Service "out" files
Each running component in the system captures anything that is sent to
stdout and logs it to a file. These files are all of the form
<service name>.out and are also stored in /usr/tideway/log.
In normal usage, the
*.out files generally contain just a single line to indicate that the service has successfully started. Another line is stored to indicate when the service is shut down cleanly. Any other content is often an indication that something has gone awry. However, there are some exceptions to this rule. For example, the
tw_appserver.out files always have some content, as the appserver sends certain information to
stdout. You therefore need to read the contents to check for any errors that might be in there.
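A quick way to apply the checks above is to search every .out file for content beyond the clean start and stop lines. A sketch in a sandbox; the sample file contents and the error keywords are assumptions, so adapt them to what your .out files actually contain:

```shell
OUTDIR=$(mktemp -d)   # /usr/tideway/log on a real appliance
printf 'Service started\n' > "$OUTDIR/tw_svc_model.out"
printf 'Service started\nTraceback (most recent call last):\n' \
  > "$OUTDIR/tw_appserver.out"

# List the .out files containing likely error indicators.
grep -liE 'error|traceback|exception' "$OUTDIR"/*.out
```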
*.out files are rolled each time a component is started. The most recent file always ends just
.out; the previous file ends
.out.1, the next earliest
.out.2 and so on. Nine previous
.out files are kept, so the oldest file ends .out.9.
Whenever the appserver component experiences an unhandled error, you are presented with an error message. The error message obscures a detailed traceback page that might contain useful information to assist with troubleshooting and debugging.
The appserver saves the entire HTML page that is being hidden as a self-contained HTML file that can be opened in a browser. These files are stored on the appliance, and you can manually delete them from the appliance command line.