
Pattern Performance

The Pattern Performance page displays timing information about the performance of TPL patterns. TPL provides considerable power for discovering an environment; unfortunately, some patterns are inherently inefficient or perform poorly in specific customer environments.

Determining the performance of individual patterns manually is extremely difficult: it involves running Reasoning at debug level and then piecing together all of the rules run for an individual pattern. Instead, Reasoning gathers performance information for each pattern, and this information is presented graphically on the Pattern Performance page. This makes it easier for customers and BMC Customer Support to determine which patterns are performing badly.

To access the Pattern Performance page

  1. From the main menu, click the Administration icon.
  2. In the Appliance section, click Performance.
  3. Click the Patterns tab.
  4. From the Date list, select the date for which you want to view the performance data.
  5. If you are viewing a cluster, from the View list, you can also select individual machines or the whole cluster.
You can use the Date list to view the pattern performance statistics for any of the last 10 days. If a date is not available in the list, no log was created for that day.

Invocation and timing information (in seconds) is displayed for each pattern. The columns are described in the following list.

  • Pattern Name—The name of the pattern.
  • Invocations—The number of times that the pattern has been invoked in the reporting period.
  • Total Execution Time—The total time spent executing the pattern.
  • Average Execution Time—The average execution time. This time excludes time spent waiting for discovery commands to be completed. It is the sum of the time spent running searches in the datastore, updating the model, updating inference relationships, and any other work the pattern did, including the execution of the pattern code itself.
  • Max Execution Time—The maximum amount of time taken to execute the pattern, as defined above.
  • Min Execution Time—The minimum amount of time taken to execute the pattern, as defined above.
  • Average Discovery Time—The average time this pattern has spent running discovery commands.
  • Average Search Time—The average time this pattern has spent performing searches on the datastore.
  • Average Modeling Time—The average time this pattern has spent creating, updating, or deleting nodes in the model.
  • Average Inferencing Time—The average time this pattern has spent creating, updating, or deleting the relationships between DDD nodes and inferred nodes.

To use the Pattern Performance page

You typically view this page if you are concerned that patterns are affecting the performance of BMC Discovery, or if you are testing a new pattern that you have written. The key points to look for are spikes in the following measurements (the sketch after this list shows which pattern statements typically contribute to each):

  • Average Discovery Time—A spike in the average discovery time might indicate that the discovery commands run by the pattern are taking a long time to complete on target hosts.
  • Average Search Time—Where this measure appears significantly larger than all others in the column, it shows that this pattern is using the search service more than is usual. Depending on the pattern and the job it is trying to do, this might indicate a problem.
  • Average Modeling Time—Creating an excessive number of nodes in the model can increase the modeling time. Ensure that your pattern creates only the nodes that you actually need; for example, do not needlessly delete and recreate nodes.
  • Average Inferencing Time—Inferencing time is affected to a lesser extent by patterns than the search and modeling times. An increase here might indicate that the pattern creates an excessive number of nodes.
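
The following TPL sketch shows, in comments, which kinds of statement typically contribute to each of these measurements. It is illustrative only: the module name, pattern name, trigger condition, command, traversal, and key are assumptions rather than part of any shipped TKU pattern, and the TPL version declaration and exact syntax can vary between releases.

tpl 1.13 module Example.PatternTiming;

pattern Example_Pattern_Timing 1.0
  """
  Illustrative pattern only. The comments note which timing column each
  kind of statement contributes to; result parsing and error handling
  are omitted for brevity.
  """
  overview
    tags example;
  end overview;

  triggers
    on process := DiscoveredProcess created, confirmed where cmd matches unix_cmd 'httpd';
  end triggers;

  body
    host := related.host(process);

    // Average Discovery Time: time spent waiting for discovery commands
    // such as this one to complete on the target host.
    version_info := discovery.runCommand(host, 'httpd -v');

    // Average Search Time: time spent running searches against the datastore.
    existing_sis := search(in host
                           traverse Host:HostedSoftware:RunningSoftware:SoftwareInstance
                           where type = 'Apache Webserver');

    // Average Modeling Time: time spent creating, updating, or deleting nodes.
    si := model.SoftwareInstance(key  := 'Apache Webserver/%host.key%',
                                 type := 'Apache Webserver',
                                 name := 'Apache Webserver');

    // Average Inferencing Time is accrued when Reasoning maintains the
    // relationships between the new SoftwareInstance and the DDD nodes
    // (such as the triggering process) that contributed to it.
  end body;
end pattern;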

What to do if there is a problem

If the Pattern Performance page highlights one or more of the problems described previously, the course of action to take depends on whether the pattern is a TKU pattern or one that you have developed in-house.

TKU patterns

TKUs (Technology Knowledge Updates) are shipped each month. If a TKU pattern causes a performance spike, verify that you are using the most up-to-date TKU. If you are, report the issue to BMC Customer Support, who will advise you about the best course of action.

In-house developed patterns

If you are using a pattern that you have developed, consider the following common errors (illustrative TPL sketches for the trigger and loop cases follow the list):

  • Poor trigger—A trigger that is too broad results in many invocations of a pattern that write no data. This might be highlighted by a large mismatch between the number of invocations and the number of SIs managed by the pattern. You can find out how many SIs are being managed by a pattern in the Maintained Software Instances section of the pattern's page. As a general rule, it is two orders of magnitude more efficient to stop in the trigger rather than in the pattern body.
  • Delay in stopping—Patterns are triggered, and then run and continue to process data until they are finished or they reach a condition that stops them. Where a condition could cause the pattern to stop early, place it as early as possible in the pattern body to avoid wasted processing.
  • Pattern doing too much work—Individual patterns should ideally perform one task; for example, creating a software instance. The following examples describe patterns that do too much work:
    • Creating SIs and a BAI (Business Application Instance) in a single pattern—Rather, you should break the pattern down so that each pattern creates one type of node that triggers the next.
    • Loops containing discovery calls—Try to make these calls once, and then access the stored results, as necessary, from the loops.
    • Creating an SI with multiple detail nodes—Try to simplify or reduce the amount of detail stored.
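
The first sketch below contrasts a broad trigger, which has to filter in the pattern body, with a tighter trigger that fires only for the processes the pattern will actually model; it also shows a stop condition placed at the top of the body. The module, pattern names, trigger condition, and key are illustrative assumptions, and the exact syntax can vary between TPL versions.

tpl 1.13 module Example.TriggerEfficiency;

// Less efficient: the trigger fires for every discovered process, so most
// invocations do some work and then write no data.
pattern Broad_Trigger_Example 1.0
  """Broad trigger; filtering happens in the body."""
  overview
    tags example;
  end overview;
  triggers
    on process := DiscoveredProcess created, confirmed;
  end triggers;
  body
    // If a condition can stop the pattern, place it as early as possible.
    if not process.cmd matches unix_cmd 'myservd' then
      stop;
    end if;
    host := related.host(process);
    model.SoftwareInstance(key  := 'My Service/%host.key%',
                           type := 'My Service',
                           name := 'My Service');
  end body;
end pattern;

// More efficient: the same filter is expressed in the trigger, so the
// pattern is only invoked for processes it will actually model.
pattern Narrow_Trigger_Example 1.0
  """Narrow trigger; the pattern only runs when it will write data."""
  overview
    tags example;
  end overview;
  triggers
    on process := DiscoveredProcess created, confirmed where cmd matches unix_cmd 'myservd';
  end triggers;
  body
    host := related.host(process);
    model.SoftwareInstance(key  := 'My Service/%host.key%',
                           type := 'My Service',
                           name := 'My Service');
  end body;
end pattern;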
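
Where a pattern needs the same command output for several items, the next sketch runs the discovery command once, before the loop, and works only with the stored result inside the loop. The command, the newline split, and the keys are assumptions for illustration; creating several SoftwareInstances from one trigger is itself something to keep to a minimum, as noted above.

tpl 1.13 module Example.DiscoveryOutsideLoops;

pattern Example_Discovery_Outside_Loop 1.0
  """
  Illustrative only: one discovery command is run before the loop and its
  stored output is reused on each iteration.
  """
  overview
    tags example;
  end overview;

  triggers
    on process := DiscoveredProcess created, confirmed where cmd matches unix_cmd 'myservd';
  end triggers;

  body
    host := related.host(process);

    // Run the discovery command once. Every runCommand call adds to the
    // pattern's Average Discovery Time, so avoid issuing it inside the loop.
    listing := discovery.runCommand(host, 'ls /opt/myserv/instances');
    if not listing or not listing.result then
      stop;
    end if;

    instance_names := text.split(listing.result, '\n');

    for inst in instance_names do
      // Only the stored result is used inside the loop; no further
      // discovery commands are issued per iteration.
      model.SoftwareInstance(key  := 'My Service/%inst%/%host.key%',
                             type := 'My Service',
                             name := 'My Service %inst%');
    end for;
  end body;
end pattern;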

