Rate (rolled up, condensed) information

Raw performance data is statistically condensed and rolled up into hourly data called Rate data. This lets you retain data for an extended period of time (90 days) without increasing disk storage or degrading database performance.

Creating views for Rate data is relatively easy for most monitor types because the data is organized in individual (horizontal) tables. This enables a one-to-one mapping of a Rate table to a view for each monitor type that is available in the Infrastructure Management system. Rate dataviews are named <tablename prefix>_RT_VIEW. Each Rate dataview contains the following information:

Rate Information

  • Monitor instance ID: a unique number generated for each instance. The value is assigned by the Infrastructure Management system during monitor creation; it is internal and fixed. This number identifies all other data for the instance.
  • FROMTIME: UNIX time stamp for the start of the duration.
  • TOTIME: UNIX time stamp for the end of the duration.
  • AVG_<Stats Attribute Name>, HIGH_<Stats Attribute Name>, LOW_<Stats Attribute Name>: the Rate values for the above FROMTIME-to-TOTIME duration. Each attribute of the monitor type has three Rate values.

Rate data calculation

The condensed (Rate) data is calculated based on the condensed hourly samples of distributed raw data points. Hourly samples are made up of the maximum, average, and minimum values for that hour.

For data with Normal distribution, the highest data point for the hour is taken as the maximum value, the average value is the true average of all points, and the lowest data point is taken as the minimum value. For all other data, the maximum value is taken as the 90th percentile (derived by ignoring the top 10% of data points), the average value is taken as the median, and the minimum value is taken as the 10th percentile (derived by ignoring the bottom 10% of data points). Stated another way:

For data with Normal distribution:

  • Maximum value is the true maximum
  • Average value is the true average
  • Minimum value is the true minimum

For all other data:

  • Maximum value is obtained by ignoring the top 10% of data points (that is, taking the 90th percentile)
  • Average value is obtained by taking the median (that is, the 50th percentile, ignoring the bottom 50% of data points)
  • Minimum value is obtained by ignoring the bottom 10% of data points (that is, taking the 10th percentile)


The median is typically found by arranging the points from lowest value to highest value and picking the middle one. If there is an even number of points, then the median is taken to be the average of the two middle values.
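The rollup rules above can be sketched in a few lines of Python. This is a minimal illustration of the described logic, not the product's actual implementation, and the percentile indexing shown is just one common convention:

```python
# Sketch of the hourly rollup rules: true max/avg/min for normally
# distributed data; 90th percentile / median / 10th percentile otherwise.
def rollup(samples, normal_distribution):
    """Return (high, avg, low) for one hour of raw samples."""
    pts = sorted(samples)
    n = len(pts)
    if normal_distribution:
        # True maximum, true average, true minimum.
        return pts[-1], sum(pts) / n, pts[0]

    def percentile(p):
        # Index into the sorted points; one of several common conventions.
        return pts[round(p * (n - 1))]

    # Median: middle point, or average of the two middle points.
    if n % 2:
        median = pts[n // 2]
    else:
        median = (pts[n // 2 - 1] + pts[n // 2]) / 2
    return percentile(0.9), median, percentile(0.1)
```

For example, with ten samples `1..10` and a non-Normal distribution, the top and bottom points are discarded, giving a high of 9, a median of 5.5, and a low of 2.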

Examples of attributes with Normal Distribution set to true are Availability, Total CPU, and FileSize. For these attributes, you do not want to discard any values when converting to hourly samples. Instead, you want the absolute high and absolute low recorded.

Examples of attributes with Normal Distribution set to false are Ping ResponseTime and WebURLResponseTime. The reason is that response time measurements for such attributes have fluctuations that are well outside the normal range and skew the hourly calculations if included. Therefore, the extremes at the upper and lower end are discarded.
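The skew is easy to see with a short illustration (the response-time values here are hypothetical): a single network spike drags the plain average far from the typical value, while the median stays representative:

```python
# Hypothetical one-hour sample of response times in ms, with one spike.
times = [20, 21, 19, 22, 20, 21, 5000, 20, 19, 21]

mean = sum(times) / len(times)   # badly skewed by the 5000 ms spike
pts = sorted(times)
median = (pts[4] + pts[5]) / 2   # even count: average the two middle points

print(mean)    # 518.3 -- not representative of typical behavior
print(median)  # 20.5  -- close to the typical response time
```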
