Sizing and scalability considerations for the Database Server


TrueSight Capacity Optimization database servers can be scaled horizontally and vertically. The major sizing drivers for database servers are:

  • The required data processing throughput, which is determined by the number of entities, the average number of metrics collected for each entity, and the sampling interval
  • The overall number of reports
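The throughput driver above is the product of the entity count, the metrics per entity, and the samples collected per day at the given interval. A minimal sketch of that estimate (the helper function is hypothetical, not part of the product):

```python
def daily_samples(entities: int, metrics_per_entity: int, interval_minutes: int) -> int:
    """Estimate daily samples: entities x metrics x samples collected per day."""
    samples_per_metric_per_day = (24 * 60) // interval_minutes
    return entities * metrics_per_entity * samples_per_metric_per_day

# Example: 5,000 entities, 100 metrics each, hourly sampling
print(daily_samples(5_000, 100, 60))  # 12000000 samples per day
```

This is the same arithmetic used by the worked example later in this topic.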

Note

The current compatibility offering for the PostgreSQL database supports large environments that can handle up to 80 million records per day with appropriate resource allocation.

TrueSight Capacity Optimization leverages the Oracle Real Application Clusters (RAC) architecture to scale out.

Important

This topic provides general sizing and scalability guidelines. Contact BMC Customer Support for your specific sizing requirements.

Sizing database servers

Follow these guidelines when sizing TrueSight Capacity Optimization database servers:

  1. A TrueSight Capacity Optimization database server requires a minimum base configuration. With this configuration, the database server supports up to 20 million samples a day within an eight-hour processing window.
  2. For every additional 10 million samples a day, add one CPU core @ 2 GHz and 4 GB of RAM.
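The incremental rule above can be sketched as follows; the function name and rounding to whole 10-million increments are assumptions for illustration:

```python
import math

def extra_resources(samples_per_day: int, base_capacity: int = 20_000_000):
    """CPU cores (2 GHz) and RAM (GB) needed beyond the minimum base
    configuration: +1 core and +4 GB RAM per additional 10 million samples/day."""
    extra_samples = max(0, samples_per_day - base_capacity)
    increments = math.ceil(extra_samples / 10_000_000)
    return increments, increments * 4  # (extra CPU cores, extra GB of RAM)

print(extra_resources(50_000_000))  # (3, 12): 3 extra cores, 12 GB extra RAM
print(extra_resources(20_000_000))  # (0, 0): base configuration suffices
```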

Sizing storage

The major drivers for sizing storage are:

  • The number of samples or records to process, which is determined by the number of time series and their time resolution.
  • The configured aging policies.

Follow these guidelines when sizing storage:

  1. SCSI disks are the minimum requirement for all configurations. However, BMC recommends using a storage area network (SAN), especially for medium-sized scenarios and larger.
  2. You can estimate the minimum I/O throughput that database storage requires to complete data warehousing activities in an 8-to-10-hour window by using the following guideline: 100 I/O operations per second (IOPS) for every one million rows loaded per day. The required throughput increases linearly with the number of samples; for example, approximately 1,000 IOPS are required for 10 million samples per day.
  3. With the default aging policies, approximately 50 GB of storage is required for every one million samples per day. The required storage increases linearly with the number of samples; for example, 500 GB is required for 10 million samples per day.

    Note

    These estimates are based on standard BMC TrueSight Capacity Optimization summarization policies.

  4. BMC recommends at least two sets of disks, with the IOPS distributed evenly across the sets.
  5. For large scenarios, BMC recommends the Oracle Partitioning option and Oracle Automatic Storage Management (ASM).

Additional factors that influence the sizing and scaling of storage

The preceding sizing values, specifically the amount of required storage, are only rough estimates. For more accurate values, also consider the following information:

  • Aging policies: They determine how long data is stored in the data warehouse and the amount of data that is processed when data is loaded into BMC TrueSight Capacity Optimization (for example, when importing historical data).

  • Out-of-sync samples: The data warehouse generates these samples automatically when incoming samples do not align with the system clock (for example, samples taken at 30-minute intervals that are not aligned with the top and bottom of the hour). Additional storage is required for these samples; it can be estimated as 10 percent of the overall number of samples for system metrics (and 1 percent for business driver KPIs). The actual values depend on the specific data sources and connectors, which can truncate time stamps or summarize samples; use them as a corrective factor on the generated estimates.

To show how these factors can affect the amount of storage required, consider an example of 5,000 managed entities with hourly samples for an average of 100 metrics per system, which generates 12 million daily samples. Based on the preceding guidelines, 12 million daily samples require 600 GB of storage. With a conservative out-of-sync sample rate of 10 percent, the estimate reaches 13.2 million daily samples, which requires 660 GB of storage. In addition, if the aging policy at the detail level is set at 95 months, the estimated storage required becomes 720 GB.
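The arithmetic in the example above can be reproduced as follows (the final step from 660 GB to 720 GB depends on the configured aging policy and is not recomputed here):

```python
# Worked example: 5,000 entities, 100 metrics each, hourly (24 samples/day).
entities, metrics, samples_per_metric = 5_000, 100, 24
daily = entities * metrics * samples_per_metric   # 12,000,000 daily samples
gb = 50 * daily / 1_000_000                       # 600 GB at 50 GB per million

# Corrective factor: +10% out-of-sync samples for system metrics.
daily_adjusted = int(daily * 1.10)                # 13,200,000 daily samples
gb_adjusted = 50 * daily_adjusted / 1_000_000     # 660 GB

print(daily, gb, daily_adjusted, gb_adjusted)     # 12000000 600.0 13200000 660.0
```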

You can safely ignore the effect that business driver KPIs have on storage requirements. Even as many as 500 business driver KPIs collected at hourly intervals do not significantly change the overall number of daily samples.

Note

The estimate here considers 5,000 generic servers with 40 metrics at hourly resolution, which resolves to approximately 280 GB with default aging (about 5 million records per day, including business driver KPIs).
