Sizing and scalability requirements for the Database Server
TrueSight Capacity Optimization supports Oracle and PostgreSQL databases. With Oracle, TrueSight Capacity Optimization can scale out by leveraging the Oracle Real Application Clusters (RAC) architecture. With PostgreSQL, the currently supported configuration can handle large environments of up to 80 million records per day, given appropriate resource allocation.
TrueSight Capacity Optimization database servers can be scaled horizontally and vertically. Use the sizing guidelines for the database server and storage to determine the appropriate hardware capacity that is required for deploying the database.
| Major drivers for sizing | Guidelines |
|---|---|
| Database server | |
| Storage | |
Additional factors that influence the sizing and scaling of storage
The preceding sizing values, specifically the amount of required storage, are only rough estimates. For more accurate values, also consider the following information:
- Aging policies: They determine how long data is retained in the data warehouse and the amount of data that is processed when data is loaded into TrueSight Capacity Optimization (for example, when importing historical data).
- Out-of-sync samples: These samples are generated automatically by the data warehouse when samples do not coincide with the system clock (for example, samples taken at 30-minute intervals that are not aligned with the top and bottom of the hour). Additional storage is required for these additional samples, and it can be estimated as 10 percent of the overall number of samples for system metrics (and 1 percent for business driver KPIs). These values depend on both the specific data sources and connectors, which can truncate time stamps or summarize samples. These values can be used as a corrective factor on the generated estimates.
To see how these factors affect the amount of storage required, consider an example of 5,000 managed entities with hourly samples and an average of 100 metrics per system, which generates 12 million daily samples. Based on the preceding guidelines, 12 million daily samples require 600 GB of storage. With a conservative out-of-sync sample rate of 10 percent, the estimate rises to 13.2 million daily samples, which require 660 GB of storage. In addition, if the aging policy at the detail level is set to 95 months, the estimated storage required becomes 720 GB.
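The arithmetic behind this example can be sketched as a small estimator. This is an illustrative calculation only, not an official sizing tool: the 50 GB per million daily samples ratio is derived from the figures above (12 million daily samples requiring 600 GB at default aging), and the function names are hypothetical.

```python
# Rough storage estimator based on the example figures in this section.
# Assumption (derived, not official): 600 GB per 12 million daily samples,
# i.e. about 50 GB of storage per million daily samples at default aging.
GB_PER_MILLION_DAILY_SAMPLES = 50

def daily_samples(entities, metrics_per_entity, samples_per_day=24):
    """Raw samples generated per day (hourly collection by default)."""
    return entities * metrics_per_entity * samples_per_day

def storage_gb(samples, out_of_sync_rate=0.10):
    """Estimated storage in GB, padded for out-of-sync samples."""
    padded = samples * (1 + out_of_sync_rate)
    return padded / 1_000_000 * GB_PER_MILLION_DAILY_SAMPLES

samples = daily_samples(5_000, 100)  # 5,000 entities x 100 metrics x 24
print(samples)                       # 12000000 daily samples
print(storage_gb(samples))           # ~660 GB with 10% out-of-sync padding
```

Longer detail-level aging policies add a further multiplier on top of this padded estimate, which is how the example reaches 720 GB.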
You can safely ignore the effect of business driver KPIs on storage requirements: even 500 business driver KPIs collected at hourly intervals do not significantly change the overall number of daily samples.
Note
The estimate provided here assumes 5,000 generic servers with 40 metrics at hourly resolution, which resolves to 280 GB with default aging (about 5 million records per day, business driver KPIs included).
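The note's record count can be cross-checked with the same arithmetic (a sketch, not part of any official tool):

```python
# Cross-check of the note's figures: 5,000 servers, 40 metrics each,
# collected hourly (24 samples per day).
servers, metrics, samples_per_day = 5_000, 40, 24
records_per_day = servers * metrics * samples_per_day
print(records_per_day)  # 4800000 -- close to the ~5 million/day in the
                        # note once business driver KPIs are added
```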