Sizing and scalability considerations
The sizing baselines in this topic are based on benchmark test results from BMC's performance labs. You can use these baselines for your on-premises BMC Helix IT Operations Management (BMC Helix ITOM) deployment.
The following applications were tested for BMC Helix ITOM sizing considerations:
- BMC Helix Dashboards
- BMC Helix Intelligent Automation
- BMC Helix Developer Tools
- BMC Helix Log Analytics
- BMC Helix Operations Management
- BMC Helix Portal
- BMC Helix Service Monitoring (BMC Helix AIOps)
BMC performance testing is based on four system usage profiles: Compact, Small, Medium, and Large.
Profile | Description |
---|---|
Compact | Minimal footprint for small-scale environments |
Small | Suitable for limited production workloads |
Medium | Recommended for standard enterprise deployments |
Large | For high-scale, high-throughput environments |
Compact is a special sizing that represents the minimum requirement for a functional BMC Helix Platform system. Compact systems are recommended only for proof-of-concept (POC) systems, where resilience and system performance under load are not considerations. All compact systems cited on this page are non-high-availability deployments for BMC Helix Operations Management and BMC Discovery. We recommend the compact sizing for a POC because it is a single-replica deployment.
If your usage exceeds the maximum numbers for the large sizing, contact BMC Support for guidance on how to size your infrastructure.
Parameters | Compact | Small | Medium | Large |
---|---|---|---|---|
Total number of devices. Important: Make sure that you do not exceed the number of monitored instances per device. | 100 | 3000 | 7500 | 15000 |
Monitored instances (other sources, such as Prometheus and REST API) | 1000 | 100000 | 250000 | 500000 |
Number of monitored instances per device | 10 | 33 | 33 | 33 |
Monitored attributes (other sources, such as Prometheus and REST API) | 10000 | 600000 | 1500000 | 3000000 |
Number of attributes per device | 100 | 200 | 200 | 200 |
Events per day (alarms, anomalies, and external events) | 5000 | 30000 | 75000 | 1500000 |
Configuration policies | 100 | 1000 | 1500 | 10000 |
Number of policies per device | 1 | 2 | 2 | 2 |
Number of groups (up to) | 50 | 1500 | 2500 | 4500 |
Number of concurrent users | 5 | 50 | 100 | 150 |
BMC Helix AIOps | | | | |
Services | 25 | 500 | 1000 | 3000 |
Situations | 10 | 300 | 500 | 1000 |
BMC Helix Continuous Optimization | | | | |
Ingestion of samples per day (millions) | 50 | 50 | 100 | 500 |
BMC Helix Log Analytics | | | | |
Log ingestion per day. Important: BMC has certified 250 connectors for a single tenant. | 500 MB | 30 GB | 100 GB | 250 GB |
Number of Logstash instances | 1 | 5 | 10 | 50 |
Number of days for retention | 3 | 3 | 3 | 3 |
- Number of MIs (Monitored Instances) per device: The values shown are based on internal standard benchmarks and appear consistent across Small, Medium, and Large; in practice, the number of MIs per device varies from agent to agent, depending on the type and complexity of the monitored device. As a reference, an agent can support up to 40,000 MIs. The current value (33 MIs per device) is a standardized baseline used internally for sizing calculations, and it reflects a conservative estimate derived from real-world deployments.
- Number of attributes per device: Similar to MIs, this number is standardized based on internal data. While actual numbers vary, 200 attributes per device is a safe average for capacity planning purposes.
- Configuration policies: The configuration policies include:
  - Monitoring policies
  - Alarm policies
  - Event policies
  - Blackout policies
  - Multivariate policies
- Example of a Monitored Instance (MI): A Monitored Instance is an individual metric or component being monitored, such as CPU usage, memory, a disk partition, or interface status. For example, if a device has CPU, memory, and two disks being monitored, that results in 4 MIs.
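As a rough illustration of how these per-device baselines scale, the following Python sketch estimates monitored instances and attributes from a device count. This is a minimal sketch; the function name and constants are illustrative and not part of any BMC tooling.

```python
# Estimate monitored instances and attributes from a device count,
# using the standardized per-device baselines from the sizing table.
MIS_PER_DEVICE = 33        # conservative internal baseline
ATTRS_PER_DEVICE = 200     # safe average for capacity planning

def estimate_footprint(devices: int) -> dict:
    """Return estimated monitored instances and attributes."""
    return {
        "devices": devices,
        "monitored_instances": devices * MIS_PER_DEVICE,
        "monitored_attributes": devices * ATTRS_PER_DEVICE,
    }

# A Small profile (3000 devices) yields 99,000 MIs and 600,000 attributes,
# in line with the 100,000 / 600,000 ceilings in the sizing table.
print(estimate_footprint(3000))
```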
Kubernetes infrastructure sizing requirements
Compute requirements are the combined CPU, RAM, and persistent volume disk requirements for the Kubernetes worker nodes.
These compute requirements are shared across all the worker nodes in your Kubernetes cluster. The worker nodes must together provide CPU and RAM that matches or exceeds the total infrastructure sizing requirement plus the per-worker-node logging requirement. This capacity is required to support the anticipated load for the benchmark sizing category of a BMC Helix IT Operations Management deployment.
Considerations when building a Kubernetes cluster
Before you consider the application requirements, there are several sizing considerations when building a Kubernetes cluster. The application requirements are in addition to your other resource requirements, which could include, but are not limited to:
- Kubernetes control plane nodes
- Kubernetes management software requirements
- Host operating system requirements
- Additional software (for example: monitoring software) that is deployed on the cluster
Refer to your Kubernetes distribution and software vendors to make sure that additional requirements are included in your cluster planning.
Calculate your deployment sizing
- Select your profile size (Compact, Small, Medium, Large).
- Identify the product components you plan to deploy.
- Use the sizing tables to sum the CPU and memory requirements for each component.
- If deploying multiple components, add their values together.
- Do not attempt to deduct shared infrastructure like BMC Helix Platform Common Services and Infra services, unless you have explicit sizing data for those components.
The sizing tables for BMC Helix Operations Management and BMC Helix Continuous Optimization are designed to reflect the full load of each product, including shared services. When combining BMC Helix Operations Management and BMC Helix Continuous Optimization, the total sizing already accounts for the infrastructure needed to support both products. Therefore, reducing or subtracting shared components may result in under-provisioning and is not recommended.
In such cases:
- Use the larger profile between the two products.
- Add the CPU and memory values from each table without deduction.
- If your deployment is resource-constrained or highly customized, contact BMC Support for optimization guidance.
Note: The sizing tables are intentionally conservative to ensure performance and scalability. Overestimating slightly is preferable to underestimating.
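To make the additive calculation concrete, here is a minimal Python sketch using the Medium-profile figures quoted in the examples and component tables below. The component keys and function name are illustrative, not part of any BMC tooling.

```python
# Medium-profile component figures (CPU cores, RAM GB), transcribed from
# the examples and component tables in this topic. Keys are illustrative.
MEDIUM_SIZING = {
    "core_stack": (67, 370),        # BHOM + Intelligent Integration/Automation
    "aiops_autoanomaly": (22, 157), # BMC Helix AIOps + AutoAnomaly add-on
    "log_analytics": (11, 54),      # BMC Helix Log Analytics add-on
    "continuous_optimization": (38, 180),  # BHCO standalone (see Example 2)
}

def total_requirements(components):
    """Sum CPU cores and RAM GB for the selected components, without
    deducting shared infrastructure such as Platform Common Services."""
    cpu = sum(MEDIUM_SIZING[c][0] for c in components)
    ram = sum(MEDIUM_SIZING[c][1] for c in components)
    return cpu, ram

# Example 1 below: core stack + AIOps = (89, 527).
print(total_requirements(["core_stack", "aiops_autoanomaly"]))
```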
Example 1: BMC Helix Operations Management and BMC Helix AIOps deployment
- Profile: Medium
- Components:
- BMC Helix Operations Management + Intelligent Integration + Intelligent Automation (Core Stack): 67 cores, 370 GB
- BMC Helix AIOps + AutoAnomaly Add-On: 22 cores, 157 GB
- Total: 89 cores, 527 GB memory
This configuration excludes Log Analytics and BMC Helix Continuous Optimization. Use this when deploying the core BMC Helix IT Operations Management stack with BMC Helix AIOps capabilities.
Example 2: Full stack deployment with BMC Helix Continuous Optimization
- Profile: Medium
- Components:
- BMC Helix Operations Management + BMC Helix Intelligent Integration + BMC Helix Intelligent Automation (Core Stack): 67 cores, 370 GB
- BMC Helix AIOps + AutoAnomaly Add-On: 22 cores, 157 GB
- BMC Helix Continuous Optimization (Standalone): 38 cores, 180 GB
- Total: 127 cores, 707 GB memory
When deploying BMC Helix Operations Management and BMC Helix Continuous Optimization together, do not attempt to subtract shared infrastructure (BMC Helix Platform Common Services and Infra services). The sizing tables already account for the full load of each product. Use the larger profile and sum the values conservatively to ensure performance.
Kubernetes cluster requirements
The application must have specific hardware resources made available to it for successful deployment and operation. Any competing workloads (such as your Kubernetes management or monitoring software) on the cluster and host operating system requirements must be considered in addition to the BMC Helix IT Operations Management suite requirements when building your Kubernetes cluster.
The following table represents the minimum amount of computing resources that must be made available by the Kubernetes cluster to the BMC Helix IT Operations Management deployment:
Deployment size | CPU (Core) | RAM (GB) |
---|---|---|
Compact | 28 | 182 |
Small | 78 | 401 |
Medium | 99 | 580 |
Large | 266 | 1350 |
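Before installing, you can check that the combined allocatable capacity of your worker nodes meets these totals. The following sketch uses the official Kubernetes Python client; the unit-parsing helpers, the chosen target profile, and the assumption that all listed nodes are workers are simplifications.

```python
# Sum allocatable CPU and memory across nodes and compare them against
# a target deployment size. Requires: pip install kubernetes
from kubernetes import client, config

# Totals from the "Kubernetes cluster requirements" table (cores, GB RAM).
REQUIRED = {"compact": (28, 182), "small": (78, 401),
            "medium": (99, 580), "large": (266, 1350)}

def parse_cpu(q: str) -> float:
    """Convert a Kubernetes CPU quantity ('8' or '7910m') to cores."""
    return float(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_mem_gb(q: str) -> float:
    """Convert a memory quantity such as '32780976Ki' or '31Gi' to GB."""
    units = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return float(q[:-2]) * factor / 1e9
    return float(q) / 1e9  # plain bytes

config.load_kube_config()  # assumes a configured kubeconfig
nodes = client.CoreV1Api().list_node().items  # also counts control-plane nodes
cpu = sum(parse_cpu(n.status.allocatable["cpu"]) for n in nodes)
mem = sum(parse_mem_gb(n.status.allocatable["memory"]) for n in nodes)

need_cpu, need_mem = REQUIRED["medium"]  # target size is an assumption
print(f"Allocatable: {cpu:.0f} cores, {mem:.0f} GB "
      f"(required: {need_cpu} cores, {need_mem} GB)")
```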
Core stack requirements
Deployment size | CPU (Core) | RAM (GB) |
---|---|---|
Compact | 22 | 112 |
Small | 56 | 256 |
Medium | 67 | 370 |
Large | 179 | 932 |
BMC Helix AIOps and AutoAnomaly add-ons
These requirements must be added on top of the BMC Helix Operations Management deployment.
Deployment size | CPU (Core) | RAM (GB) |
---|---|---|
Compact | 4 | 35 |
Small | 12 | 98 |
Medium | 22 | 157 |
Large | 62 | 326 |
BMC Helix Log Analytics add-ons
BMC Helix Log Analytics can be deployed:
- As a standalone product, or
- As an add-on to a BMC Helix Operations Management deployment.
Deployment size | CPU (Core) | RAM (GB) |
---|---|---|
Compact | 2 | 35 |
Small | 10 | 46 |
Medium | 11 | 54 |
Large | 25 | 92 |
Sizing requirements for BMC Helix Continuous Optimization
The following table provides the sizing requirements for a standalone BMC Helix Continuous Optimization deployment. If BMC Helix Continuous Optimization is deployed alongside BMC Helix Operations Management, do not subtract shared components (such as BMC Helix Platform Common Services and Infra services); the sizing tables already account for them, as described in Calculate your deployment sizing.
Deployment size | CPU requests (Millicore) | CPU limits (Millicore) | MEM request (GB) | MEM limit (GB) |
---|---|---|---|---|
Compact | 20400 | 104050 | 78.6 | 205.7 |
Small | 54280 | 180950 | 172.2 | 336.8 |
Medium | 67780 | 263150 | 309.1 | 544.2 |
Large | 154180 | 444300 | 780.1 | 1525.1 |
Kubernetes quotas
Quotas may be set up on the cluster namespaces to enforce maximum scheduled requests and limits. If a workload is scheduled beyond the configured quotas, Kubernetes prevents the scheduling, which can disrupt software operations in the namespace.
The following table shows the recommended settings to allow a BMC Helix IT Operations Management suite deployment:
Deployment size | CPU requests (Millicore) | CPU limits (Millicore) | MEM requests (GB) | MEM limits (GB) |
---|---|---|---|---|
Compact | 29474 | 162446 | 188 | 369 |
Small | 82274 | 301220 | 390 | 686 |
Medium | 102864 | 404496 | 588 | 1056 |
Large | 273684 | 717646 | 1378 | 1962 |
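These recommendations can be enforced as a namespace ResourceQuota. Here is a minimal sketch with the official Kubernetes Python client, using the Small-profile values from the table above; the quota and namespace names are placeholders.

```python
# Apply the recommended Small-profile quota to a namespace.
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig

# Small-profile values from the quota table; memory uses decimal G (GB).
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="helix-itom-quota"),  # placeholder name
    spec=client.V1ResourceQuotaSpec(hard={
        "requests.cpu": "82274m",
        "limits.cpu": "301220m",
        "requests.memory": "390G",
        "limits.memory": "686G",
    }),
)

# "helix-itom" is a placeholder for your deployment namespace.
client.CoreV1Api().create_namespaced_resource_quota(
    namespace="helix-itom", body=quota)
```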
Kubernetes node requirements
Your cluster must maintain a minimum number of worker nodes to provide an HA-capable environment for the application data lakes.
To support the loss of worker nodes in your cluster, you must provide extra worker nodes with resources equal to your largest worker node. This way, if a worker node goes down, the cluster retains the minimum resources required to recover the application.
For example, if you have 4 nodes of 10 vCPU and 50 GB RAM each, you need a fifth node of 10 vCPU and 50 GB RAM so that the loss of one worker node does not impact recovery.
Deployment size | Minimum worker nodes available |
---|---|
Compact | 4 |
Small | 6 |
Medium | 6 |
Large | 9 |
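A minimal sketch of this N+1 rule, assuming uniform worker nodes and one spare on top of the table's minimum (the function name is illustrative):

```python
# N+1 worker-node planning: provision one spare node sized like your
# largest node so a single node failure does not impact recovery.
MIN_WORKER_NODES = {"compact": 4, "small": 6, "medium": 6, "large": 9}

def nodes_to_provision(size, spares=1):
    """Minimum worker nodes from the table, plus spare capacity."""
    return MIN_WORKER_NODES[size] + spares

# A medium deployment: 6 worker nodes plus 1 spare = 7 nodes.
print(nodes_to_provision("medium"))
```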
Worker node disk requirements
Kubernetes worker nodes require the following free disk space allocation for container images:
Requirement | Value |
---|---|
Worker node system disk | At least 150 GB |
Pod specifications
The pod specification spreadsheet provides detailed information for sizing your environment. Cluster architects can use this information to help determine node sizes and cluster width.
Consider the following resource requirements of the largest pod:
- In a large deployment, the largest pod requires 13 CPUs and 34 GB of RAM.
- In a medium deployment, the largest pod requires 7 CPUs and 17 GB of RAM.
- In a small deployment, the largest pod requires 7 CPUs and 8 GB of RAM.
- In a compact deployment, the largest pod requires 3 CPUs and 7 GB of RAM.
When reviewing the specification spreadsheet, check the large replica counts to ensure that your cluster width is sufficient.
Persistent volume requirements
High-performance Kubernetes persistent volume storage is essential for overall system performance. BMC supports a bring-your-own-storage-class model for Kubernetes persistent volumes.
The following tables show the disk requirements in GB:
Block storage (GB)
Deployment size | Storage (GB) |
---|---|
Compact | 2454 |
Small | 4842 |
Medium | 7102 |
Large | 23242 |
Read-write-many (RWM) storage (GB)
Deployment size | Storage (GB) |
---|---|
Compact | 91 |
Small | 91 |
Medium | 91 |
Large | 91 |
We recommend solid-state drives (SSDs) with the following specifications:
Block Storage SSD Recommendations
Specification | Compact | Small | Medium | Large |
---|---|---|---|---|
Average latency | < 100ms | < 100ms | < 100ms | < 100ms |
Write throughput | 20 MB/s | 150 MB/s | 165 MB/s | 200 MB/s |
Read throughput | 100 MB/s | 800 MB/s | 1 GB/s | 1.2 GB/s |
IOPS Write | 1K | 3K | 3.2K | 3.5K |
IOPS Read | 3K | 10K | 11K | 12K |
RWM throughput and IOPS requirements:
Specification | Value |
---|---|
Read throughput | 10 MB/s |
Write throughput | 5 MB/s |
IOPS Read | 3K |
IOPS Write | 1K |
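To check candidate storage against these targets, you can compare measured benchmark results (for example, from an fio run) with the table values. A minimal sketch with the Medium-profile block-storage targets transcribed from the tables above; the measured values are placeholders.

```python
# Compare measured storage benchmarks against the Medium-profile
# block-storage targets from the SSD recommendations table.
MEDIUM_BLOCK_TARGETS = {
    "avg_latency_ms": 100,        # must stay below this
    "write_throughput_mbs": 165,  # must meet or exceed
    "read_throughput_mbs": 1000,
    "iops_write": 3200,
    "iops_read": 11000,
}

def meets_targets(measured: dict) -> list:
    """Return the names of any targets the measured storage misses."""
    failures = []
    if measured["avg_latency_ms"] >= MEDIUM_BLOCK_TARGETS["avg_latency_ms"]:
        failures.append("avg_latency_ms")
    for key in ("write_throughput_mbs", "read_throughput_mbs",
                "iops_write", "iops_read"):
        if measured[key] < MEDIUM_BLOCK_TARGETS[key]:
            failures.append(key)
    return failures

# Placeholder measurements; substitute your own benchmark results.
print(meets_targets({"avg_latency_ms": 2, "write_throughput_mbs": 180,
                     "read_throughput_mbs": 1100, "iops_write": 3500,
                     "iops_read": 12000}))
```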
Sizing guidelines for BMC Discovery
Deployment size | CPU | RAM (GB) | Disk (GB) | Number of servers per environment |
---|---|---|---|---|
Compact (not in high availability) | 4 | 8 | 100 | 1 |
Small | 16 | 32 | 300 | 3 |
Medium | 16 | 32 | 500 | 3 |
Large | 20 | 64 | 1000 | 5 |
For BMC Discovery sizing guidelines, refer to the Sizing and scalability considerations topic in the BMC Discovery documentation.
EFK logging requirements
Deploying the BMC Helix logging stack introduces additional hardware requirements that the Kubernetes cluster must provide, along with additional namespace quotas.
Deployment size | CPU (Core) | RAM (GB) | PVC (with 3-day retention) |
---|---|---|---|
Compact | 2 | 19 | 200 GB |
Small | 6 | 19 | 500 GB |
Medium | 10 | 20 | 1100 GB |
Large | 12 | 31 | 2100 GB |
EFK Logging Quota
Deployment size | CPU requests (Millicore) | MEM requests (GB) | CPU limits (Millicore) | MEM limits (GB) |
---|---|---|---|---|
Compact | 1600 | 9.25 | 11500 | 23 |
Small | 1600 | 9.25 | 14500 | 23 |
Medium | 2200 | 9.25 | 16500 | 23 |
Large | 7280 | 30.25 | 32500 | 35 |
FluentBit DaemonSet
The BMC Helix logging stack runs FluentBit collectors as a DaemonSet to access logs on the worker nodes. Requirements for the collectors are in addition to the previous requirements and depend on the number of worker nodes in the cluster. Use the table below to determine the pod size for your deployment, and multiply the requirements by the number of pods in your cluster. The cluster must additionally provide the calculated request values.
Deployment size | CPU Requests (Millicore) | CPU Limits (Millicore) | MEM Requests (GB) | MEM Limits (GB) |
---|---|---|---|---|
Compact | 50 | 60 | 0.15 | 0.18 |
Small | 50 | 60 | 0.15 | 0.18 |
Medium | 210 | 250 | 0.15 | 0.18 |
Large | 210 | 250 | 0.28 | 0.32 |
For example, to get the total CPU requests quota, multiply your worker node count by the value in the FluentBit DaemonSet table, and add the value from the EFK Logging Quota table.
Assume that you have 4 worker nodes in your compact-size cluster. Your total quota calculation is:
4 * 50 + 1600 = 1800 m
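The same calculation, generalized to all four quota dimensions, as a minimal Python sketch using the compact-profile values from the two tables above (the helper name is illustrative):

```python
# Total logging quota = base EFK quota + per-node FluentBit requirements.
# Values transcribed from the EFK Logging Quota and FluentBit DaemonSet
# tables for the compact profile.
EFK_BASE = {"cpu_requests_m": 1600, "cpu_limits_m": 11500,
            "mem_requests_gb": 9.25, "mem_limits_gb": 23}
FLUENTBIT_PER_NODE = {"cpu_requests_m": 50, "cpu_limits_m": 60,
                      "mem_requests_gb": 0.15, "mem_limits_gb": 0.18}

def logging_quota(worker_nodes):
    """Compact-profile logging quota for a cluster of the given width."""
    return {key: EFK_BASE[key] + worker_nodes * FLUENTBIT_PER_NODE[key]
            for key in EFK_BASE}

# 4 worker nodes: CPU requests = 4 * 50 + 1600 = 1800 m.
print(logging_quota(4))
```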
Disaster recovery requirements
If you enable disaster recovery, you need additional processor, memory, and disk space to operate successfully. The following guidance is based on the default disaster recovery configuration. Any modification to these settings might change the amount of disk storage required, and the storage must be recalculated.
The following table lists the additional resources required in the Kubernetes cluster (per data center):
Deployment size | CPU (Core) | RAM (GB) | MinIO storage per PVC (GB) | Total MinIO storage requirement for 4 PVCs (GB) |
---|---|---|---|---|
Compact | 6 | 30 | 900 | 3600 |
Small | 10 | 38 | 1050 | 4200 |
Medium | 11 | 49 | 2225 | 8900 |
Large | 12 | 62 | 10625 | 42500 |
The following table lists the additional recommendations to add to the namespace quotas (per data center):
BMC Helix IT Operations Management namespace quotas (DR additions)
Deployment size | CPU requests (Millicore) | CPU limits (Millicore) | MEM requests (GB) | MEM limits (GB) |
---|---|---|---|---|
Compact | 6000 | 26000 | 30 | 85 |
Small | 10000 | 30000 | 38 | 86 |
Medium | 11000 | 36000 | 49 | 91 |
Large | 12000 | 55000 | 62 | 112 |
RPO and RTO measurements
Recovery Point Objective (RPO) is the time-based measurement of tolerated data loss. Recovery Time Objective (RTO) is the targeted duration between a failure event and the point where operations resume.
The following table lists the RPO and RTO measurements:
Deployment size | Recovery Point Objective (RPO) | Recovery Time Objective (RTO) | Loss in productivity |
---|---|---|---|
Compact | 24 hours | 1 hour 30 minutes | 3 hours |
Small | 24 hours | 1 hour 30 minutes | 3 hours |
Medium | 24 hours | 2 hours | 4 hours |
Sizing requirement to enable automatic generation of anomaly events
The auto anomaly services are part of BMC Helix Operations Management and BMC Helix AIOps.
For more information, see Autoanomalies in the BMC Helix Operations Management documentation.
If you enable automatic generation of anomaly events, you will need additional processor, memory, and disk space to operate successfully. Make sure you add these resources to your cluster.
The following table lists the additional resources required to configure automatic generation of anomaly events:
Deployment size | CPU requests (Cores) | MEM requests (GB) | CPU limits (Cores) | MEM limits (GB) | PVC (GB) |
---|---|---|---|---|---|
Compact | 2 | 12 | 8 | 19 | 20 |
Small | 7 | 42 | 23 | 65 | 50 |
Medium | 11 | 77 | 37 | 128 | 150 |
Large | 34 | 162 | 78 | 233 | 300 |
Sizing considerations for migrating from PostgreSQL database 15.x to 17.x
To migrate data from PostgreSQL database 15.9 to 17.x, you must run the PostgreSQL migration utility.
For the migration to be successful, in addition to the resources listed in this topic, the following processor, memory, and storage are required:
Deployment size | CPU requests (Cores) | MEM requests (Gi) | CPU limits (Cores) | MEM limits (Gi) | PVC (Gi) |
---|---|---|---|---|---|
Compact | 4 | 5 | 13 | 33 | 140 |
Small | 4 | 6 | 13 | 35 | 140 |
Medium | 4 | 6 | 21 | 34 | 195 |
Large | 7 | 8 | 60 | 115 | 250 |
You can reclaim the resources after the upgrade.
The following table gives information about the time to migrate data from PostgreSQL database 15.x to 17.x:
Sizing requirements to configure the Self-monitoring solution
The Self-monitoring solution requires the following:
- A Windows VM for BMC Discovery Outpost. See BMC Discovery Outpost system requirements in the BMC Discovery documentation.
- To accommodate BMC Helix Monitoring Agents, the following additional resources in the production cluster:
  - Memory: 7680 Mi
  - CPU: 2500 m