This page provides hardware specifications for machines running BMC Discovery in standalone or clustered environments. The following topics are provided in this section:
What hardware should I run BMC Discovery on?
BMC Discovery clustering enables you to use two or more machines to increase scanning speed, system responsiveness, and the total amount of data that can be handled. BMC Discovery can help you reach levels of scale not achievable with current hardware on a single machine, or reach the same scale with cheaper hardware. BMC Discovery runs well on a variety of platform specifications. With better hardware resources allocated to it, whether deployed on physical hardware or as a VM, BMC Discovery:
- scans faster
- is more responsive to logged-in users
- can store more data
BMC Discovery, whether running standalone or as part of a cluster, performs well on the following minimum specification hardware. These are guidelines, and not hard limits. Attempting to run BMC Discovery on lower specification hardware might result in poor performance. Even on hardware matching this specification, BMC Discovery will not run particularly well – discovery speeds will be quite low, and the 50 GB disk will soon fill.
- CPU: any 64-bit processor
- Network: 100 Mb Ethernet
- Disk: 50 GB
Increasing the amount of RAM, CPU speed, cores, disk, and network speeds steadily increases the performance of BMC Discovery. It is best to increase specifications across the board, otherwise one element will become the bottleneck. For example, there is little to be gained by putting two modern 12-core Xeons in a machine with 4 GB RAM. Instead, divide the investment between CPU, RAM, and disk.
There is a point, though, where BMC Discovery no longer makes good use of the hardware. We have improved the way in which BMC Discovery uses powerful hardware in each of the last few releases and continue to improve it, but for now there is not much to gain by going beyond 12 or so cores. More RAM will always help, but beyond around 64 GB it may not be worth the additional expense.
Considering the hardware available today, the following specification range gives a good performance/price ratio:
- RAM: 16 – 32 GB. For environments in which you discover storage, you should have more than 16 GB RAM.
- CPU cores: 4 – 12
- CPU: 2 x 3.2 GHz Xeon E3
Disk size should be chosen according to how much you need to scan and how much directly discovered data history you want to retain.
From BMC Discovery 10.2, CMDB synchronization supports multiple CMDB connections. The disk space consumed for each device (for example, a host node) for each connection is typically 200 kB, though this may vary depending on your data.
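As a rough sketch of the arithmetic, the per-connection figure above can be used to estimate the disk space CMDB synchronization will consume. The function name and the 200 kB default are illustrative; real usage varies with your data.

```python
def cmdb_sync_disk_mb(devices: int, connections: int, kb_per_device: int = 200) -> float:
    """Estimate disk space (in MB) consumed by CMDB synchronization.

    kb_per_device defaults to the typical 200 kB per device per
    connection quoted above; actual usage depends on your data.
    """
    return devices * connections * kb_per_device / 1024

# 10,000 hosts synchronized to 2 CMDB connections:
# 10,000 * 2 * 200 kB = 3906.25 MB (about 4 GB)
```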
Disk performance is also important. When actively scanning, BMC Discovery places a heavy read and write load on the database. Consequently the performance of BMC Discovery benefits greatly from giving it the fastest disks possible; the slower your disk I/O, the slower BMC Discovery will be. As a rule of thumb, if you are spending money on infrastructure to support BMC Discovery, then choosing to spend it on fast storage first is a good strategy. For example:
- Local Solid State Disks—SSDs local to the appliance are likely to be the fastest option. This is true for physical appliances as well as virtual appliances where the hypervisor's datastores map to SSDs that are local to the hypervisor.
- Remote Storage Area Network—although likely to be slower than local SSDs, remote storage can be a good choice, assuming that the storage is close in network terms to the consuming appliance. Again SSDs will be faster than HDDs.
- Local Hard Disk Drive—local HDDs are likely to be the cheapest, but also slowest option. If forced to use HDDs then always use the fastest possible.
The CPU given is an example. Some kind of mid-range modern CPU is ideal.
16 GB of RAM is fine, but you are likely to get considerably better performance with 32 GB.
Memory and swap
Memory and swap usage depends on the nature of the discovery being performed, with Mainframe and VMware (vCenter/vSphere) devices requiring more than basic UNIX and Windows devices. You can determine whether additional memory is needed in your appliance by monitoring swap usage.
The disk configuration utility uses the following calculation to determine the best swap size.
- Where the amount of memory is less than 16 GB swap size is set at double the memory size.
- Where the amount of memory is between 16 and 32 GB swap is set at 32 GB.
- Where the amount of memory exceeds 32 GB swap is set to equal the memory.
The recommended figures for swap can be exceeded; there is no harm in doing so. It might prove simpler to configure the higher quantity of swap up front than to extend an existing swap partition later, as this still allows the RAM demand to be derived as described above. All virtual appliances are initially configured with 8 GB of swap.
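The three rules used by the disk configuration utility can be sketched as a small function (an illustration of the calculation above, not the utility's actual code):

```python
def recommended_swap_gb(memory_gb: float) -> float:
    """Swap size derived from installed memory, per the rules above."""
    if memory_gb < 16:
        return memory_gb * 2   # less than 16 GB: double the memory size
    if memory_gb <= 32:
        return 32              # between 16 and 32 GB: fixed at 32 GB
    return memory_gb           # more than 32 GB: equal to the memory
```

For example, a 24 GB appliance gets 32 GB of swap, while a 64 GB appliance gets 64 GB.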
How does Big Discovery change things?
Big Discovery enables you to scan more targets, more quickly, on modest hardware. For example, instead of deploying BMC Discovery 9 on a 64 GB, 16-core machine, you might install BMC Discovery 10 on a cluster of four 16 GB, 4-core machines, and the performance is likely to be slightly better.
You can increase scanning speed by increasing the number of machines in your cluster, and increasing the specification of each machine. It is your choice how you go about this. You could upgrade all the machines in your 3-machine cluster from 16 GB RAM to 32 GB RAM, or you could add another 16 GB machine to the cluster to make a 4-machine cluster. Both increase performance; the one you decide on will depend on your business policies.
How many machines do I need to scan my environment?
It is very difficult to say how much hardware you need to scan a given environment. The performance of BMC Discovery depends on many factors:
- The performance of your network.
- The environment being scanned:
  - dark space versus non-dark space
  - the size of the machines being scanned
  - the amount of software running on those machines
  - and many other considerations
- The patterns you have loaded.
- And so on.
However, as a rough guideline, you can scan an environment of 10,000 to 15,000 machines every day with a single BMC Discovery version 10 appliance running on reasonable hardware.
Should you turn fault tolerance on? Here are the reasons for and against:
- Enabling fault tolerance means a machine in your cluster can fail (see below) and you suffer no data loss and very limited service interruption (in the order of minutes).
- Enabling fault tolerance slows scanning speed.
- Enabling fault tolerance means you can store less data.
A “failing” machine means:
- Hardware failure
- Network failure
- Power loss
With fault tolerance disabled, and assuming each machine in a cluster is identical, each machine added to a cluster increases its scanning speed. With fault tolerance enabled, the slowdown in scanning speed is evident in the following table.
Number of hosts that can be scanned, scanning once per day
| Machines in cluster | Fault tolerance disabled | Fault tolerance enabled |
|---|---|---|
| 1 | 10,000 – 15,000 | — |
| 2 | 15,000 – 25,000 | 15,000 – 20,000 |
| 3 | 20,000 – 35,000 | 20,000 – 25,000 |
| 4 | 30,000 – 45,000 | 25,000 – 35,000 |
| 5 | 40,000 – 65,000 | 30,000 – 45,000 |
| 6 | 50,000 – 80,000 | 40,000 – 55,000 |
Consolidation does not change these figures – they apply whether you are scanning or consolidating. So, to consolidate 40,000 hosts a day, you need a cluster of 6 machines – just as if you want to scan 40,000 hosts a day.
Note that these figures are for a cluster running on reasonable hardware. It is possible to get the same performance with fewer machines by using more powerful hardware.
In previous releases, sizing guidelines were provided by defining classes of appliance deployment. The introduction of Big Discovery means that these classes are no longer valid for clustered deployments. They do still have some relevance for single appliance deployments, but where scale is required, just add hardware!
The appliance specification page in the UI retains the classes for standalone deployments, but not for clusters. We are reviewing BMC Discovery performance as version 10 is deployed, and might extend our recommendations based on our customers' real experiences.
For more information on the previous recommendations, see Sizing guidelines in the version 9.0 documentation.