This page provides information on recommended specification levels for machines running BMC Discovery standalone or as part of a cluster.

What hardware should I run BMC Discovery on?

BMC Discovery runs well on a variety of platform specifications. The more powerful the hardware on which BMC Discovery runs, or the greater the proportion of the hardware allocated to it when deployed as a virtual machine, the better it performs. Specifically, it:

  • scans faster
  • is more responsive to logged-in users
  • can store more data

Big Discovery clustering enables you to use two or more machines to increase scanning speed, system responsiveness, and total amount of data that can be handled. Big Discovery can help you reach levels of scale not achievable with current hardware on a single machine, or allow you to reach the same scale with cheaper hardware.

Minimum specification

BMC Discovery, whether running standalone or as part of a cluster, will run on the following minimum specification hardware:


  • RAM: 4 GB
  • CPU speed: any 64-bit processor
  • Disk: 50 GB
  • Network: 100 Mb Ethernet

This is still only a guideline, not a hard limit. Attempting to run BMC Discovery on lower specification hardware than this will result in very poor performance. Even on hardware matching this specification, BMC Discovery will not run particularly well – discovery speeds will be quite low, and the 50 GB disk will soon fill.

A “maximum specification”

Increasing the amount of RAM, CPU speed, cores, and disk and network speeds steadily increases the performance of BMC Discovery. It is best to increase specifications across the board, otherwise one element will become the bottleneck. For example, there is little to be gained by putting two modern 12-core Xeons in a machine with 4 GB RAM. Instead, divide the investment between CPU, RAM, and disk.

There is a point, though, where BMC Discovery no longer makes good use of the hardware. We have improved the way in which BMC Discovery uses powerful hardware in each of the last few releases (10.x, 9.x, and 8.3) and continue to improve it, but for now there is not much to gain by going beyond 12 or so cores. More RAM will always help, but beyond around 64 GB it may not be worth the additional expense.

Recommended specification

Considering the hardware available today, the following specification range gives a good performance/price ratio:


  • RAM: 8 – 32 GB
  • CPU cores: 4 – 12
  • CPU speed: 2 x 3.2 GHz Xeon E3
  • Disk: see below
  • Network: Gigabit Ethernet

Disk size should be chosen according to how much you need to scan and how much directly discovered data history you want to retain. Disks should be local (not SAN) and you should have access to multiple spindles – at least two. Having two spindles enables you to separate the datastore data from the datastore transaction logs, which greatly improves performance.

From BMC Discovery 10.2, CMDB synchronization supports multiple CMDB connections. The disk space consumed for each device (for example, a host node) for each connection is typically 200 kB, though this may vary depending on your data.
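As a rough illustration, the per-device figure can be used to estimate the additional disk space CMDB synchronization will consume. The function name and the 200 kB default are assumptions for this sketch; actual usage varies with your data.

```python
def cmdb_sync_disk_mb(devices: int, connections: int, kb_per_device: int = 200) -> float:
    """Estimate disk space (in MB) consumed by CMDB synchronization.

    Illustrative only: assumes the typical ~200 kB per device per
    connection figure quoted above; real usage depends on your data.
    """
    return devices * connections * kb_per_device / 1024

# For example, 10,000 devices synchronized to 2 CMDB connections:
# cmdb_sync_disk_mb(10_000, 2) -> 3906.25 MB, roughly 3.8 GB
```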

The CPU given is an example; any modern mid-range CPU is suitable.

8 GB of RAM is fine, but you are likely to get considerably better performance with 16 GB.

Memory and swap

Memory and swap usage depends on the nature of the discovery being performed, with Mainframe and VMware (vCenter/vSphere) devices requiring more than basic UNIX and Windows devices. You can determine whether additional memory is needed in your appliance by monitoring swap usage.

The disk configuration utility uses the following calculation to determine the best swap size.

  • Where the amount of memory is less than 16 GB swap size is set at double the memory size.
  • Where the amount of memory is between 16 and 32 GB swap is set at 32 GB.
  • Where the amount of memory exceeds 32 GB swap is set to equal the memory.
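The three rules above can be sketched as a small function. This is a minimal illustration of the calculation as described, not the disk configuration utility's actual code, which may differ in rounding and edge cases.

```python
def recommended_swap_gb(memory_gb: float) -> float:
    """Swap size suggested by the disk configuration utility's rules.

    Sketch of the calculation described above; the real utility may
    handle rounding and boundary values differently.
    """
    if memory_gb < 16:
        return memory_gb * 2   # under 16 GB: double the memory size
    elif memory_gb <= 32:
        return 32.0            # 16 - 32 GB: fixed at 32 GB
    else:
        return memory_gb       # over 32 GB: equal to the memory size

# For example: 8 GB RAM -> 16 GB swap, 24 GB RAM -> 32 GB swap,
# 64 GB RAM -> 64 GB swap.
```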

The recommended figures for swap can be exceeded; there is no harm in doing so. It might prove simpler to configure a larger quantity of swap up front than to extend an existing swap partition later, as this allows RAM to be added in line with the calculation above. All virtual appliances are initially configured with 8 GB of swap.

How does Big Discovery change things?

Big Discovery enables you to scan more, more quickly, and with modest hardware. For example, instead of deploying BMC Discovery 9 on a 64 GB, 16-core machine, you might install BMC Discovery 10 on a cluster of four 16 GB, 4-core machines, and performance is likely to be slightly better.

You can increase scanning speed by increasing the number of machines in your cluster, and increasing the specification of each machine. It is your choice how you go about this. You could upgrade all the machines in your 3-machine cluster from 8 GB RAM to 16 GB RAM, or you could add another 8 GB machine to the cluster to make a 4-machine cluster. Both increase performance; the one you decide on will depend on your business policies.

How many machines do I need to scan my environment?

It is very difficult to say how much hardware you need to scan a given environment. The performance of BMC Discovery depends on many factors:

  • The performance of your network.
  • The environment being scanned:
    • dark space versus non-dark space
    • the size of the machines being scanned
    • the amount of software running on those machines
    • and many other considerations
  • The patterns you have loaded.
  • and so on

However, as a rough guideline, you can scan an environment of 10,000 to 15,000 machines every day with a single BMC Discovery version 10 appliance running on reasonable hardware.

Fault tolerance

Should you turn fault tolerance on? Here are the reasons for and against:

For:

  • Enabling fault tolerance means a machine in your cluster can fail (see below) and you suffer no data loss and very limited service interruption (in the order of minutes).

Against:

  • Enabling fault tolerance slows scanning speed.
  • Enabling fault tolerance means you can store less data.

A “failing” machine means:

  • Hardware failure
  • Network failure
  • Power loss
  • Crash

With fault tolerance disabled, and assuming each machine in a cluster is identical, each machine added to a cluster increases its scanning speed. With fault tolerance enabled, the slowdown in scanning speed is evident in the following table.


Number of hosts that can be scanned, scanning once per day:

  Cluster size   Fault tolerance disabled   Fault tolerance enabled
  1              10,000 – 15,000            N/A
  2              15,000 – 25,000            15,000 – 20,000
  3              20,000 – 35,000            20,000 – 25,000
  4              30,000 – 45,000            25,000 – 35,000
  5              40,000 – 65,000            30,000 – 45,000
  6              50,000 – 80,000            40,000 – 55,000

Consolidation does not change these figures – they apply whether you are scanning or consolidating. So, to consolidate 40,000 hosts a day, you need a cluster of 6 machines – just as if you want to scan 40,000 hosts a day.

Note that these figures are for a cluster running on reasonable hardware. It is possible to get the same performance with fewer machines by using more powerful hardware.
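As a rough planning aid, the table above can be encoded as a lookup. The cluster sizes (1 – 6) and the conservative rule of matching against the lower end of each range are assumptions made for this sketch; they are not part of the product.

```python
# Scanning capacity ranges (hosts per day) from the table above,
# keyed by assumed cluster size. Figures assume reasonable hardware.
CAPACITY = {
    1: {"ft_off": (10_000, 15_000), "ft_on": None},
    2: {"ft_off": (15_000, 25_000), "ft_on": (15_000, 20_000)},
    3: {"ft_off": (20_000, 35_000), "ft_on": (20_000, 25_000)},
    4: {"ft_off": (30_000, 45_000), "ft_on": (25_000, 35_000)},
    5: {"ft_off": (40_000, 65_000), "ft_on": (30_000, 45_000)},
    6: {"ft_off": (50_000, 80_000), "ft_on": (40_000, 55_000)},
}

def machines_needed(hosts: int, fault_tolerance: bool) -> int:
    """Smallest cluster size whose lower bound covers the host count.

    Conservative by design: using the lower end of each range errs on
    the side of more hardware rather than less.
    """
    key = "ft_on" if fault_tolerance else "ft_off"
    for size in sorted(CAPACITY):
        rng = CAPACITY[size][key]
        if rng is not None and rng[0] >= hosts:
            return size
    raise ValueError("beyond the table; add machines or use more powerful hardware")

# For example, scanning or consolidating 40,000 hosts a day with
# fault tolerance enabled requires a cluster of 6 machines.
```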

Previous recommendations

In previous releases, sizing guidelines were provided by defining classes of appliance deployment. The introduction of Big Discovery means that these classes are no longer valid for clustered deployments. They do still have some relevance for single appliance deployments, but where scale is required, just add hardware!

The appliance specification page in the UI retains the classes for standalone deployments, but not for clusters. We are reviewing BMC Discovery performance as version 10 is deployed, and might extend our recommendations based on our customers' real experiences.

For more information on the previous recommendations, see Sizing guidelines in the version 9.0 documentation.

© Copyright 2004 - 2018 BMC Software, Inc.