Compacting the datastore

As the BMC Atrium Discovery datastore runs, many database entries are created and deleted. Over time, the database structure can become fragmented, leaving 'gaps' in the data files. This tends to make the database files grow even when the amount of useful data remains constant.

The solution is to periodically compact the datastore using an offline, copy-based compaction. This defragments the databases, reclaiming the space wasted in the gaps.


Warning

Before compacting the datastore, back it up by performing an Appliance Snapshot. Failing to do so could result in data loss.

Running the utility

The tw_ds_compact utility compacts the datastore by copying the data files. The utility accesses the database files directly, outside the usual transactional environment, so the datastore must not be running while the compaction takes place. When you run the utility, you must choose a destination directory for the new database files.
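A minimal sketch of preparing a destination directory before running the utility. The path below is a placeholder, not a documented default, and the tw_ds_compact invocation is shown only as a comment because its exact options are described in the tw_ds_compact command-line reference:

```shell
#!/bin/sh
# Sketch: prepare an empty destination directory for the compacted
# database files. DEST_DIR here is a placeholder default for
# illustration; in practice, put it on a different physical disk.
set -e

DEST_DIR="${1:-/tmp/compacted_db}"

# Create the destination and refuse to continue if it is not empty,
# so existing files cannot be mixed with the newly written databases.
mkdir -p "$DEST_DIR"
if [ -n "$(ls -A "$DEST_DIR")" ]; then
    echo "Destination $DEST_DIR is not empty; refusing to continue" >&2
    exit 1
fi
echo "Destination $DEST_DIR ready"

# Then run the utility against that directory; the exact syntax and
# options are in the tw_ds_compact reference, e.g. (assumption):
#   tw_ds_compact "$DEST_DIR"
```

Refusing a non-empty destination is a deliberate safety choice: it prevents the copy-compaction from being accidentally pointed at a directory that already holds live database files.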

Best Practice

Store the new databases on a different disk, to minimize thrashing between reading the old files and writing the new ones. The new databases will generally be smaller than the originals, but you should ensure that the destination has at least as much free space as the current databases occupy.
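The free-space requirement above can be checked before starting. This is a generic sketch using standard `du` and `df`; the default paths are placeholders for illustration, so pass your actual datastore directory and destination as arguments:

```shell
#!/bin/sh
# Sketch: verify the destination filesystem has at least as much free
# space as the current database files occupy. Both default paths are
# placeholders (assumptions), not documented locations.
DATASTORE_DIR="${1:-.}"   # directory holding the current database files
DEST_DIR="${2:-.}"        # destination for the compacted files

# Size of the current database files, in 1K blocks.
used_kb=$(du -sk "$DATASTORE_DIR" | awk '{print $1}')

# Free space on the destination filesystem, in 1K blocks
# (-P requests the portable single-line output format).
free_kb=$(df -Pk "$DEST_DIR" | awk 'NR==2 {print $4}')

if [ "$free_kb" -lt "$used_kb" ]; then
    echo "Not enough space: need ${used_kb}K, have only ${free_kb}K" >&2
    exit 1
fi
echo "OK: ${free_kb}K free for ${used_kb}K of data"
```

Since the compacted databases are usually smaller than the originals, a destination that passes this check leaves comfortable headroom.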

For more information about the tw_ds_compact utility, including usage examples and the available command-line options, see tw_ds_compact.


Another script, tw_ds_online_compact, performs an 'on-line' compaction while the datastore is running. Bugs in the underlying Berkeley DB storage mean that the compaction may fill the /usr partition and abort prematurely, or may corrupt the datastore. Do NOT use tw_ds_online_compact. Use tw_ds_compact to compact the datastore.
