
Managing compute platforms requires defining which data sources can be used to support capacity management use cases.

This table lists the recommended options for extracting data into BMC TrueSight Capacity Optimization, provided either by out-of-the-box ETL modules or by ETL modules from Partner add-ons (note that the latter require a separate license).

Platform classification, examples, and options for extracting data:

  • AIX on IBM POWER (PowerVM hypervisor) (example: any POWER machine with at least one AIX partition)
    Options: BMC TrueSight Capacity Optimization agents; BMC TrueSight Operations Management
  • Solaris in any LDOM/Zone/DSD (example: any Solaris virtualization technology)
    Options: BMC TrueSight Capacity Optimization agents; BMC TrueSight Operations Management
  • HP Integrity (example: HP Integrity server)
    Options: BMC TrueSight Capacity Optimization agents
  • HP-UX virtualization (examples: HP nPar, HP vPar)
    Options: BMC TrueSight Capacity Optimization agents
  • Xen hypervisor (example: any supported Linux distro running Xen)
    Options: BMC TrueSight Capacity Optimization agents; Citrix XenServer Extractor
  • Microsoft Hyper-V
    Options: BMC TrueSight Capacity Optimization agents; Microsoft SCOM ETL module from the Moviri Integrator for TrueSight Capacity Optimization add-on (requires a separate license; see Market Zone Direct ETL modules)
  • KVM (example: any supported Linux distro running KVM)
    Options: BMC TrueSight Capacity Optimization agents
  • VMware vSphere (ESXi and vCenter) (example: vCenter)
    Options: vCenter Extractor Service
  • Any supported OS running directly on a "standalone" x86-based machine (examples: Linux or Windows)
    Options: BMC TrueSight Capacity Optimization agents; BMC TrueSight Operations Management (on supported platforms)
  • Any supported OS running on an x86-based hypervisor (examples: Linux or Windows)
    Options: BMC TrueSight Capacity Optimization agents
  • Any supported OS
    Options: ETL modules that integrate management tools from third-party vendors, such as those provided by the Moviri Integrator for TrueSight Capacity Optimization add-on (requires a separate license; see Market Zone Direct ETL modules)

vSphere

With BMC TrueSight Capacity Optimization, you can manage the capacity of vSphere running business workloads as virtual machines (VMs). There are two levels of capacity management for vSphere alone:

  • Managing the capacity of the infrastructure, including the VMware ESXi hosts, storage, and networks.
  • Managing the capacity of an individual virtual machine running a business workload.

In addition, vSphere capacity can be managed in the context of a cloud management system:

  • In the context of VMware vCloud Director
  • In the context of CLM.

Below, we consider each of these levels of capacity management separately.

Managing the capacity of the vSphere infrastructure

Benefits of using BMC TrueSight Capacity Optimization to manage the capacity of the vSphere infrastructure

BMC TrueSight Capacity Optimization lets you visualize the capacity of the ESXi hosts, clusters, resource pools, storage, and networks. Considering the clusters as containers of resources, BMC TrueSight Capacity Optimization tells you the spare capacity you have available to support additional VMs. It tells you which resource (CPU, memory, disk space, etc.) is limiting the capacity. BMC TrueSight Capacity Optimization also tells you how many days you have before a container reaches saturation, using the trends in the data. For more information, see Using Out-of-the-box Views: vSphere View.

How to manage the capacity of your vSphere infrastructure

You collect data from vSphere using the OOTB vCenter extractor service. The connector loads configuration and performance information on all (or a subset of) the entities in a vCenter into the BMC TrueSight Capacity Optimization DWH. BMC TrueSight Capacity Optimization contains built-in analysis and model templates, reports, and views, which let you answer questions about spare capacity, limiting resources, and days to saturation.

The vCenter extractor service creates BMC TrueSight Capacity Optimization entities of types "Virtual Host - VMware" and "Virtual Machine - VMware" respectively for the ESXi hosts and VMs.

Deploying and Configuring the vCenter extractor service

A single vCenter extractor service can collect data continuously from either an entire vCenter or a subset of one. To deploy multiple extractors against a single vCenter, use the "extract clusters list" option when configuring each extractor so that it loads only the entities (hosts, virtual machines, datastores, and resource pools) related to a subset of the clusters in the vCenter. For more information, see the configuration topics for the vCenter extractor service.

Sizing the vCenter extractor service

BMC TrueSight Capacity Optimization lets you manage very large vSphere environments containing multiple vCenters. The BMC TrueSight Capacity Optimization DWH and Application Server must be sized appropriately for the entire environment, including all the data expected to be loaded every day. For general information on sizing, see Sizing and scalability considerations.

The vCenter extractor service is a "service connector". When sizing the ETL Engines, follow the guidelines for service connectors. If you have a very large vCenter, you may have to deploy multiple extractors against it. For more information, see Sizing considerations for ETL Engine servers.

Specifically for the vCenter extractor service, the main drivers for sizing are:

  • the number of VMs, and
  • the number of clusters managed.

The following rules of thumb can be used:

  • No more than 2,000 VMs per connector.
  • Scheduler heap: 2 GB per 2,000 VMs.
  • Data storage on the ETL Engine machine: 10 GB per 2,000 VMs.

For heavily populated clusters, the 2,000-VM figure in the above rules may correspond to a single cluster. If there are many small clusters, however, note that the number of clusters affects the number of threads used by the vCenter extractor service; limit the number of clusters per vCenter extractor to allow enough CPU resources for timely polling of data.
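The rules of thumb above can be turned into a quick back-of-the-envelope calculation. The following sketch is purely illustrative (the function name and output structure are not part of the product); it simply applies the 2,000-VMs-per-connector, 2 GB heap, and 10 GB storage ratios:

```python
import math

# Rules of thumb from the text above (assumptions, not hard limits)
VMS_PER_CONNECTOR = 2000
HEAP_GB_PER_BLOCK = 2       # scheduler heap per 2,000 VMs
STORAGE_GB_PER_BLOCK = 10   # ETL Engine data storage per 2,000 VMs

def size_vcenter_extractors(total_vms: int) -> dict:
    """Rough per-environment totals derived from the rules of thumb."""
    blocks = math.ceil(total_vms / VMS_PER_CONNECTOR)
    return {
        "connectors": blocks,
        "scheduler_heap_gb": blocks * HEAP_GB_PER_BLOCK,
        "etl_storage_gb": blocks * STORAGE_GB_PER_BLOCK,
    }

# A hypothetical 5,000-VM vCenter would need 3 connectors,
# 6 GB of scheduler heap, and 30 GB of ETL Engine storage.
print(size_vcenter_extractors(5000))
```

Remember that these are starting points; validate against the sizing topics referenced above for your actual environment.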

How to split a vCenter among multiple vCenter extractors

When configuring the vCenter extractor service, it is possible to specify two key advanced properties:

  • Extract clusters list: specify a list of clusters whose data is to be extracted.
  • Blacklist file path: specify a file containing a list of clusters whose data is not to be extracted.

(For configuration details, see the topic on continuous extraction of vCenter data.)

Using the cluster white list and the blacklist file, you can distribute the clusters of a single vCenter among multiple vCenter extractors, running the extractors either on the same ETL Engine or on different ETL Engines, depending on the sizing constraints. A standardized way of using the lists among a set of N extractors is to use white lists for the first N-1 extractors and a blacklist for the last extractor, so that the last extractor automatically picks up any clusters newly added to the vCenter.
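The (N-1) white lists plus one blacklist scheme can be sketched as follows. This is an illustrative helper, not product functionality; in practice you would paste the resulting lists into each extractor's advanced properties:

```python
def split_clusters(clusters: list[str], n_extractors: int):
    """Assign clusters round-robin to the first N-1 extractors as
    white lists; the last extractor gets a blacklist of every
    explicitly assigned cluster, so any cluster later added to the
    vCenter falls through to it automatically."""
    whitelists = [[] for _ in range(n_extractors - 1)]
    for i, cluster in enumerate(clusters):
        whitelists[i % (n_extractors - 1)].append(cluster)
    # The blacklist is the union of all white-listed clusters.
    blacklist = [c for wl in whitelists for c in wl]
    return whitelists, blacklist

# Hypothetical example: 5 clusters spread across 3 extractors.
wls, bl = split_clusters(["c1", "c2", "c3", "c4", "c5"], 3)
# Extractors 1 and 2 use white lists; extractor 3 blacklists all
# five clusters and therefore picks up any newly added cluster.
```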

Relationship with storage (requires Sentry integration version 3.5 or later)

Sometimes it is necessary to map datastores to their backing storage volumes. For VMFS datastores backed by shared LUNs, it is possible to link storage data extracted for the underlying LUNs with the datastores.

The vCenter Extractor Service loads the config metric “DSTORE_DEVICE”, which is the external “naa” ID of the first backing block storage volume. For more information, see Datastore config metrics.

For more details, see Managing Storage environments.
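The linkage works as a join on the "naa" ID. The sketch below is hypothetical (the dictionaries and IDs are invented for illustration; the product's actual schema differs), but it shows the matching logic enabled by the DSTORE_DEVICE metric:

```python
# Datastore name -> DSTORE_DEVICE (external "naa" ID of the first
# backing block storage device). Values here are made up.
datastores = {
    "ds-prod-01": "naa.600a098038303053",
    "ds-prod-02": "naa.600a098038303054",
}

# LUN external ID -> storage array volume, as loaded by a storage ETL.
storage_volumes = {
    "naa.600a098038303053": "array1/vol17",
}

# Link each datastore to its backing volume where the naa IDs match;
# None indicates no storage data was loaded for that LUN.
mapping = {ds: storage_volumes.get(naa) for ds, naa in datastores.items()}
```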

Managing the capacity of an individual virtual machine

For managing the capacity of a virtualized system running inside an individual VM, you need the "inside view" of the VM's performance. Just like any server running an operating system on physical hardware, the virtual node runs processes that do useful work. Performance data can be extracted from this system using an Agent and optionally used to define Gateway Server workloads. In BMC TrueSight Capacity Optimization, we call this kind of system a "virtual node". It is distinct from the system that represents the VM as seen by the hypervisor; BMC TrueSight Capacity Optimization represents the two using two distinct System Types.

BMC TrueSight Capacity Optimization lets you collect data directly from the operating system (Windows, or Linux) using Gateway Server data collection capabilities and OOTB BMC TrueSight Capacity Optimization connectors to Gateway Server. You deploy the Gateway Server data collection components, then use one of the OOTB Gateway Server connectors to load the results into the BMC TrueSight Capacity Optimization DWH. The connector loads configuration and performance information on any of these virtualized systems. BMC TrueSight Capacity Optimization contains built-in analysis and model templates, reports, and views, which let you answer questions about spare capacity, limiting resources, and days to saturation.

For new installations, we recommend the "Virtual Nodes vis file parser" connector. For more information, see BMC - TrueSight Capacity Optimization Gateway VIS files parser.

For existing Gateway Server installations, it may be convenient to use the BMC - TrueSight Capacity Optimization CDB-CDB extractor instead.

For details about how to configure the Gateway Server data collection capabilities including agents and BMC TrueSight Capacity Optimization, see the section on Gateway Server data collection below.

AIX on IBM Power Series

IBM Power Series (POWER5, POWER6, POWER7) hosts are large servers, sometimes called "frames", that support virtualized "partitions" that run operating systems. BMC TrueSight Capacity Optimization lets you collect data from these servers in order to manage capacity.

Benefits of using BMC TrueSight Capacity Optimization to manage the capacity of your IBM Power Series environment

An IBM Power Series host can be partitioned into logical partitions (LPARs) in several ways: in dedicated mode, processors are assigned entirely to partitions; in shared dedicated mode, partitions may "donate" their spare CPU cycles to others; in shared mode, fractions of processing units are assigned as "entitlements" from a shared pool. The operating system (AIX, IBM i, or Linux) running within the LPAR may further perform workload partitioning. Meanwhile, certain special partitions are dedicated to virtual I/O; these partitions, called VIO Servers, manage physical storage and network resources and mediate access to these by other partitions. The end result of these partitioning schemes is that the CPU, memory, storage, and network resources managed by the Power Series host are used by the business workloads.

BMC TrueSight Capacity Optimization lets you visualize the capacity of the Power Series hosts (servers) and their VIO partitions. Considering the hosts as containers of resources, BMC TrueSight Capacity Optimization tells you the spare capacity you have available to support additional LPARs. It tells you which resource (CPU, memory, disk space, etc.) is limiting the capacity. BMC TrueSight Capacity Optimization also tells you how many days you have before a container reaches saturation, using the trends in the data. For more information, see Out-of-the-box views: AIX View.

Options for collecting data about IBM Power Series frames

The Hardware Management Console (HMC) is a separate management station for administering a number of IBM Power frames. An HMC is required in any substantial Power series installation.

An Agent can collect configuration and performance data either only from the AIX operating system within the partition it is running on, or both from the AIX partition and from the HMC. The second method provides data about the entire frame and all the partitions in it.

We describe these options in more detail below.

The following figure illustrates an example of three Power Series frames and one HMC. The HMC is a separate machine, typically a rack-mounted server. The three Power Series frames can be either rack servers or blade servers. Not shown in the figure are any direct-attached data storage drives or SAN-based storage.

The Power Series frames contain the built-in PowerVM hypervisor, and the administrator can create a number of logical partitions. These partitions can either be virtual I/O server partitions (VIO), which run a specialized version of AIX and manage I/O, or "client partitions", which in turn run an operating system and business applications. We show nine LPARs in the diagram, three in each frame.


The above figure shows Agents installed on some of the partitions (LPARs 1-4 and 7) in order to collect data. Agents can be installed either on VIO server partitions or on AIX client partitions. In either case, we refer to the agent and collector as AIX agent and AIX collector.

Collecting data about IBM Power Series frames from HMC

The HMC runs embedded software that does not support agents or other third-party additions. However, it allows third-party software such as BMC TrueSight Capacity Optimization to collect data from it remotely by issuing commands over SSH. The HMC provides configuration data and some utilization-related data about the frames and the partitions running on them.

Agents can be configured to periodically collect HMC data. The agent must be installed on any AIX or VIO Server LPAR with network access to the HMC. In the above figure, an Agent is installed on LPARs 1-4 and 7. The agents on LPARs 4 and 7 are each configured to connect to the HMC and collect data from it.

When collecting HMC data, an agent collects HMC data only for its own frame. Thus, each frame needs at least one AIX agent running on it. For more information, see Configuration requirements for collecting data from the HMC.
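As a rough illustration of the remote collection described above, an agent-side script could build an SSH invocation against the HMC. The host name and user below are assumptions (hscroot is the conventional HMC administrative account, and lssyscfg is a standard HMC CLI command); consult the configuration topic above for the commands the collector actually issues:

```python
import subprocess  # used only in the commented-out invocation below

def hmc_command(hmc_host: str, command: str) -> list[str]:
    """Build the ssh invocation for a read-only HMC command."""
    return ["ssh", f"hscroot@{hmc_host}", command]

# Hypothetical example: list the managed systems known to the HMC.
cmd = hmc_command("hmc01.example.com", "lssyscfg -r sys")
# Running it requires network access to a real HMC:
# output = subprocess.run(cmd, capture_output=True, text=True).stdout
```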

Collecting data from AIX running inside a partition

Agents can be installed on any AIX partition. In the above figure, LPAR 1 is an example of such a partition. The BMC TrueSight Capacity Optimization UI refers to such partitions as "instrumented" partitions. The data for Frames 2 and 3 comes almost entirely from the HMC, while the data for Frame 1 comes entirely from instrumented partitions in the frame.

Advantages of full instrumentation

BMC TrueSight Capacity Optimization can extract certain metrics only from instrumented partitions. Also, certain data about shared processor pools can be obtained correctly only when all of the partitions they contain are instrumented. The trade-off is that fully instrumenting the partitions requires installing an agent on each partition, and remembering to install the agent on any new partition. In return for this complexity, you get two advantages:

  • A more complete list of metrics collected from inside each LPAR. These metrics let you manage specific AIX or POWER features such as AME and AMS, and provide accurate utilization information for memory and storage.
  • Process-level data, and the ability to define business-specific workloads in Gateway Server and import them into BMC TrueSight Capacity Optimization.

Non-instrumented partitions

All the data for non-instrumented partitions comes only from the HMC. This data measures allocation of CPU and memory to the partitions, and CPU utilization. Data about shared processor pools is limited to allocated resources.

OOTB views for different levels of instrumentation

For each LPAR, BMC TrueSight Capacity Optimization reports whether it is instrumented or not. For each frame, BMC TrueSight Capacity Optimization also reports whether the frame is fully instrumented or not.

The same OOTB Views for AIX can be used whether a frame is fully instrumented or not.

  • The "Allocated Capacity" view and its associated links show metrics that do not need full instrumentation.
  • The "Used Capacity" view and its associated links show metrics that need full instrumentation.

A view setting, "HMC-only", turns off the columns that require full instrumentation. You can use this setting if your frames rely mostly on HMC data collection.