
Working with multiple vCenter deployments

The following sections offer best practices and procedures for environments that use multiple VMware vCenters:

Best practices for multiple vCenter deployments

Consider the following best practices if you have multiple VMware vCenters in your deployment (a multi-site deployment, for example):

  • Make sure you have the same VM template names available on all of the vCenters. You can then create Virtual Guest Packages (VGPs) from one vCenter and still provision VMs to other vCenters. (A quick way to verify this is sketched after this list.)
  • To minimize template overhead, keep templates as minimal as possible: for example, only the operating system, optionally with an enterprise software stack such as an antivirus program.
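The first best practice lends itself to a quick automated check. The following is a minimal sketch, not part of any BMC product, that uses the pyVmomi library to list the VM template names on each vCenter and report any template that is not available everywhere. The vCenter hostnames and credentials are placeholders for your environment.

  # Sketch: verify that the same VM template names exist on all vCenters.
  # Assumes pyVmomi is installed; hosts and credentials are placeholders.
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  VCENTERS = [
      {"host": "vCenterServer1", "user": "administrator@vsphere.local", "pwd": "changeme"},
      {"host": "vCenterServer2", "user": "administrator@vsphere.local", "pwd": "changeme"},
  ]

  def template_names(cfg):
      """Return the set of VM template names on one vCenter."""
      ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
      si = SmartConnect(host=cfg["host"], user=cfg["user"], pwd=cfg["pwd"], sslContext=ctx)
      try:
          content = si.RetrieveContent()
          view = content.viewManager.CreateContainerView(
              content.rootFolder, [vim.VirtualMachine], True)
          return {vm.name for vm in view.view if vm.config and vm.config.template}
      finally:
          Disconnect(si)

  names_by_vcenter = {cfg["host"]: template_names(cfg) for cfg in VCENTERS}
  common = set.intersection(*names_by_vcenter.values())
  for host, names in names_by_vcenter.items():
      extra = names - common
      if extra:
          print(f"{host}: templates missing from other vCenters: {sorted(extra)}")
  print(f"Templates available on all vCenters: {sorted(common)}")

If the final line lists every template you provision from, VGPs created against one vCenter can be expected to resolve the same template name on the others.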

How to provision servers on multiple vCenters

To provision servers on multiple vCenters using BMC Cloud Lifecycle Management, complete the following steps in the component products identified below.

Required BMC Network Automation pod configuration

To provision VMs on multiple vCenter servers, you can create the BMC Network Automation pod and network container in any of the following ways:

  • Using a common pod and a common network container.
  • Using a common pod and separate network containers for each vCenter server.
  • Using multiple pods and separate network containers for each vCenter server.

You can create a new pod with nodes from multiple vCenter servers. If you already have a pod for provisioning servers on a vCenter server, you can add the new vCenter server switches to that pod using the BMC Network Automation UI, as described in the following steps.

  1. Create devices for access switches from all vCenters, with the respective URLs. For example, if you are using vSwitches, add a switch for each ESX server from all vCenters. The following example shows vSwitch URLs from two different vCenter servers (a sketch that generates such a list appears after this procedure).
    vSwitch1@esxserver1@https://vCenterServer1
    vSwitch1@esxserver2@https://vCenterServer1
    vSwitch1@esxserver1@https://vCenterServer2
    vSwitch1@esxserver2@https://vCenterServer2
  2. Create or update a pod to include the switches from the multiple vCenter servers (created in the previous step).
    1. To create a new pod for multiple vCenter switches, use a pod blueprint that has nodes for switches from all of the vCenters. See Creating a pod from a pod blueprint in the BMC Network Automation online technical documentation. (This step is the same as adding multiple vSwitch/N1Kv/DVS switches to provision on multiple clusters of one vCenter server.)
    2. To use an existing pod, add switch nodes from the new vCenter servers to the pod using the BMC Network Automation UI. See Editing a pod in the BMC Network Automation online technical documentation.
  3. Create a network container blueprint that has nodes for all of the switches from all of the vCenter servers. (This step is the same as creating a container blueprint with multiple access switch nodes from one vCenter server; the only difference is the switch URLs in the devices.) To use an existing network container, see step 4 in Required BMC Cloud Lifecycle Management configuration.
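Step 1 above follows a vSwitch@ESXhost@vCenterURL naming pattern for the switch devices. As a convenience, the short Python sketch below (with hypothetical ESX host and vCenter names) generates the full list of device names so that no ESX host on any vCenter is missed.

  # Sketch: generate device names for the vSwitch access switches on every
  # ESX host of every vCenter, following the vSwitch@ESXhost@vCenterURL
  # pattern from step 1. Host and vCenter names are placeholders.
  VCENTERS = {
      "https://vCenterServer1": ["esxserver1", "esxserver2"],
      "https://vCenterServer2": ["esxserver1", "esxserver2"],
  }
  VSWITCH = "vSwitch1"  # assumed standard vSwitch name on each host

  for vcenter_url, esx_hosts in VCENTERS.items():
      for esx in esx_hosts:
          print(f"{VSWITCH}@{esx}@{vcenter_url}")
  # Prints, for example: vSwitch1@esxserver1@https://vCenterServer1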

Required BMC Server Automation configuration

  1. Enroll all vCenter servers into BMC Server Automation. See Adding-the-vCenter-server-to-BMC-Server-Automation.
  2. Create a common VGP. See Creating-a-VGP-in-BMC-Server-Automation-for-a-vCenter-environment.
  3. Ensure that you have the same VM template names available on all of the vCenters. You can then create VGPs from one vCenter and still be able to provision VMs to other vCenters.

Required BMC Cloud Lifecycle Management configuration

In the BMC Cloud Lifecycle Management Administration console, complete the following steps:

  1. Onboard the clusters/hosts from all vCenters. See Onboarding-and-offboarding-compute-resources.
  2. Create compute pools and datastore pools. See Creating-resource-pools.
  3. Create a network container using a container blueprint that has nodes added for all of the access switches from all vCenter servers. See Creating-network-containers.
  4. Alternatively, you can re-provision an existing network container and use it to provision on a newly added vCenter server. To use an existing network container, complete the following steps:
    1. Create a new revision of the container blueprint with additional nodes for switches from the new vCenter server. See Creating-network-container-blueprints.
    2. Onboard the new revision of the container blueprint. See Importing-network-container-blueprints.
    3. Re-provision the network container. See Reprovisioning-network-containers.
    4. After the re-provision operation completes, edit the network container and submit a modify operation with any changes. See Editing-network-containers. This action activates the new switches on the existing network container.
  5. Map compute pools and tenants to the network container. See Mapping-resource-pools-to-network-containers.
  6. Create a service blueprint and service offering. See Building-service-blueprints and Creating-a-service-offering. (Optionally, you can use tags and policies to orchestrate provisioning requests to specific vCenter servers; the sketch after this procedure illustrates the idea.)
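The tags-and-policies note in step 6 can be illustrated with a simple placement rule. The following sketch is purely hypothetical and is not a BMC Cloud Lifecycle Management API; it only shows the general idea of routing a provisioning request to a compute pool (and therefore a vCenter) whose tags cover the tags on the request.

  # Hypothetical sketch of tag-based placement (not a BMC CLM API): pick the
  # first compute pool whose tags include all of the request's tags.
  from dataclasses import dataclass, field

  @dataclass
  class ComputePool:
      name: str
      vcenter: str
      tags: set = field(default_factory=set)

  POOLS = [  # pool, vCenter, and tag names are placeholders
      ComputePool("gold-pool", "vCenterServer1", {"tier:gold", "site:east"}),
      ComputePool("silver-pool", "vCenterServer2", {"tier:silver", "site:west"}),
  ]

  def place(request_tags):
      """Return the first pool whose tags satisfy the request, or None."""
      for pool in POOLS:
          if request_tags <= pool.tags:  # subset test: all requested tags present
              return pool
      return None

  chosen = place({"tier:gold"})
  print(f"Provision on {chosen.vcenter} via pool {chosen.name}" if chosen else "No matching pool")

In BMC Cloud Lifecycle Management itself, this decision is made through the tags and policies mentioned in step 6 rather than by custom code.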
