Working with multiple vCenter deployments
The following sections offer best practices and procedures for environments that use multiple VMware vCenters:
Best practices for multiple vCenter deployments
Consider the following best practices if you have multiple VMware vCenters in your deployment (for example, a multi-site deployment):
- Make sure you have the same VM template names available on all of the vCenters. You can then create Virtual Guest Packages (VGPs) from one vCenter and still provision VMs to other vCenters.
- To minimize template overhead, keep templates as minimal as possible: for example, only the operating system, optionally with an enterprise software stack such as an antivirus program.
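Before creating VGPs, it helps to confirm the first best practice programmatically. The following sketch (vCenter and template names are hypothetical; in practice you would export each inventory from vCenter) reports templates missing from any vCenter:

```python
# Sketch: verify that every VM template name exists on every vCenter.
# The inventories below are illustrative placeholders, not real data.
def missing_templates(inventories):
    """Return {vcenter: sorted template names that vCenter is missing}."""
    all_templates = set().union(*inventories.values())
    return {vc: sorted(all_templates - names)
            for vc, names in inventories.items()
            if all_templates - names}

inventories = {
    "vcenter-east": {"rhel7-minimal", "win2012-base"},
    "vcenter-west": {"rhel7-minimal"},
}
print(missing_templates(inventories))
# {'vcenter-west': ['win2012-base']}
```

Any vCenter listed in the output needs the named templates copied to it before cross-vCenter provisioning will work from a single VGP.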
How to provision servers on multiple vCenters
To provision servers on multiple vCenters by using BMC Cloud Lifecycle Management, complete the following steps in the identified component products.
Required BMC Network Automation pod configuration
To provision VMs on multiple vCenter servers, you can create the BMC Network Automation Pod and network container in any of the following ways:
- Using a common Pod and common network container.
- Using a common Pod and separate network containers for each vCenter server.
- Using multiple Pods and separate network containers for each vCenter server.
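The three options differ only in how vCenters map to Pods and containers. A minimal sketch of that mapping (all Pod, container, and vCenter names here are hypothetical, purely for illustration):

```python
# Hypothetical vCenter -> (pod, container) mappings for the three options.
layouts = {
    "common_pod_common_container": {
        "vcenter1": ("pod-shared", "container-shared"),
        "vcenter2": ("pod-shared", "container-shared"),
    },
    "common_pod_separate_containers": {
        "vcenter1": ("pod-shared", "container-vc1"),
        "vcenter2": ("pod-shared", "container-vc2"),
    },
    "separate_pods_separate_containers": {
        "vcenter1": ("pod-vc1", "container-vc1"),
        "vcenter2": ("pod-vc2", "container-vc2"),
    },
}

def counts(layout):
    """Number of distinct Pods and containers a layout requires."""
    pods = {pod for pod, _ in layout.values()}
    containers = {c for _, c in layout.values()}
    return len(pods), len(containers)

for name, layout in layouts.items():
    print(name, counts(layout))
```

The first option minimizes the objects you maintain; the last isolates each vCenter at the cost of duplicating Pod and container definitions.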
You can create a new Pod with nodes from multiple vCenter servers. If you already have a Pod for provisioning servers on a vCenter server, then you can add new vCenter server switches to the same Pod using the BMC Network Automation UI, as described in the following steps.
- Create devices for the access switches from all vCenters, with their respective URLs. For example, if you are using vSwitches, add a switch for each ESX host from all vCenters. The following example shows vSwitch URLs from two different vCenter servers.
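For example (the device names and URLs below are hypothetical placeholders, not addresses from a real deployment; use the device URL format that your BMC Network Automation version expects):

```
Device: esx01-vswitch    URL: https://vcenter1.example.com/sdk
Device: esx02-vswitch    URL: https://vcenter1.example.com/sdk
Device: esx03-vswitch    URL: https://vcenter2.example.com/sdk
```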
- Create or update a Pod to include the switches from the multiple vCenter servers (created in the previous step).
To create a new Pod for multiple vCenter switches, use a Pod blueprint that has nodes for switches from all of the vCenters. See the BMC Network Automation online technical documentation. (This step is the same as adding multiple vSwitch/N1Kv/DVS switches to provision on multiple clusters of one vCenter server.)
To use an existing Pod, add switch nodes from the new vCenter servers to the Pod by using the BMC Network Automation UI. See Editing a pod in the BMC Network Automation online technical documentation.
- Create a network container blueprint that has nodes for all of the switches from all of the vCenter servers. (This step is the same as creating a container blueprint with multiple access switch nodes from one vCenter server; the only difference is the switch URLs in the devices.) To use an existing network container, see step 5 in Required BMC Cloud Lifecycle Management configuration.
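As a sanity check on the steps above, the following sketch (device names and vCenter assignments are hypothetical) verifies that a container blueprint references at least one switch node from every vCenter in the Pod:

```python
# Sketch: check that a container blueprint covers switches from every vCenter.
switch_vcenter = {            # device name -> vCenter it belongs to
    "esx01-vswitch": "vcenter1",
    "esx02-vswitch": "vcenter1",
    "esx03-vswitch": "vcenter2",
}

def uncovered_vcenters(blueprint_nodes, switch_vcenter):
    """vCenters that have no switch node in the container blueprint."""
    covered = {switch_vcenter[n] for n in blueprint_nodes
               if n in switch_vcenter}
    return sorted(set(switch_vcenter.values()) - covered)

print(uncovered_vcenters(["esx01-vswitch"], switch_vcenter))
# ['vcenter2']
print(uncovered_vcenters(["esx01-vswitch", "esx03-vswitch"], switch_vcenter))
# []
```

An empty result means the blueprint can serve provisioning requests targeting any of the vCenters; a non-empty result names the vCenters that still need switch nodes added.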
Required BMC Server Automation configuration
- Enroll all vCenter servers in BMC Server Automation. See Adding the vCenter server to BMC Server Automation.
- Create a common VGP. See Creating a VGP in BMC Server Automation for a vCenter environment.
- Ensure that you have the same VM template names available on all of the vCenters. You can then create VGPs from one vCenter and still be able to provision VMs to other vCenters.
Required BMC Cloud Lifecycle Management configuration
- Do one of the following in the BMC Cloud Lifecycle Management Administration console:
- Onboard the clusters/hosts from all vCenters. See Onboarding and offboarding compute resources.
- Create compute pools and datastore pools. See Creating resource pools.
- Create a network container using a container blueprint that has nodes added for all of the access switches from all vCenter servers. See Creating network containers.
- You can also reprovision an existing network container and use it to provision on a newly added vCenter server. To use an existing network container, complete the following steps:
- Create a new revision of a container blueprint with additional nodes for switches from the new vCenter server. See Creating network container blueprints.
- Onboard the new revision of the container blueprint. See Importing network container blueprints.
- Re-provision the network container. See Reprovisioning network containers.
- After the reprovision operation completes, edit the network container and submit a modify operation with any changes. See Editing network containers. This action activates the new switches on the existing network container.
- Map compute pools and tenants to the network container. See Mapping resource pools to network containers.
- Create a service blueprint and service offering. See Building service blueprints and Creating a service offering. (Optionally, you can use tags and policies to orchestrate provisioning requests to the vCenter servers.)
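The optional tag-based orchestration mentioned above can be pictured as a simple dispatch table. This sketch is only an illustration of the idea; the tag names, pool names, and matching logic are invented here, while in BMC Cloud Lifecycle Management the matching is performed by placement policies:

```python
# Sketch: route a provisioning request to a compute pool by tags.
# Policies pair required tags with a target compute pool (names invented).
policies = [
    ({"site": "east"}, "compute-pool-vcenter1"),
    ({"site": "west"}, "compute-pool-vcenter2"),
]

def place(request_tags, policies, default=None):
    """Return the first compute pool whose required tags all match."""
    for required, pool in policies:
        if all(request_tags.get(k) == v for k, v in required.items()):
            return pool
    return default

print(place({"site": "west", "tier": "gold"}, policies))
# compute-pool-vcenter2
```

Because each compute pool maps to clusters from a specific vCenter, matching on tags like this is what steers a service request to the intended vCenter server.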