BMC Network Automation 8.3.01 supports the integration of the Cisco Nexus 1000V switch with Microsoft System Center Virtual Machine Manager (SCVMM).
Before you begin
Ensure that you have the following products installed with BMC Cloud Lifecycle Management 3.1.01:
- BMC Server Automation 8.3.00 hotfix #135
- BMC Network Automation 8.3.01 hotfix #6
To create a container using a Cisco Hyper-V N1KV switch
Create the pod in BMC Network Automation by using the following content:
HyperV-SamplePodBlueprint
<?xml version="1.0" encoding="UTF-8"?>
<bbnaData>
<version>
<build>26</build>
<lastUpgrader>3</lastUpgrader>
<maint>1</maint>
<major>8</major>
<minor>3</minor>
<patch>0</patch>
</version>
<podBlueprint>
<addressPoolBlueprints>
<addressPoolBlueprint>
<linkId>0</linkId>
<name>Management</name>
<natPoolName></natPoolName>
<defaultPublicFlag>false</defaultPublicFlag>
<defaultShareableFlag>false</defaultShareableFlag>
</addressPoolBlueprint>
</addressPoolBlueprints>
<addressRangeBlueprints>
<addressRangeBlueprint>
<defaultPoolMask xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">24</defaultPoolMask>
<defaultPublicFlag>false</defaultPublicFlag>
<name>Data</name>
</addressRangeBlueprint>
</addressRangeBlueprints>
<balancedParamBlueprints/>
<integerPoolBlueprints/>
<legacyVersion>8.3.01</legacyVersion>
<name>Sample Pod Blueprint - Hyper-V N1KV </name>
<nicSegmentBlueprints>
<nicSegmentBlueprint>
<defaultEnabledFlag>true</defaultEnabledFlag>
<lockedFlag>true</lockedFlag>
<name>Management</name>
<networkName>Management</networkName>
<addressPoolName>Management</addressPoolName>
<vlanName>Management</vlanName>
<customerFlag>false</customerFlag>
<managementFlag>true</managementFlag>
</nicSegmentBlueprint>
</nicSegmentBlueprints>
<nodeBlueprints>
<nodeBlueprint xsi:type="podHypervisorSwitchBlueprint" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<balancedParamBlueprints/>
<category>2</category>
<defaultShareableFlag>false</defaultShareableFlag>
<name>Access</name>
<optionalFlag>false</optionalFlag>
<paramBlueprints/>
<role>Access</role>
<portTypeBlueprints>
<portTypeBlueprint>
<name>Management</name>
<nameWithinSwitch>Management</nameWithinSwitch>
<nicSegmentName>Management</nicSegmentName>
</portTypeBlueprint>
</portTypeBlueprints>
</nodeBlueprint>
<nodeBlueprint xsi:type="podHypervisorSwitchBlueprint" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<balancedParamBlueprints/>
<category>2</category>
<defaultShareableFlag>false</defaultShareableFlag>
<name>HyperV</name>
<optionalFlag>false</optionalFlag>
<paramBlueprints>
<paramBlueprint>
<description>Rogue device address</description>
<name>ROGUE_DEVICE_ADDRESS</name>
</paramBlueprint>
</paramBlueprints>
<role>Hyper</role>
<portTypeBlueprints>
<portTypeBlueprint>
<name>Management</name>
<nameWithinSwitch>Management</nameWithinSwitch>
<nicSegmentName>Management</nicSegmentName>
</portTypeBlueprint>
</portTypeBlueprints>
</nodeBlueprint>
</nodeBlueprints>
<pairBlueprints/>
<paramBlueprints>
<paramBlueprint>
<description>Pod HyperV N1KV Uplink</description>
<name>Pod Uplink</name>
</paramBlueprint>
</paramBlueprints>
<vlanBlueprints>
<vlanBlueprint>
<vlanName>Management</vlanName>
<vlanPoolName>Management</vlanPoolName>
</vlanBlueprint>
</vlanBlueprints>
<vlanPoolBlueprints>
<vlanPoolBlueprint>
<defaultEndNum>80</defaultEndNum>
<defaultStartNum>21</defaultStartNum>
<name>Data</name>
</vlanPoolBlueprint>
<vlanPoolBlueprint>
<defaultEndNum>11</defaultEndNum>
<defaultStartNum>11</defaultStartNum>
<name>Management</name>
</vlanPoolBlueprint>
</vlanPoolBlueprints>
</podBlueprint>
</bbnaData>
Use the following templates to configure the Cisco Hyper-V N1KV switch; the container blueprint later in this topic references them through the SampleHyperVConfigureAccess and SampleHyperVUnconfigureAccess template groups.
The template creates the following components on the switch (an IP pool, a logical network, a network segment pool, and a network segment). In the following examples, container.name = myContainer-HyperV and container.vlans[AccessA] = 551. An illustrative template sketch follows the table.
| Name pattern | Example |
| --- | --- |
| ${container.name}-Vlan${container.vlans[AccessA]}-IpPool | myContainer-HyperV-Vlan551-IpPool |
| ${container.name}-Vlan${container.vlans[AccessA]} | myContainer-HyperV-Vlan551 |
| ${container.name}-Vlan${container.vlans[AccessA]}-NetSegmentPool | myContainer-HyperV-Vlan551-NetSegmentPool |
| ${container.name}-Vlan${container.vlans[AccessA]}-NetSegment | myContainer-HyperV-Vlan551-NetSegment |
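The shipped template content is not reproduced here. Purely as an illustration of how those names are typically created, a configure merge template for the Nexus 1000V for Hyper-V could use NSM commands along the following lines. This is a hypothetical sketch, not the SampleHyperVConfigureAccess template; the exact commands depend on your template design and VSM release, and the ${...} substitution parameters are the ones shown in the table above.
! Hypothetical sketch of a configure merge template (not the shipped template)
nsm logical network ${container.name}-Vlan${container.vlans[AccessA]}
nsm network segment pool ${container.name}-Vlan${container.vlans[AccessA]}-NetSegmentPool
  member-of logical network ${container.name}-Vlan${container.vlans[AccessA]}
nsm ip pool template ${container.name}-Vlan${container.vlans[AccessA]}-IpPool
  ! IP address range, network, and default-router statements for the AccessA pool go here
nsm network segment ${container.name}-Vlan${container.vlans[AccessA]}-NetSegment
  member-of network segment pool ${container.name}-Vlan${container.vlans[AccessA]}-NetSegmentPool
  switchport access vlan ${container.vlans[AccessA]}
  ip pool import template ${container.name}-Vlan${container.vlans[AccessA]}-IpPool
  publish network segment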
Create the container in BMC Cloud Lifecycle Management by using the following content:
HyperV-SampleContainerBlueprint
<?xml version="1.0" encoding="UTF-8"?>
<bbnaData>
<version>
<build>26</build>
<lastUpgrader>3</lastUpgrader>
<maint>1</maint>
<major>8</major>
<minor>3</minor>
<patch>0</patch>
</version>
<containerBlueprint>
<addressBlueprints/>
<addressPoolBlueprints>
<addressPoolBlueprint>
<linkId>0</linkId>
<name>AccessA</name>
<rangeBlueprintName>Data</rangeBlueprintName>
</addressPoolBlueprint>
</addressPoolBlueprints>
<addressSpaceBlueprints/>
<externalNetworkSegmentBlueprints>
<externalNetworkSegmentBlueprint>
<defaultEnabledFlag>true</defaultEnabledFlag>
<lockedFlag>false</lockedFlag>
<name>External</name>
<networkName>External</networkName>
<tag>Purpose[External]</tag>
<defaultNetworkAddress xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">0.0.0.0</defaultNetworkAddress>
<defaultNetworkMask xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">0</defaultNetworkMask>
</externalNetworkSegmentBlueprint>
</externalNetworkSegmentBlueprints>
<integerBlueprints/>
<legacyVersion>8.3.01</legacyVersion>
<name>Sample Container Blueprint - HyperV N1KV</name>
<networkPathBlueprints>
<networkPathBlueprint>
<endpoint1Name>External</endpoint1Name>
<endpoint2Name>AccessA</endpoint2Name>
<name>External-AccessA</name>
<servicedNodeNames/>
</networkPathBlueprint>
</networkPathBlueprints>
<nicSegmentBlueprints>
<nicSegmentBlueprint>
<defaultEnabledFlag>true</defaultEnabledFlag>
<lockedFlag>true</lockedFlag>
<name>AccessA</name>
<networkName>AccessA</networkName>
<tag>Purpose[Web]</tag>
<addressPoolName>AccessA</addressPoolName>
<vlanName>AccessA</vlanName>
<customerFlag>true</customerFlag>
<managementFlag>false</managementFlag>
</nicSegmentBlueprint>
</nicSegmentBlueprints>
<nodeBlueprints>
<nodeBlueprint xsi:type="containerHypervisorSwitchBlueprint" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<addressBlueprints>
<addressBlueprint>
<addressName>SecondIpAddressInPool</addressName>
<gatewayFlag>false</gatewayFlag>
<poolPosition>2</poolPosition>
<addressPoolName>AccessA</addressPoolName>
</addressBlueprint>
</addressBlueprints>
<category>2</category>
<configureActionInfoBlueprints>
<configureActionInfoBlueprint xsi:type="mergeActionInfoBlueprint">
<templateGroups>
<item>SampleHyperVConfigureAccess</item>
</templateGroups>
</configureActionInfoBlueprint>
</configureActionInfoBlueprints>
<dummyHostFlag>false</dummyHostFlag>
<name>Access</name>
<numVrfs>0</numVrfs>
<role>Access</role>
<unconfigureActionInfoBlueprints>
<unconfigureActionInfoBlueprint xsi:type="mergeActionInfoBlueprint">
<templateGroups>
<item>SampleHyperVUnconfigureAccess</item>
</templateGroups>
</unconfigureActionInfoBlueprint>
</unconfigureActionInfoBlueprints>
<portTypeBlueprints>
<portTypeBlueprint>
<name>AccessA</name>
<nameWithinSwitch>${container.node.portTypes[AccessA].vlan}</nameWithinSwitch>
<nicSegmentName>AccessA</nicSegmentName>
</portTypeBlueprint>
</portTypeBlueprints>
</nodeBlueprint>
<nodeBlueprint xsi:type="containerHypervisorSwitchBlueprint" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<addressBlueprints/>
<category>2</category>
<configureActionInfoBlueprints/>
<dummyHostFlag>false</dummyHostFlag>
<name>Hyper</name>
<numVrfs>0</numVrfs>
<role>Hyper</role>
<unconfigureActionInfoBlueprints/>
<portTypeBlueprints>
<portTypeBlueprint>
<name>AccessA</name>
<nameWithinSwitch>Nexus1000V</nameWithinSwitch>
<nicSegmentName>AccessA</nicSegmentName>
</portTypeBlueprint>
</portTypeBlueprints>
</nodeBlueprint>
</nodeBlueprints>
<pairBlueprints/>
<revisionNum>0</revisionNum>
<vipSegmentBlueprints/>
<vlanBlueprints>
<vlanBlueprint>
<vlanName>AccessA</vlanName>
<vlanPoolName>Data</vlanPoolName>
</vlanBlueprint>
</vlanBlueprints>
<vrfIdBlueprints/>
<zoneBlueprints/>
</containerBlueprint>
</bbnaData>
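The configure and unconfigure actions in this blueprint reference the SampleHyperVConfigureAccess and SampleHyperVUnconfigureAccess template groups. As a hypothetical sketch only (not the shipped template), an unconfigure template would typically remove the NSM objects that the configure template created, in reverse order, for example:
! Hypothetical sketch of an unconfigure merge template (not the shipped template)
no nsm network segment ${container.name}-Vlan${container.vlans[AccessA]}-NetSegment
no nsm ip pool template ${container.name}-Vlan${container.vlans[AccessA]}-IpPool
no nsm network segment pool ${container.name}-Vlan${container.vlans[AccessA]}-NetSegmentPool
no nsm logical network ${container.name}-Vlan${container.vlans[AccessA]}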
To view the logical network created during container provisioning on SCVMM
Refresh the virtual switch extension manager for the N1KV switch to see the logical network on SCVMM by performing the following steps:
- Use Remote Desktop Connection (RDC) to connect to the SCVMM server.
- Connect the SCVMM Console to localhost:8100.
- In the lower-left navigation pane, click Fabric (or press Ctrl+F).
- In the upper-left navigation pane, go to Fabric > Networking > Switch Extension Managers, click the host name specified for the Nexus1000V switch, and then click Refresh on the ribbon (or right-click and select Refresh).
- Go to Fabric > Networking > Logical Networks and verify that myContainer-HyperV-Vlan551, the logical network created during container provisioning, is displayed.
- Select the logical network, right-click, and then select Properties.
- Click Network Site.
You should see the network segment pool listed as a network site, with an association to the VLANs and subnets. You can also confirm these objects from the switch side, as sketched below.
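If you have CLI access to the Nexus 1000V VSM, you can list the NSM objects directly on the switch. The following show commands are a sketch based on the Nexus 1000V for Hyper-V NSM command set; verify the exact syntax against your VSM release.
show nsm logical network
show nsm network segment pool
show nsm ip pool template
show nsm network segment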
Note
BMC Cloud Lifecycle Management does not support a standard switch. Therefore, when you create a virtual switch for the management network, always point it to a logical switch.
To create a VM network from SCVMM
- In the lower-left navigation pane, click VMs and Services (or press Ctrl+M).
- Click Create VM Network on the ribbon.
- In the wizard that opens, perform the following actions:
- Specify a name for the VM network, for example, myContainer-HyperV.
- Select the logical network that was created during container creation, for example, myContainer-HyperV-Vlan551.
- Click Next.
- Set Isolation to Specify an externally supplied VM Network.
By default, External VM network is set to myContainer-HyperV-Vlan551-NetSegment and is the only option.
- Click Next, and then click Finish.
- Close the pop-up window and verify that the VM network is displayed in the list.
Related topics
- Creating network pod blueprints
- Creating network container blueprints
- Onboarding Microsoft Hyper-V resources