Network issues

This section contains troubleshooting information for network pods and containers managed with BMC Network Automation.

Service Designer does not load

If the Service Designer does not load and the loading icon continues to spin, make sure that the firewall is open for ports 8080 and 9000. For more information about ports, see Port Mappings.

Troubleshooting an error about required fields for an AWS logical datacenter

The following BMC Communities video (4:08) describes how to troubleshoot an error about required fields for an AWS logical datacenter.

https://youtu.be/GsGgiEv9zS0

Troubleshooting compatibility issues between a pod and a network container blueprint

The following BMC Communities video (2:46) describes how to troubleshoot compatibility issues between a BMC Network Automation pod and a network container blueprint. A WSDL file is used in the process.

https://youtu.be/ydPHgZPr2OU

Job fails to configure or unconfigure a container

The main problem that requires troubleshooting in Pod and Container Management (PCM) testing is the failure of a job to configure or unconfigure a container. To troubleshoot job failures, go to the job list page in the BMC Network Automation user interface, select the failed job, inspect the actions within it that failed, and examine the contents of the ad hoc template used by each Deploy to Active action.

Jobs to configure or unconfigure containers are system-generated jobs, not user-generated jobs, so you might need to change your job list filter to see them listed. The name of the container being configured or unconfigured is embedded in the Description dynamic field of the job. You can enable Description as a filterable field for jobs to make them easier to find. The view page for a container also presents the IDs of the jobs last used to configure or unconfigure it.

If the job to configure a container fails, BMC Network Automation launches a job to automatically unconfigure the container. If the job to unconfigure a container is successful, the container is automatically deleted, which frees its resources back to the pod and to Internet Protocol Address Management (IPAM). If the job to unconfigure a container fails, the container is not deleted. You must manually delete the container from the Container listing page and manually unconfigure the devices which BMC Network Automation failed to unconfigure.

Back to top

Container creation fails with a CapacityExceededException

One type of failure possible when creating a container on a pod is a CapacityExceededException caused by an address pool that is too small. Each address pool that a container reserves from an address range in the pod must be large enough for the container to acquire all the addresses from it that the container blueprint requires. The number of addresses that can be acquired from an address pool is two less than the size of the pool as defined by its mask, because the first and last addresses within the pool cannot be used: the first address is all zeros and the last address is all ones within the subnet in question, both of which are reserved addresses used for broadcasting within the subnet. For example, if you define an address range with a mask for its pools of 255.255.255.248, each pool is of size eight, which means that a given pool can supply a maximum of six addresses to a given container before being exhausted.
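
The following minimal Python sketch illustrates the arithmetic (the 10.1.1.0 base address is just an example, not a value from your environment):

import ipaddress

# The pool mask from the address range definition determines the pool size.
# The all-zeros (network) and all-ones (broadcast) addresses are unusable.
pool = ipaddress.ip_network("10.1.1.0/255.255.255.248")

print("Pool size:", pool.num_addresses)              # 8
print("Usable addresses:", pool.num_addresses - 2)   # 6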

Back to top

Using simulated mode to test a container blueprint

When you are first testing a new container blueprint, it is helpful to use simulated mode, so that you do not have to worry about device state and the correctness of your template commands while you work the kinks out of your substitution parameters. To use simulated mode, perform the following steps:

  1. Set the simulateConnection property to true in the global.properties.imported file.
  2. Restart BMC Network Automation.
  3. In the BCAN_DATA\devices directory, create *.running, *.startup, and *.image text files for each device in your pod, where the base name of the file corresponds to the address of the device.

    The *.running and *.startup files hold the contents of the running and startup configurations for a device. The *.image file holds information about the OS image present on the device. Each of these files can just contain a line that says dummy for the purposes of PCM testing.
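
For example, a script along the following lines can generate the dummy files for step 3. This is a sketch only: the BCAN_DATA location and device addresses below are placeholders for your environment.

from pathlib import Path

# Placeholders: substitute your BCAN_DATA location and pod device addresses.
BCAN_DATA = Path(r"C:\BCA-Networks-Data")
device_addresses = ["10.20.30.1", "10.20.30.2"]

devices_dir = BCAN_DATA / "devices"
devices_dir.mkdir(parents=True, exist_ok=True)
for address in device_addresses:
    for suffix in ("running", "startup", "image"):
        # A single "dummy" line is sufficient for PCM testing, as noted above.
        (devices_dir / f"{address}.{suffix}").write_text("dummy\n")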

If your container blueprint contains fault-tolerant Firewall (FW) or Loadbalancer (LB) hosts, you can simulate inspect-fault-host custom action results by populating a BCAN_DATA\inspectFaultHost.properties file. See the following example for more details:
Example inspectFaultHost.properties file
#The following properties are read from inspectFaultHost.properties to make the
#Inspect fault host custom action return specified property values for fwsm or ace
#device.
#
#1. firewall.host1
#2. firewall.host2
#
#The above 2 properties take FWSM devices which participate in a pair. Further
#host1 / host2 can allow multiple address names comma separated, e.g. FirewallA1,FirewallA2
#This is the case in large gold container where firewall host pairs are assigned in
#round robin.
#
#3. loadbalancer.host1
#4. loadbalancer.host2
#
#The above 2 properties take ACE devices which participate in a pair. Further host1 / host2
#can allow multiple address names comma separated, e.g. LoadBalancerHost1,LoadBalancerHost2
#This is the case in large gold container where load balancer host pairs are assigned in
#round robin.
#
#5. firewall.host1.adminFailoverGroup
#
#The above property controls the failover group returned by FWSM host (1/2/null). Combination
#of failover group and community1 / community2 active flag value will be used to determine
#adminActiveFlag property of firewall host.
#
#6. firewall.host1.faultCommunity1ActiveFlag
#
#The above property controls the faultCommunity1ActiveFlag for FWSM host1 to be returned by
#simulated custom action, for example (Active / Standby)
#
#7. firewall.host1.faultCommunity2ActiveFlag
#
#The above property controls the faultCommunity2ActiveFlag for FWSM host1 to be returned by
#simulated custom action, for example (Active / Standby)
#
#8. firewall.host2.adminFailoverGroup
#
#The above property controls the failover group returned by FWSM host (1/2/null). Combination
#of failover group and community1 / community2 active flag value will be used to determine
#adminActiveFlag property of firewall host.
#
#9. firewall.host2.faultCommunity1ActiveFlag
#
#The above property controls the faultCommunity1ActiveFlag for FWSM host2 to be returned by
#simulated custom action, for example (Active / Standby)
#
#10. firewall.host2.faultCommunity2ActiveFlag
#
#The above property controls the faultCommunity2ActiveFlag for FWSM host2 to be returned by
#simulated custom action, for example (Active / Standby)
#
#11. loadbalancer.host1.adminActiveFlag=true
#
#The above property controls the adminActiveFlag for ACE host1 to be returned by simulated
#custom action, for example (true / false).
#
#12. loadbalancer.host2.adminActiveFlag=false
#
#The above property controls the adminActiveFlag for ACE host2 to be returned by simulated
#custom action, for example (true / false).
#
#Suppose Pod has following ACE fault pairs LoadBalancerHost1, LoadBalancerHost2
#and following FWSM pairs FirewallHostA1 and FirewallHostA2 and we want to make the pair nodes look like
#below
# FirewallHostA1 = Admin
# FirewallHostA1 = fault community 1 active
# FirewallHostA2 = fault community 2 active
# LoadBalancerHost1 = Admin
# the 12 properties would look like below
firewall.host1=FirewallHostA1
firewall.host2=FirewallHostA2
loadbalancer.host1=LoadBalancerHost1
loadbalancer.host2=LoadBalancerHost2
firewall.host1.adminFailoverGroup=1
firewall.host1.faultCommunity1ActiveFlag=Active
firewall.host1.faultCommunity2ActiveFlag=Standby
firewall.host2.adminFailoverGroup=
firewall.host2.faultCommunity1ActiveFlag=Standby
firewall.host2.faultCommunity2ActiveFlag=Active
loadbalancer.host1.adminActiveFlag=true
loadbalancer.host2.adminActiveFlag=false 

When you are ready to move on to testing container creation on real devices, one recommended way to test the connectivity of network paths within a container is to connect to each device in the container and verify that you can ping the VLAN interface address of each of the other devices that are supposed to be connected to it within the container.

Remember to use the view pages to inspect the state of your pod and container after you create them, to make sure that resources are in the state you expect. You can also inspect the UI of the IPAM system to make sure that its resources are in the expected state.

Back to top

Troubleshooting templates used during container configuration and unconfiguration

When you are troubleshooting or developing templates used during container configuration and unconfiguration, set the vdcCommitContainerActions property to false in the global.properties file (it is set to true by default). Setting it to false prevents BMC Network Automation from committing any configuration changes to the devices involved, so any configuration changes you make can be rolled back by restarting the devices.

Back to top

Troubleshooting "Unable to reach CMDB. Please verify the credentials" error received during POD configuration

When creating a pod in BMC Network Automation, you might receive an "Unable to reach CMDB. Please verify the credentials" error. If this occurs, perform the following steps in order until you reach an action that resolves the problem:

  1. Check whether the Site has been created in the BMC.CORE:BMC_PhysicalLocation form on the BMC Remedy AR System server. The error message can be displayed if the record does not exist in this form.
    To create the record, follow the steps described in Creating a physical location for a pod.
  2. If the Site has been created in the BMC_PhysicalLocation form and the problem persists, verify that Tomcat is running properly:
    1. Enter the URL http://AtriumWebServicesServer:portNumber in a browser.
      If Tomcat is running properly, an Apache screen is displayed.
    2. If the Apache screen is not displayed, restart Tomcat. If the problem persists, check for errors in the Tomcat logs, typically located in the AtriumWebServiceDirectory\BMC Software\Atrium Web Registry\shared\tomcat\logs directory.
  3. If Tomcat is working, verify that the Atrium Web Services are working properly:
    1. Enter the Web Services URL http://AtriumWebServicesServer:portNumber/cmdbws/server/cmdbws in a browser. (For a scripted version of this check, see the sketch after this procedure.)
      The result should be "Please enable REST support in WEB-INF/conf/axis2.xml and WEB-INF/web.xml".
    2. If nothing is returned, the issue lies with Atrium Web Services. Reviewing the Atrium Web Services logs (typically located in the AtriumWebServiceDirectory\BMC Software\Atrium Web Registry\Logs folder) helps to narrow down the issue.
    3. If the page loads correctly, verify the settings in the BMC Network Automation console: log on to the console, navigate to Admin > System Admin > System Parameters > External Integrations, and verify the "Enable CMDB Integration" setting.
  4. Verify the settings on the BMC Network Automation Server for the BMC Atrium CMDB Integration:
    1. "Enable CMDB Integration" should be selected under the External Integration section. BMC Cloud Lifecycle Management use cases rely on the "Enable CMDB Integration" parameter. Keep the "Web Services Registry Integration" parameter disabled.
    2. Verify that the Web Service Endpoint URL is correct. The format is http://AtriumWebServicesServer:portNumber/cmdbws/server/cmdbws.
    3. Enter the Username and Password. Ensure you can log in to the BMC Remedy AR System server with the same username and password. BMC recommends that you use "Demo" credentials for this step.
  5. If the credentials are correct and the error is still thrown, verify the cmdbws.properties file, typically located in the AtriumWebServiceDirectory\BMC Software\Atrium Web Registry\wsc\cmdbws folder.
    1. Verify that the hostname in the file is the hostname for the BMC Remedy AR System Server.
      The hostname should not be localhost, unless Atrium Web Services Registry Component is installed on the BMC Remedy AR System Server.
    2. If you change the hostname in the file, restart the Tomcat service on the server afterward.
      If the Atrium Web Service is running on bundled Tomcat, you can use the Tomcat startup and shutdown scripts that are present in the AtriumWebServiceDirectory/shared/tomcat/bin folder for shutting down and starting Tomcat services.
  6. If the BMC Remedy AR System Server hostname is correct in the cmdbws.properties file, check whether the webapps folder (typically located at AtriumWebServiceDirectory\BMC Software\Atrium Web Registry\wsc\webapps) contains two folders, namely atriumws7604 and cmdbws. These folders should have three subfolders named axis2-web, META-INF, and WEB-INF. If these folders are missing, copy them from an available working environment.
  7. If this is a Linux environment, make sure that the following environment variables are set before starting BMC Atrium Web Services. (These variables can also be set in the bmcAtriumSetenv.sh file. After editing the file, restart the Tomcat services.)
    • ATRIUMCORE_HOME=/opt/bmc/AtriumWebRegistry
    • export ATRIUMCORE_HOME
  8. Sometimes, there can be communication issues between the BMC Network Automation web services and the BMC Network Automation database. If the issue persists even after following the above steps, restart the BMC Network Automation web services.
    • For Windows environments, the BMC Network Automation web services are listed as BCA-Networks Web Server under services.msc and can be started or stopped from there.
    • For Linux environments, use the following command to stop or start the BMC Network Automation web services:
      service enatomcat stop / start
      The enatomcat services are listed in the /etc/init.d directory.
  9. If the problem persists after completing this procedure, contact BMC support.
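
As a scripted alternative to the browser check in step 3, the following minimal Python sketch probes the Atrium Web Services endpoint and looks for the expected "enable REST support" banner. The server name and port in the URL are placeholders; substitute your own values.

import urllib.error
import urllib.request

# Probe the Atrium Web Services endpoint (placeholder server name and port).
URL = "http://AtriumWebServicesServer:9080/cmdbws/server/cmdbws"

try:
    with urllib.request.urlopen(URL, timeout=10) as response:
        body = response.read().decode("utf-8", errors="replace")
except urllib.error.HTTPError as exc:
    # The server answered with an error status; its body may still carry the banner.
    body = exc.read().decode("utf-8", errors="replace")
except OSError as exc:
    # No answer at all: suspect Tomcat, the port, or the network (steps 2 and 3).
    raise SystemExit(f"Endpoint unreachable: {exc}")

print("Expected banner found:", "enable REST support" in body)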

Note

BMC Network Automation 8.2.03 and 8.3 provide better error logging for this issue. If you need to upgrade BMC Network Automation, first review the Base product versions and compatibility for upgrade page.

Back to top

Gathering PCM-specific information for BMC Support

If you need to gather detailed information about PCM behavior so that BMC Support staff can troubleshoot issues, you can increase the logging level for the com.bmc.bcan.engine.network.pcm package in the logging.properties file and restart BMC Network Automation. Subsequent PCM operations then generate detailed diagnostics in the BCA-Networks.log.* log files, which you can send to BMC Support for analysis.

Back to top

Troubleshooting dynamic network containers

Dynamic network containers add another layer to an already complex entity. You should expect to do extensive testing of any new container blueprint that defines conditionals for dynamic behavior. Several tools are available to help with standalone testing of containers, including the following:

  • You can create test containers. See the Creating and testing a dynamic network container and Creating and testing a network container blueprint for a reprovisioning operation subsections of Creating dynamic container outputs.
  • You can control whether BMC Network Automation performs a trailing commit in configure or unconfigure Deploy to Active actions using the vdcCommitContainerActions property in the global.properties file. This property has a default value of true.
  • You can skip the rollback logic after a failed provisioning attempt using the vdcSkipProvisioningRollback property in the global.properties file. This property has a default value of false.

    For example, if the vdcSkipProvisioningRollback flag is set to true and you attempt to provision a container, and it fails with an error in action 19 out of 20, the deprovisioning rollback is skipped and the container is left in a partially provisioned state. You can then fix the error and execute a modify operation in which you do not change any toggle states. This causes BMC Network Automation to execute actions 19 and 20 from the original provision attempt, which completes the provisioning logic.
  • You can skip the rollback logic after a failed modify attempt using the skipUnmodifyFlag flag in the modify container API. This flag has a default value of false.

    For example, if the flag is set to true and you attempt to modify a container, and it fails with an error in action 19 out of 20, the unmodify rollback is skipped and the container is left in a partially modified state. You can then fix the error and re-execute the modify operation. This causes BMC Network Automation to execute actions 19 and 20 from the original attempt, which completes the modification logic.
  • You can ignore any failed actions during the modify sequence using the ignoreModifyErrorsFlag flag in the modify container API. This flag has a default value of false.
  • You can ignore any failed actions during the unmodify (rollback) sequence after a failed modify attempt using the ignoreUnmodifyErrorsFlag flag in the modify container API. This flag has a default value of false.
  • When a modify operation fails, and the automatic unmodify (rollback) operation also fails, the container is left in a partially modified state that might disrupt existing traffic, depending on the type of modifications being attempted. If the modifications are purely additive to the underlying configurations of the network devices involved, existing traffic is probably not disrupted. If, however, the modifications involve a mixture of additions and subtractions to those configurations, existing traffic might be disrupted.

    To recover from such failures, the administrator must fix the underlying cause of the failures in BMC Network Automation, then try the modification again. Generally, such fixes involve editing the underlying BMC Network Automation templates involved to correct either syntactical or logical errors, which the network devices rejected and which caused the earlier jobs to fail. The administrator must inspect the details of the failed jobs in BMC Network Automation to determine the errors in question and how they must be corrected. The administrator must possess network engineering expertise and detailed knowledge of the network container architecture in question to be able to resolve this issue.

  • You can replace the contents of a template copy within a container from the Container Details page in the BMC Network Automation user interface by performing the following steps:
    1. Open the Containers page by navigating to Network > Virtual Data Center > Containers.
    2. Locate the container in the listing that has the template copy you want to edit and click the View icon.
    3. On the Container Details page, click the Action Info Template link for the template copy.
    4. On the pop-up that displays the template information, click Edit.
    5. Edit the information in the Contents field.
    6. Click Save.

  • You can view the contents of a template copy within a container using the BMC Network Automation Containers viewer (navigate to Network > Virtual Data Center > Containers in the user interface). This is useful as a means of verifying the current contents of a template within a container after you have modified it, as described above.

IP Address Management

BMC Network Automation supports an integrated IP Address Management (IPAM) system, and it supports third-party IPAM systems via BMC Atrium Orchestrator (AO) adapters. Out-of-the-box AO adapters are provided for the Lucent VitalQIP and Infoblox IPAM systems. See Enabling IP address management.

Integrating a new third-party IPAM system

To integrate a new IPAM system into BMC Network Automation, you need to write an AO adapter for it. See Enabling IP address management for a link to the BMC Atrium Orchestrator online technical documentation that describes how to create a custom AO adapter.

Supporting NAT in a container

Static destination NAT is supported out-of-the-box. For more information, see About network address translation.

Dynamic source Network Address Translation (NAT) within VLBs is supported as an optional feature within the poolTypeBlueprint tags in container blueprints. It simplifies the delivery of traffic between a Virtual Loadbalancer (VLB) and the Virtual Machine (VM) Network Interface Controllers (NICs) it services.

Dynamic NAT can also be implemented in a VFW node. To do this, you define an address pool within the container that is dedicated for this purpose, and a NAT type in the node that has an address translator flagged as dynamic. You then refer to that address pool via substitution parameters in the template used to configure the VFW device for dynamic NAT. You must not acquire addresses from that pool for any other purpose within the container.

The path rule translation algorithm in BMC Network Automation is sensitive to dynamic NAT performed in the VLB, but it is not currently sensitive to dynamic NAT performed in the VFW.

Support of IPv6

IPv6 is supported at a basic level in BMC Network Automation. However, it is not yet supported in BMC Cloud Lifecycle Management.

Private and public subnets

A private subnet is a subnet of which there can be multiple copies in IPAM. A public subnet is a subnet of which there can only be a single copy in IPAM. In BMC Network Automation, each pod and container can have its own copy of a given private subnet.

Shared and non-shared subnets

A shared subnet is a subnet which can be used by multiple pods. A non-shared subnet is a subnet used exclusively by a single pod. So for instance, if a subnet is being shared by multiple pods, containers in each pod can acquire addresses from the same subnet. Each address in the subnet can be acquired only once.

Excluding addresses

Addresses can be excluded from use via the IPAM user interface or the PCM API (AddressingService.excludeIPAddresses). Such exclusions are currently applied globally, however. This means that if there are multiple private instances of the subnet 10.0.0.0/24 and you exclude the address 10.0.0.10, it is excluded in every instance. If you need to exclude an address locally, within a particular subnet instance, instead of globally, use the PCM API (AddressingService.acquireRogueAddress) to acquire it as a rogue address within whatever container owns the subnet. You can find additional information in the PCM API documentation. See Using the API for information about accessing the API documentation.

Back to top

Loadbalancers

Configuring vendor-specific devices lists the load balancer device types that are supported out of the box.

Supporting a new virtual LB type in a container

To add support for a new VLB type, you must extend the following custom actions with logic for the device type in question: Create Pool, Delete Pool, Add New Server to Pool, Delete Server from Pool, Enable Pool Member, and Disable Pool Member. If you intend to use active active fault tolerance, the underlying host pair must adhere to one of the two styles of active active fault tolerance currently modeled: community style or individual style. See Container model.

Sharing a single physical LB across multiple containers

Because the F5 LB does not adequately support virtualization (creation of a new guest device), BMC Network Automation supports sharing a single physical F5 LB across multiple containers.

Specifying additional options when creating an LB pool

There is currently no flexible way to do this, in which the administrator specifies values for the additional options at LB pool creation time (or at service blueprint authoring time). The best approach is to create multiple versions of the LB pool custom actions, each with a different set of hard-coded values, and have different versions of your container blueprints reference different versions of the LB pool custom actions.

An enhancement is planned to allow specifying additional options in the BMC Cloud Lifecycle Management UI, which are then passed down as additional runtime parameters to the BMC Network Automation custom actions.

Back to top

Firewalls

Configuring vendor-specific devices lists the firewall device types that are supported out of the box.

Supporting a multi-ACL FW in a container

This is supported out of the box now. See Multiple ACLs on virtual firewalls.

Populating default firewall rules entries

Sometimes customers have several default entries that they want to insert into the VFW ACL during container creation. This is not yet supported out of the box. In the meantime, it can be supported via a BMC Cloud Lifecycle Management AO callout after the container is created, which makes a call on the BMC Network Automation FirewallService to populate the desired initial values for the ACL in question.

Most of the entries in the default set are the same from one container instance to the next and can be hard-coded into the callout. Some of the entries, however, might be dynamic and a function of the address pools in use by that particular container (for example, permit rules for internal traffic on those subnets). The callout might need to query the BMC Network Automation ContainerService to fetch the ContainerInfoDTO for the container in question, and dig into it to inspect the AddressPoolInfoDTO for the networks of interest in order to include the necessary dynamic values.

Back to top

Hypervisor switches

Currently we support the Cisco Nexus 1000V, VMware vSwitch (standalone and distributed), and RHEV switch.

Supporting a new hypervisor switch type in a container

This can be done by adding a device adapter for the new switch type. The switch must support the concept of a virtual port type that can be created on it when BMC Network Automation configures it. The virtual port type is associated with a particular VLAN. Later, the virtual port type is used to create a virtual port that connects a particular VM NIC to that VLAN. The act of using the virtual port type to create the virtual port during VM NIC provisioning is performed by BMC Server Automation talking to the hypervisor management software (for example, vCenter).

Specifying the hypervisor context of a hypervisor switch

Previously, you specified the name of either the hypervisor cluster (for example, an ESX cluster) or the hypervisor host (for example, an ESX host) for which the hypervisor switch controls network traffic. This information is no longer specified in BMC Network Automation; it is specified only in BMC Server Automation.

Associating a single hypervisor switch with multiple clusters

Because the association in BMC Network Automation between hypervisor switches and hypervisor clusters is no longer specified, there is nothing special you need to do in BMC Network Automation to support this.

Supporting a multi-cluster hypervisor switch

We support this out of the box now.

Back to top

Configuring UCS nodes

In the current BMC Network Automation release, there is no official device adapter for communicating with the UCS device (for example, a UCS 6120), so the current Cisco reference architecture does not include a node for the UCS device in the pod or in containers created on the pod. The assumption is that the UCS device is manually configured to trunk all possible VLANs that containers on that pod might need to use, so that VLANs do not need to be added or removed in the UCS configuration as containers are created and deleted.

However, a device adapter for the UCS device (using its CLI) was developed in the field and can be used if the above approach does not meet your needs. Contact BMC Support to have this device adapter added to your BMC Network Automation environment. BMC Support will import the device adapter into your environment, represent the UCS as a node in both the pod and container blueprint, and create a template for the container blueprint to use that configures the UCS with the new VLANs needed.

You might have multiple UCS devices, each associated with different N1K switches (for example, one UCS might be associated with two N1Ks, and another UCS with two other N1Ks). A given container instance might use one N1K on a particular UCS, while another container instance uses a different N1K on a different UCS, to balance the use of the N1Ks across the available hardware (based on round-robin selection of pod nodes to host container nodes on, where the pod and container node roles match up). In this case it is wise, for scalability reasons, for a container to add its VLANs only to the configuration of the particular UCS device that actually manages the N1K used by that container. This minimizes the number of virtual ports consumed by the VLAN within the UCS devices, so that you do not run out of virtual ports on the UCS devices too soon.

With multiple UCS devices, it is important to construct the pod and container blueprints such that, for each container instance that is created, the UCS node and N1K node in that container are guaranteed to be hosted on UCS and N1K nodes in the pod where the underlying N1K device is controlled by the underlying UCS device. To achieve this, carefully specify the role values for the pod nodes and container blueprint nodes, and ensure that pod nodes are selected in a true round-robin manner by setting the global property roundrobinPodNodeSelection to true (its default value is false).

Back to top

Active Standby fault tolerance

In BMC Network Automation, the style of fault tolerance expected in the services (FW/LB) layer is active active style. If active standby style is being used instead, that can be supported as well.

For the FWSM/ASA, active standby fault tolerance is represented using a single FW host node encapsulating a single FW device in BMC Network Automation, with a virtual IP address on which the VFW is created. The fact that there are actually two FW hosts behind the scenes, and that the VFW might migrate between the two as faults occur, is transparent to BMC Network Automation.

For the ACE, active standby fault tolerance is represented using a pair of active active LB host nodes encapsulating two LB devices in BMC Network Automation that both have the same virtual IP address on which the VLB is created. This must be done because the ACE still requires a unique fault ID to be specified when creating the VLB context in active standby mode, and management of fault IDs for the ACE is currently supported by BMC Network Automation only for an active active pair. BMC Network Automation has been modified to make this work for active standby as well: when the two underlying host devices share the same address, it skips the inspect-fault-host action used to find out where the active admin context is, and instead assumes that the first node in the pair always has the active admin context.

Back to top

Naming patterns

The subsections that follow contain information about naming patterns.

Creating a container which uses a customer-defined naming pattern for port types

This can be done by specifying the nameWithinSwitch tag within the portTypeBlueprint tag. The value can contain substitution parameters and whatever other syntax you desire. For instance, if the nameWithinSwitch value in the blueprint is "${container.name}.MyPortType", then when you create a container named MyContainer using that blueprint, the port type receives "MyContainer.MyPortType" as its nameWithinSwitch value. The nameWithinSwitch value is what BMC Cloud Lifecycle Management uses to identify the port type in vCenter when a VM NIC is provisioned with it.
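
The following Python sketch illustrates the expansion. The expand helper is hypothetical and only mimics the substitution behavior; it is not the product's actual substitution engine.

import re

# Expand ${...} substitution parameters in a blueprint value (illustration only).
def expand(value, params):
    return re.sub(r"\$\{([^}]+)\}", lambda m: params[m.group(1)], value)

print(expand("${container.name}.MyPortType", {"container.name": "MyContainer"}))
# Output: MyContainer.MyPortType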

Creating a container which uses a customer-defined naming pattern for virtual contexts

The guestDeviceName tag can be used to fully specify the name of the device (and virtual context) to use in guest nodes. The tag allows embedded substitution parameters. If you are creating a new device, its name must be unique across the system. Normally you would specify the name using a value such as:

guestDeviceName = ${container.name}.VFW

If you were provisioning a container named "MyContainer", the resulting guest device name would be "MyContainer.VFW". Alternatively, you could specify the name using a value such as:

guestDeviceName = ${runtime.customer}.VFW

If you were provisioning a container in which the "customer" runtime parameter value was specified as "ACME", the resulting guest device name would be "ACME.VFW". (This is advisable only if a given customer is expected to create a single container per pod.)

Back to top

Provisioning containers

The topics in this section contain information about provisioning containers.

Creating a container to represent an existing Virtual Data Center

If a cloud provider already has VDCs configured on its equipment, with VLANs and addresses already configured, and wants to represent them as network containers in BMC Network Automation, two approaches are possible:

  • Backdoor onboarding: One way is to create XML files to represent each of the existing network containers and import them into BMC Network Automation using the CLI. They must then be onboarded into BMC Cloud Lifecycle Management using a backdoor API call that is normally used only by the upgrader.
  • One blueprint per container: Another way is to create a pod that has a set of small VLAN pools and address ranges defined within it, one for each container that needs to be created. You then create a separate container blueprint for each container. Doing this allows you to fully control which VLANs and addresses are assigned to a given container. The container blueprint would specify an empty set of actions for configuring the devices in the container, because they are already configured to use those resources.

    Notes

    • Both of these approaches assume that you can define empty lists of configure and unconfigure actions for each node in the container, so that configuring or unconfiguring the container does not actually touch any devices on the customer network. There is a constraint, however: the job to configure or unconfigure the container must have at least one action, or the BMC Network Automation job framework throws an error. To work around this, define at least one dummy action. The dummy action can be a merge action that pushes out a do-nothing template (a template containing just a comment line), a custom action that logs in to the device and executes a command that does not change the device's configuration, or an external script action constructed to the same effect.
    • BMC Network Automation allows a guest device username/password to be fully controlled from the container blueprint.
    • Neither of these approaches supports onboarding of network environments with load balancers or firewalls.

If your existing VDC uses different usernames and passwords for each guest context (VFW and VLB) in your environment, you can configure your container to use unique credentials for each guest context by configuring the guestAuthenticationBlueprint tag in the container blueprint. The child tags of the guestAuthenticationBlueprint tag also support the use of substitution parameters. While creating the container, the BMC Network Automation application server creates a new device security profile (DSP) or reuses the already created host node DSP, based upon the parameters provided in the blueprint, and uses this DSP when performing any job on that guest device. For more information about the guestAuthenticationBlueprint tag, see Container blueprint XML reference.

In the following example, the guestAuthenticationBlueprint tag and its child tags are added in a container blueprint within a nodeBlueprint of type containerFirewallHostBlueprint or containerLoadBalancerHostBlueprint:

<nodeBlueprint xsi:type="containerFirewallHostBlueprint">
    <guestAuthenticationBlueprint>
        <guestDspName>Dsp-1</guestDspName>
        <guestLoginUserName>userA</guestLoginUserName>
        <guestLoginPassword>passA</guestLoginPassword>
        <guestPrivilegedUserName>puserB</guestPrivilegedUserName>
        <guestPrivilegedPassword>p-passB</guestPrivilegedPassword>
    </guestAuthenticationBlueprint>
</nodeBlueprint>

Container blueprint does not show up in the BMC Cloud Lifecycle Management UI for provisioning a container on a pod

Only container blueprints which are compatible with the currently chosen pod show up in the selection list in the BMC Cloud Lifecycle Management UI during container provisioning.

If you have imported a new container blueprint into BMC Network Automation and do not see it in the BMC Cloud Lifecycle Management UI in the list of container blueprints available to use with a given pod, BMC Network Automation has detected an incompatibility between the new container blueprint and the pod. There is currently no way to display a description of the incompatibility from the BMC Cloud Lifecycle Management UI; however, you can use SoapUI to make a call on the BMC Network Automation web service API to get this information. The method to call is ContainerService.describeIncompatibility.

Allowing the customer to define the subnets used for provisioning VMs in a container

This is now supported out of the box by using address spaces to provide the address pools in a container, rather than address ranges in the pod. See Network container model.

Passing in runtime parameters during container provisioning

The BMC Cloud Lifecycle Management container provisioning UI has an Additional Parameters section where you can enter miscellaneous name-value pairs. Currently it does not prompt you with the names of the runtime parameters required by a given container blueprint (the API to query this information from BMC Network Automation exists, but BMC Cloud Lifecycle Management does not use it yet).

Back to top

Modifying containers

The topics in this section contain workarounds for modifying containers.

Adding a new network

The ability to toggle an existing network was implemented. See Dynamic network containers. The ability to toggle an entirely new network will be implemented in a future major release.

Adding a new zone

The ability to toggle an existing zone (as a side effect of toggling a network it encapsulates) was implemented. See Dynamic network containers. The ability to toggle an entirely new zone (as a side effect of toggling a network it encapsulates) will be implemented in a future major release.

Adding a new VFW

The ability to toggle an existing VFW (as a side effect of toggling a network it services) was implemented. See Dynamic network containers. The ability to toggle an entirely new VFW (as a side effect of toggling a network it services) will be implemented in a future major release.

Adding a new VLB

The ability to toggle an existing VLB and toggle an entirely new VLB was implemented. See Dynamic network containers.

Adding a new switch

The ability to toggle an existing switch was implemented. See Configuring a reprovisioning operation in BMC Network Automation for network containers.

Changing the QoS of a network in a container

This is not currently supported. The ability to make miscellaneous modifications like this to a container is currently planned for a future major release.

Increasing the size of an address pool in my container

This is not currently supported.

Upgrading a static container to become flexible

A static 8.1.2 container that is upgraded to version 8.3.00 will have all of its components locked into the enabled state, because the templates within it that configure and unconfigure them are not written to do so independently. Toggling entirely new components within the upgraded container will not be possible until a future major release.

Moving a VM from one container to another

This is not currently supported, but it is scheduled for a future major release.

Back to top

Modifying pods

The topics in this section contain workarounds for modifying pods.

Adding a new device to a pod

This is not currently supported. Support for a pod editor to do this is planned for a future major release.

Editing a pod

Editing a pod lists the steps that you should perform to edit a pod. The BCAN_HOME/public/bmc/bca-networks/csm/bsh directory includes BeanShell scripts to update and release the gateway of a container address pool. A wrapper script for invoking these BeanShell scripts is located in the BCAN_HOME/tools directory.

Back to top

Blueprints

Developing new PCM blueprints

This is discussed in Understanding out of the box content.

XML schema definition for PCM blueprints

There is no XML schema definition for PCM blueprints. We use JAXB to automatically translate between our Java model and its XML representation. It is possible to have JAXB generate a corresponding XSD file; however, it does not handle class hierarchies correctly for our purposes. It might be possible to use a tool (for example, Trang) to generate an XSD file from a sample XML file, but that approach has some rough edges as well. This is planned for a future major release.

Documentation of the XML format for PCM blueprints

See Pod blueprint XML reference and Container blueprint XML reference.

Templates

You can find a list of all PCM substitution parameters allowed in Pod substitution parameter syntax and Container substitution parameter syntax. The Substitution Parameters help menu in the BMC Network Automation template editor also lists these.

Back to top

Using Private VLANs

PVLANs are implemented by defining a port type in the container that refers to a pod-level NIC segment, which in turn encapsulates a pod-level VLAN.

The following code snippets show example PVLAN implementations:

Pod blueprint example

<nicSegmentBlueprint>
     <addressPoolName>Pod-AddressPool</addressPoolName>
     <name>Pod-NIC-Segment</name>
     <networkName>Pod-Network</networkName>
     <vlanName>Pod-VLAN</vlanName>
     <managementFlag>true</managementFlag>
</nicSegmentBlueprint>

Container blueprint example

<nodeBlueprint xsi:type="containerHypervisorSwitchBlueprint" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
     <addressBlueprints/>
     <category>2</category>
     <configureActionInfoBlueprints>
          <configureActionInfoBlueprint xsi:type="mergeActionInfoBlueprint">
               <name>Access1 Configuration</name>
               <templateGroups>
                    <item>Access1-PVLAN-CONFIG</item>
               </templateGroups>
          </configureActionInfoBlueprint>
     </configureActionInfoBlueprints>
     <name>Access1</name>
     <numVrfs>0</numVrfs>
     <role>Access1</role>
     <unconfigureActionInfoBlueprints>
         <unconfigureActionInfoBlueprint xsi:type="mergeActionInfoBlueprint">
               <name>Access1 Unconfiguration</name>
               <templateGroups>
                    <item>Access1-PVLAN-UNCONFIG</item>
               </templateGroups>
         </unconfigureActionInfoBlueprint>
     </unconfigureActionInfoBlueprints>
     <portTypeBlueprints>
         <portTypeBlueprint>
               <name>Access1 PVLAN PortType</name>
               <nicSegmentName>Pod-NIC-Segment</nicSegmentName>
               <nameWithinSwitch>${container.name}.Access1-PVLAN-PortType</nameWithinSwitch>
         </portTypeBlueprint>
     </portTypeBlueprints>
</nodeBlueprint>

Here are some example templates for creating a PVLAN on a Cisco Nexus 1000V:

Access1-PVLAN-CONFIG example

vlan 300
private-vlan primary
private-vlan association ${container.nodes[Access1].portTypes[Access1-PVLAN-PortType].vlan}
exit
vlan ${container.nodes[Access1].portTypes[Access1-PVLAN-PortType].vlan}
private-vlan isolated
exit
port-profile type vethernet ${container.nodes[Access1].portTypes[Access1-PVLAN-PortType].name}
vmware port-group
switchport mode private-vlan host
switchport private-vlan host-association 300 ${container.nodes[Access1].portTypes[Access1-PVLAN-PortType].vlan}
no shutdown
state enabled
exit

Access1-PVLAN-UNCONFIG example

no vlan 300
no vlan ${container.nodes[Access1].portTypes[Access1-PVLAN-PortType].vlan}
no port-profile type vethernet ${container.nodes[Access1].portTypes[Access1-PVLAN-PortType].name}

The configurations used above were found in this Cisco document: http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_3/port_profile/configuration/guide/n1000v_portprof_6pvlan.html

Note

PVLAN configuration is not supported in VMware vSwitch.

Back to top

