Container model

A container is composed of general elements (such as resources, network segments, zones, nodes, and pairs) and security elements (such as virtual firewall nodes, network paths, and path rules), which are described in the sections that follow.

A container also maintains a reference to the name of the blueprint from which it was created. A container name must be unique among all containers. The ID is a number generated by BMC Network Automation that is also unique among all containers. The type is a string value used purely for self-documenting purposes. You can modify or delete the blueprint used to create a container without affecting the container that was created earlier. The structure of container blueprints mirrors the structure of containers, but it represents a reusable formulation of the information that is present within a container, which is pod-independent. For more information about creating and importing containers, see Managing network containers.

The states of certain components within a container can be toggled after the container is provisioned, which causes resources associated with those components to be acquired and released as needed. 

Simple attributes of the container can be referenced from templates used to configure it by using one of the following substitution parameters:

${container.name}
${container.id}
${container.type}
${container.blueprint}
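
For example, a device template might embed these attributes in descriptive settings such as the following (the IOS-style commands are illustrative only and are not taken from a shipped template):

! Illustrative IOS-style commands showing container attribute substitution
snmp-server location Container ${container.name} (ID ${container.id}, type ${container.type})
snmp-server contact Provisioned from blueprint ${container.blueprint}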

Back to top

General elements in the container

The container contains the following types of general elements:

Resources

Resources owned by the container can be categorized as resource collections and resource elements. Resource elements can be acquired either from a collection owned by the container, or a collection owned by the pod. Starting with version 8.7.00.001, resources are acquired in the order in which they are defined in the container blueprint, unless conditions are defined for them. In previous versions, resources were acquired in a random order. You can view the order in the Order column of the container details page. For containers provisioned in version 8.7.00.001 or later, this column shows the true order in which resources were acquired during provisioning. For upgraded containers, this column shows numbers that were randomly assigned to the acquired resources.

Resource collections

A resource collection is an ordered set of resource elements. At the container level this refers to address spaces, whose elements are container level address pools.

Address spaces

A container blueprint can have two types of container address spaces:

  • Infrastructure-oriented address space, which contains address pools that are used exclusively for infrastructure addresses.
    • It is a private address space, which means address pools acquired would have a container name as the scope.
    • A container administrator is not allowed to override the values of the address space and pool size by default.
  • Provision-oriented address space contains address pools that must be used for provisioned addresses (VM NICs, VIPs, or SNAT blocks), but they can also be used for infrastructure addresses as needed.
    • It can be a public address space (unique within an address provider) or a private address space (unique within each container).
    • A container administrator is allowed to override the values of address space and pool size by default.

Resource elements

The resource elements within a container include VLANs, address pools, and integers. Each resource element has a name that is unique among other resource elements of the same type within the container.

VLANs

A container acquires VLANs from VLAN pools in the pod. Each acquired VLAN has an ID and a name. The name, and the VLAN pool from which to acquire the VLAN, come from the container blueprint.

The ID of VLANs acquired by a container can be referenced by templates used to configure the container using the following syntax:

${container.vlans[<vlan-name>]}

The VLANs for a given container are acquired from VLAN pools in the pod on a first come first served basis, according to the order they are defined within the container blueprint. A VLAN acquired from a VLAN pool is guaranteed to be the lowest available VLAN within that pool.
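
For example, a switch template might create the acquired VLAN as follows (Data-VLAN is an illustrative VLAN name from the blueprint, and the commands are IOS-style examples only):

! Data-VLAN is an illustrative VLAN name defined in the container blueprint
vlan ${container.vlans[Data-VLAN]}
 name ${container.name}-data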

Address pools

A container acquires address pools from address ranges in the pod. A container address pool has an address and a mask, an optional gateway address, and an ID that refers to the corresponding pool within IPAM. The container blueprint specifies which address range to acquire the pool from. The gateway address associated with the pool is populated when an address is acquired from it that is flagged as a gateway address.

Attributes of a container address pool can be referenced by templates used to configure the container using the following substitution parameter:

${container.addressPools[<address-pool-name>].<attribute>}

where <attribute> can be gatewayAddress, broadcastAddress, or subnetMask.
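
For example, a template might use the pool attributes to configure a VLAN interface (Data-Pool and Data-VLAN are illustrative names, and the commands are IOS-style examples only):

! Data-Pool and Data-VLAN are illustrative names from the blueprint
interface Vlan${container.vlans[Data-VLAN]}
 ip address ${container.addressPools[Data-Pool].gatewayAddress} ${container.addressPools[Data-Pool].subnetMask}
 no shutdown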

The address pools within a container are acquired from the address ranges in the pod on a first come first served basis, according to the order they are defined within the container blueprint. An address pool acquired from an address range is guaranteed to be the lowest available pool within that range.

Addresses

A container acquires addresses from address pools that are either owned by the container or by the underlying pod. An acquired address will have a dotted decimal value, a flag indicating whether or not it is a virtual IP, and a flag indicating whether or not it is a gateway address. The container blueprint specifies which container or pod level address pool to acquire the address from, along with the flag values.

The dotted decimal value of an acquired address can be referenced by templates used to configure the container using the following syntax:

${container.addresses[<address-name>]}

Templates can also refer to the subnet mask of the address pool the address came from using the shorthand or CIDR format syntax:

${container.addresses[<address-name>].subnetMask}

${container.addresses[<address-name>].subnetMask.CIDR}

The addresses within a container are acquired from address pools on a first come first served basis by default, according to the order they are defined within the container blueprint. Each address acquired is the lowest address available in the pool. Alternatively, an explicit pool position value can be specified for an address in the blueprint. The value specifies a particular position within the pool from which the address must come. If, for instance, you want to require that an address receive the .1 value from its address pool, you would specify a pool position value of 1 in the blueprint.
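
For example, a template might assign an acquired address to an interface using either mask format (Mgmt-Address and Mgmt-VLAN are illustrative names; the first form is IOS-style and the second is NX-OS-style, both shown only as examples):

interface Vlan${container.vlans[Mgmt-VLAN]}
! IOS-style form, dotted decimal subnet mask
 ip address ${container.addresses[Mgmt-Address]} ${container.addresses[Mgmt-Address].subnetMask}
! NX-OS-style form, CIDR prefix length
 ip address ${container.addresses[Mgmt-Address]}/${container.addresses[Mgmt-Address].subnetMask.CIDR}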

Integers

A container acquires integers from integer pools in the pod. Each acquired integer has an integer value and a name. The name, and the integer pool from which to acquire the integer, come from the container blueprint.

The value of integers acquired by a container can be referenced by templates used to configure the container using the following syntax:

${container.integers[<integer-name>]}

An integer acquired from an integer pool will be the lowest available integer within that pool.
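
For example, a template might use an acquired integer to build a VRF route distinguisher (VRF-Id is an illustrative integer name and 65000 is an example autonomous system number; the commands are IOS-style examples only):

! VRF-Id is an illustrative integer name; 65000 is an example ASN
vrf definition ${container.name}
 rd 65000:${container.integers[VRF-Id]}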

Back to top

Network segments

A network segment is an entity which encapsulates a subnet that can be used as an endpoint in a firewall rule. This could either be an external network segment (for example, customer network segment), where the VLAN and subnet exist outside the system being managed, or an internal network segment (for example, NIC or VIP network segment), where the VLAN and subnet exist inside the system being managed. Network segments serve as endpoints in the network paths defined in a container.

A network segment has a name, but can also be annotated with a label (which for historical reasons is called a networkName attribute), and a tag value. A BMC Network Automation network segment corresponds to what BMC Cloud Lifecycle Management calls a network. The label and tag values from a BMC Network Automation network segment are used to initialize the corresponding values in the BMC Cloud Lifecycle Management network.

External network segments

An external network segment is a network segment where the clients that talk to servers inside the container reside. This could be the internet (in which case the subnet will have a network address and mask of 0), or a corporate network.

Note

A subnet involved in an external network segment is not one acquired by BMC Network Automation from an IP Address Management (IPAM) system.

Internal network segments

An internal network segment is a network segment encapsulating a VLAN and address pool owned by the container, used either for connecting server NICs or load balancer VIPs.

NIC network segments

A NIC network segment is an internal network segment where server NICs can be connected. In the case of a container level NIC network segment, it encapsulates a container level VLAN (for example, data VLAN used exclusively by the container), and a container level address pool (for example, data subnet used exclusively by the container).

VIP network segments

A VIP network segment is an internal network segment where load balancer VIPs can be connected.

Back to top

Zones

A zone groups together one or more internal (NIC/VIP) network segments. This is a mechanism purely for the convenience of a container designer who wants to think of related segments as being organized into a group. Typically, though not necessarily, the segments within a zone are related by the sharing of common security characteristics (that is, they are all protected by the same firewall interface). Network segments within a zone must be uniquely named; however, segments in different zones can share the same name.

For example, a container could have two zones, named production and pre-production, with each zone encapsulating NIC segments named web and db.

Use of zones is optional. The attributes of a zone can be referenced from templates used to configure the container using the following substitution parameter:

${zone.name}

Back to top

Nodes

Nodes encapsulate information about the network devices used within the container. A node has a name, role, and a reference to a device. A node also has a list of actions defined for how to configure and unconfigure its device, and it specifies the number of VRFs that will be configured on it. A node also has a list of acquired addresses that it owns. A node can also optionally define NAT translations that it is capable of performing for NIC/VIP addresses acquired from the container. 

A given container node will be hosted on a particular pod node, based on the nodes having matching roles. If there is more than one pod node with a matching role, the one which has hosted the fewest number of container nodes will be used, in order to balance the workload.

The container node name uniquely identifies it among all nodes within the container. The node role is a string which identifies the part which the node plays within the container. Refer to the pod node section for further discussion about node roles.

There are three types of actions supported for configuring and unconfiguring a node. The most common type is a Deploy to Active action, which specifies a template to merge to a device's running configuration. The second type is a custom action, which allows miscellaneous communication with a device. The third type is an external script action, which allows execution of an external script to perform a miscellaneous function.

For Deploy to Active actions, templates can have embedded references to the ${runtime.*}, ${pod.*}, and ${container.*} substitution parameters. These substitution parameters are resolved before the template is merged, using runtime parameters passed in by the cloud admin, and attributes of the pod and container instance in question. You can specify up to 255 characters in a runtime parameter value. Normally a single template is associated with a Deploy to Active action, but multiple templates can be associated as well. When this is the case, the choice of which template to use among them alternates from one container instance to the next, allowing you to alternate the way in which containers are configured. This mechanism is called balanced templates.
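
For example, a Deploy to Active template fragment might combine runtime and container parameters as follows (customerId is an illustrative runtime parameter name and Data-VLAN an illustrative VLAN name; the commands are IOS-style examples only):

! customerId is an illustrative runtime parameter supplied by the cloud admin
interface Vlan${container.vlans[Data-VLAN]}
 description ${runtime.customerId} - managed by container ${container.name}
 no shutdown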

If multiple Deploy to Active actions are defined for a given node, the templates involved are concatenated together and merged via a single action in the configure or unconfigure job. Doing this boosts performance by not having to log out and back in to the same device in consecutive actions.

Given this concatenation behavior, you must make sure that your templates do not contain unnecessary exit statements at the end, which could interfere with this logic.

For custom and external script actions, a list of runtime parameters to pass to those actions can be defined in the blueprint. The parameter values can contain embedded references to ${pod.*} and ${container.*} values, which are resolved before they are passed into the action. The list of runtime parameters defined in the blueprint is augmented by whatever runtime parameters were passed in by the cloud admin, before they are passed into the action. You can specify up to 255 characters in a runtime parameter value.

The addresses acquired by a node are intended for addresses specific to that node (for example, an address for a VLAN interface within its device). Addresses acquired by the container are intended for addresses that are not specific to any particular node. The distinction between node level and container level addresses is purely a namespace convenience. Node level addresses only need to be named uniquely among other addresses within the same node.

The dotted decimal value of node level addresses can be referenced from templates using the syntax:

${container.node.addresses[<address-name>]}

Templates can also refer to the subnet mask of the address pool the address came from using the shorthand or CIDR format syntax:

${container.node.addresses[<address-name>].subnetMask}
${container.node.addresses[<address-name>].subnetMask.CIDR}

Note that attributes of the device encapsulated by the node can also be referenced from templates using one of the following substitution parameters:

${container.node.device.name}
${container.node.device.address}
${container.node.device.user}
${container.node.device.password}

Also note that ${container.node.*} substitution parameters can only refer to attributes of the node encapsulating the device which that template is merged to. If the template needs to refer to attributes of a different node in the container, the following alternate syntax is supported.

${container.nodes[<role-name>].*}
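
For example, a template merged to one node might reference an address owned by another node (Core-Router and Uplink-Address are illustrative names, and the attribute path after the role name is assumed to mirror the ${container.node.*} form):

! Core-Router and Uplink-Address are illustrative; the attribute path is assumed to mirror ${container.node.*}
ip route 0.0.0.0 0.0.0.0 ${container.nodes[Core-Router].addresses[Uplink-Address]}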

Special nodes

Certain types of nodes within the container are special and require additional information to be captured for them. These include nodes that encapsulate access switches to which server NICs can be attached, nodes that encapsulate load balancer hosts on which VLBs will be created, and nodes that encapsulate firewall hosts on which VFWs will be created. These special nodes are represented as different node sub-types (sub-classes) within the model.

Access switch nodes

Nodes that encapsulate access switches define virtual port types, which are used to attach VM NICs to VLANs. The actions to configure and unconfigure these nodes must create and delete these virtual port types within the device. The difference between virtual port types defined in a pod access switch node and those defined in a container access switch node is that the ones defined at the pod level are used for attaching VM NICs to pod level VLANs, while the ones defined at the container level are used for attaching VM NICs to container level VLANs.

Note that attributes of virtual port types defined in the container switch node can be referenced from templates using the syntax:

${container.node.portTypes[<port-type-name>].name}
${container.node.portTypes[<port-type-name>].vlan}
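
For example, a container access switch template might create a port profile from a virtual port type (Web-Port is an illustrative port type name, and the commands are Nexus 1000V-style examples only):

! Web-Port is an illustrative virtual port type name from the blueprint
port-profile type vethernet ${container.node.portTypes[Web-Port].name}
 switchport access vlan ${container.node.portTypes[Web-Port].vlan}
 state enabled
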
Hypervisor switch nodes

Nodes to which virtual servers can be attached are modeled as hypervisor switch nodes. Within these nodes are defined virtual port types. Virtual port types are used by the hypervisor switch to create a virtual port for a new VM NIC being attached to it. See the pod access switch nodes section above for additional details regarding virtual port types.

Physical switch nodes

Nodes to which physical servers can be attached are modeled as physical switch nodes. The physical switch ports to which servers can be attached are specified at pod creation time. At server provisioning time, the switch port in question will be configured by BMC Network Automation to connect to the VLAN associated with the target network. The physical switch node in the container specifies a custom action to use to perform this action during server provisioning.

Load balancer host nodes

Nodes on which VLBs will be created are modeled as load balancer host nodes. In the case of the Cisco ACE device, an LB host node would encapsulate the device in BMC Network Automation which represents the admin context within the ACE, since that is the context used to create a new guest virtual context (VLB). These load balancers can be used to balance the load on servers in multiple zones.

When the container is constructed, the LB host node automatically creates a device in BMC Network Automation to represent the new VLB, and creates a new container node to encapsulate it. If, however, your environment is running BMC Network Automation version 8.3.00.001 or later, and a guest device has already been specified in the underlying pod host node, it will be reused instead of creating a new device.

The guest device created in BMC Network Automation is by default assigned a name which is a concatenation of the container name prefix and a guest name suffix specified in the LB host blueprint, as <container-name>-<guest-name>. The primary interface of the guest device will use the same values as the primary interface of the host device (which means its username and password will be reused). The secondary interface of the guest device will be unpopulated.

The snapshot of the new guest device is deferred until the rest of the container has been successfully provisioned. If there is a problem provisioning the container, the newly created guest device will be automatically deleted.

Additionally, the guest device can be shared between two container nodes. The blueprint author has to code node blueprints in such a way that the node blueprints have the same guest name and role name. However, SNAT is not supported in such cases. See Container blueprint XML reference for a detailed discussion of different blueprint attributes.

Note that attributes of the guest device encapsulated by the new node can also be referenced from host templates using the syntax:

${container.node.guestDevice.name}
${container.node.guestDevice.address}
${container.node.guestDevice.user}
${container.node.guestDevice.password}

Virtual load balancer nodes

Nodes which represent a guest context within an LB host are modeled as virtual load balancer nodes. These nodes are created automatically by the LB host node when the container is created. A VLB node knows which client-facing VLAN is connected to it and which types of LB pools can be created on it. It also optionally maintains a pool of integers to use for SNAT block IDs, if any of its LB pool types make use of SNAT. The VLB node keeps track of all LB pools that are added to it, and each LB pool keeps track of each server NIC added to it. These virtual load balancers can be used to balance the load on servers in multiple zones.

An LB pool type is defined by specification of a VIP network segment, one or more NIC network segments it balances traffic for, and optionally a specification of the size of an SNAT block to use and the address pool to acquire the block from.

Refer to the Add LB Pool to VLB use case for details about the contents of the LB pools that BMC Network Automation models. See Networking use cases.

Firewall host nodes

Nodes on which VFWs will be created are modeled as firewall host nodes. In the case of the Cisco FWSM (or ASA) device, a FW host node would encapsulate the device in BMC Network Automation which represents the admin context within the FWSM, since that is the context used to create a new guest virtual context (VFW). These firewalls can be used to service multiple zones.

When the container is constructed, the FW host node automatically creates a device in BMC Network Automation to represent the new VFW, and creates a new container node to encapsulate it. If, however, your environment is running BMC Network Automation version 8.3.00.001 or later, and a guest device has already been specified in the underlying pod host node, it will be reused instead of creating a new device.

The guest device created in BMC Network Automation is by default assigned a name which is a concatenation of the container name prefix and a guest name suffix specified in the FW host blueprint, as <container-name>-<guest-name>. The primary interface of the guest device will use the same values as the primary interface of the host device (which means its username and password will be reused). The secondary interface of the guest device will be unpopulated.

The snapshot of the new guest device is deferred until the rest of the container has been successfully provisioned. If there is a problem provisioning the container, the newly created guest device will be automatically deleted.

Additionally, the guest device can be shared between two container nodes. The blueprint author must code node blueprints in such a way that the node blueprints have the same guest name and role name. See Container blueprint XML reference for a detailed discussion of different blueprint attributes.

Note that attributes of the guest device encapsulated by the new node can also be referenced from host templates by using the following syntax:

${container.node.guestDevice.name}
${container.node.guestDevice.address}
${container.node.guestDevice.user}
${container.node.guestDevice.password}

Back to top

For details about virtual firewall nodes, see Virtual firewall nodes.

Pairs

Nodes within a container that are intended to be configured for processing redundancy (such as HSRP) or fault tolerance are modeled as pairs. If one node's device goes down, processing of packets will be assumed by the other node's device, allowing traffic to continue to flow for the container without interruption. A pair has a name and references to two nodes. A pair can also have its own pair level acquired addresses, and its own actions for configuring and unconfiguring the nodes within the pair. The pair name must be unique among all pairs in the container.

The addresses acquired by a pair are intended for addresses specific to that pair (for example, a virtual address for HSRP). Addresses acquired by the container are intended for addresses that are not specific to any particular pair. The distinction between pair level and container level addresses is purely a namespace convenience. Pair level addresses only need to be named uniquely among other addresses within the same pair.

The dotted decimal value of pair level addresses can be referenced from templates using the following syntax:

${container.pair.addresses[<address-name>]}

Templates can also refer to the subnet mask of the address pool the address came from using the shorthand or CIDR format syntax:

${container.pair.addresses[<address-name>].subnetMask}
${container.pair.addresses[<address-name>].subnetMask.CIDR}

Note that ${container.pair.*} substitution parameters can only refer to attributes of the pair encapsulating the device which that template is merged to. If the template needs to refer to attributes of a different pair in the container, the following alternate syntax is supported:

${container.pairs[<pair-name>].*}
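
For example, a template for one node in the pair might configure an HSRP virtual address from a pair level address (HSRP-VIP and Data-VLAN are illustrative names, and the commands are IOS-style examples only):

! HSRP-VIP is an illustrative pair level address name
interface Vlan${container.vlans[Data-VLAN]}
 standby 10 ip ${container.pair.addresses[HSRP-VIP]}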

Special pairs

Certain types of pairs within the container are special and require additional information to be captured for them. These include pairs that encapsulate active-active fault tolerant host nodes. These special pairs are represented as different pair sub-types (sub-classes) within the model. Refer to the pod special pairs section above for additional details about active-active fault tolerance.

A container fault host pair defines three actions used to configure and unconfigure the guest device on the pair. One action creates the guest context, a second action initializes the guest context, and a third action destroys the guest context. The action to create the guest context is executed on the host which has the currently active admin context. The action to initialize the guest context is executed on the host which you have specified as the active host for the new context. The action to delete the guest context is executed on the host which has the currently active admin context.

The two hosts are inspected via an inspect-fault-host custom action prior to executing the action to create the context, and prior to executing the action to delete the context, so that BMC Network Automation knows which hosts are currently alive, and which one has the currently active admin context.

The choice of which host you want the guest context to become active on is currently made based on whether the container ID is odd or even. The sections below on community fault host pairs and individual fault host pairs describe in detail how the host is picked for a firewall or load balancer.

BMC Network Automation models two different types of active-active fault tolerance, community fault tolerance and individual fault tolerance. See below for descriptions of both.

Note that attributes of the guest device involved can also be referenced from host templates in the pair using the following syntax:

${container.pair.guestDevice.name}
${container.pair.guestDevice.address}
${container.pair.guestDevice.user}
${container.pair.guestDevice.password}

Community fault host pairs

One type of active-active fault tolerance that BMC Network Automation models is community fault tolerance. Refer to the pod community fault host pairs section above for details about this style of fault tolerance.

A community fault host pair knows which fault community (1 or 2) its guest context belongs to. The selection of which community to join when the guest context is created is currently made based on the container ID: if the container ID is even, community 1 is chosen; if it is odd, community 2 is chosen.

The community that the guest context belongs to can be referenced from templates used to configure it by using the following syntax:

${container.communityFaultHostPair.faultId}

The value will be 1 if it belongs to community 1, and 2 if it belongs to community 2.

If a community fault host pair has a container level management VLAN associated with it, it must be specified in the blueprint. If no container level management VLAN is specified, then two pod level management VLANs must have been defined. The management VLAN associated with the community fault host pair can be referenced from templates used to configure the guest context using the following syntax:

${container.communityFaultHostPair.managementVlan}

If a container level management VLAN is being used, then the ID of that VLAN will be used. If two pod level management VLANs are being used, then the ID of one of those two VLANs will be used. Which pod level VLAN to use will be a function of which community the guest context belongs to, in order to guarantee that a given management VLAN is always connected to only one community.
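
For example, a template merged to the admin context might use these parameters when creating the guest context (the commands are illustrative FWSM-style syntax only):

! Illustrative FWSM-style commands for creating the guest context
context ${container.name}
 allocate-interface Vlan${container.communityFaultHostPair.managementVlan}
 join-failover-group ${container.communityFaultHostPair.faultId}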

Individual fault host pairs

Another type of active-active fault tolerance that BMC Network Automation models is individual fault tolerance. Refer to the pod individual fault host pairs section above for details about this style of fault tolerance.

An individual fault host pair knows which fault group its guest context belongs to. Its fault group ID is acquired from a pool of IDs maintained at the pod level. The ID of the fault group that the guest context belongs to can be referenced from templates used to configure it by using the following syntax:

${container.individualFaultHostPair.faultId}

An individual fault host pair also knows the priority value of the guest context residing on the host where it was created (that is, the host with the active admin context). This will correspond to the active priority value defined at the pod level if you chose to activate the context on that host, or it will correspond to the standby priority value defined at the pod level if you chose to activate the context on the other host. The priority value of the guest context on the host where it was created can be referenced from templates used to configure it by using the following syntax:

${container.individualFaultHostPair.guestLocalCreationPriority}
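
For example, a template merged to the ACE admin context might use these values when configuring fault tolerance for the new guest context (the commands are illustrative Cisco ACE-style syntax only):

! Illustrative ACE-style fault-tolerance configuration for the guest context
ft group ${container.individualFaultHostPair.faultId}
 peer 1
 priority ${container.individualFaultHostPair.guestLocalCreationPriority}
 associate-context ${container.name}
 inservice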

The choice of which host to activate the context on is made based on the container ID: if the container ID is even, the first host node defined in the pair is chosen; if it is odd, the second host node defined in the pair is chosen. If one host is down, whichever host is currently up and healthy is chosen.

Additionally, the priority value of the guest alternates based on the container ID: if the container ID is even, the guest is activated on the host chosen above; otherwise, the guest is activated on the peer host.

Back to top

Security elements in the container

The container contains the following types of security elements:

Virtual firewall nodes

Nodes which represent a guest context within an FW host are modeled as virtual firewall nodes. These nodes are created automatically by the FW host node when the container is created. A VFW node contains one or more interfaces, on which it manages inbound and outbound ACLs. For each interface, one specifies the list of internal network segments it directly secures traffic for (if any), and the list of interfaces on other firewalls it bridges traffic to (if any). For each ACL, BMC Network Automation maintains a sorted list of all the rules that have been added to it. For information about managing firewalls and firewall rules, see Managing firewalls for network containers in the BMC Cloud Lifecycle Management documentation.

Specifying discontiguous network masks in the Cisco ASA/FWSM firewall rules

Typically, when you specify a network as the endpoint in a firewall rule, it corresponds to a contiguous collection of addresses being matched. For these endpoints, the network mask is stored in a binary format that is a sequence of 1's followed by a sequence of 0's (also called contiguous network masks). For example, if you specify an endpoint with a network address of 1.1.1.0 and a network mask of 255.255.255.0, it corresponds to the following contiguous collection of addresses: { 1.1.1.0 - 1.1.1.255 }.

Starting with version 8.6.00.002, in the Cisco ASA/FWSM firewall rules, BMC Network Automation supports the ability to specify a network as the endpoint that corresponds to a discontiguous collection of addresses. For these endpoints, the network mask is stored in a binary format that is an arbitrary arrangement of 1's and 0's (also called discontiguous network masks). For example, if you specify an endpoint with a network address of 1.0.1.0 and a network mask of 255.0.255.0, it corresponds to the following discontiguous collection of addresses: { { 1.0.1.0 - 1.0.1.255 }, { 1.1.1.0 - 1.1.1.255 }, ... { 1.255.1.0 - 1.255.1.255 } }.

Note

  • BMC Network Automation does not support the ability to specify discontiguous network masks in rules for other firewall types.
  • Though discontiguous network masks were not supported in earlier versions, if you are using them in the Cisco ASA/FWSM firewall rules in versions 8.3.x or earlier, BMC recommends that you upgrade to version 8.6.00.002 in order for these firewall rules to be upgraded properly.

Back to top

Connectivity between network segments

containerVfwBlueprint contains the connectivity information of network segments. For example, the following containerVfwBlueprint snippet specifies connectivity information in terms of which network segments an interface services, with no intervening firewall between the specified VFW and the network segments.

<virtualGuestBlueprint
xsi:type="containerVfwBlueprint" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">       
.             
.
.
  <name>VFW</name>               
  <managedInterfaceBlueprints>              
    <managedInterfaceBlueprint>                               
      <description>outside interface</description>                               
      <inboundAclBlueprint>                               
       <enablePathUpdates>true</enablePathUpdates>                               
        <name>outside-in</name>                               
      </inboundAclBlueprint>                               
      <name>outside</name>                               
      <servicedSegmentNames>                               
        <servicedSegmentName>External Network</servicedSegmentName>                               
      </servicedSegmentNames>               
    </managedInterfaceBlueprint>               
    <managedInterfaceBlueprint>                               
      <description>inside interface</description>                               
      <name>inside</name>                               
      <servicedSegmentNames>                               
        <servicedSegmentName>Customer Network1</servicedSegmentName>                               
        <servicedSegmentName>Customer Network 2</servicedSegmentName>                               
        <servicedSegmentName>Customer Network3</servicedSegmentName>                             
      </servicedSegmentNames>               
    </managedInterfaceBlueprint>               
  </managedInterfaceBlueprints>           
.              
.
.
</virtualGuestBlueprint>

The following diagram shows the visual connectivity representation for the VFW:

A network path specifies Layer 3 or routing information that indicates whether traffic can be routed between two network segments. To specify that traffic can be routed between Customer Network 1 and External Network, the blueprint author needs to specify a network path blueprint as shown in the following code snippet:

<networkPathBlueprint>          
  <endpoint1Name>External Network</endpoint1Name>          
  <endpoint2Name>Customer Network 1</endpoint2Name>          
  <name>Path 1</name>          
  <servicedNodeNames>               
    <servicedNodeName>VFW</servicedNodeName>         
  </servicedNodeNames>      
</networkPathBlueprint>

If the network container contains two or more containerVfwBlueprints, and if traffic can flow from one network segment serviced by the first VFW to another network segment serviced by the second VFW, each VFW must have one interface that bridges to the other VFW. The blueprint author must specify this information in the containerVfwBlueprint as shown in the following code snippet:

<virtualGuestBlueprint xsi:type="containerVfwBlueprint" xmlns:xsi=
"http://www.w3.org/2001/XMLSchema-instance">    
.      
.
.              
  <name>VFW-1</name>           
  <managedInterfaceBlueprints>              
    <managedInterfaceBlueprint>                               
      <description>bridge interface</description>                              
        <bridgeInterfaceNames>                              
          <bridgeInterfaceName>VFW-2.bridge</bridgeInterfaceName>                               
      </bridgeInterfaceNames>                              
      <name>bridge</name>                  
.                             
.
.             
    </managedInterfaceBlueprint>              
  </managedInterfaceBlueprints>             
.             
.
.
</virtualGuestBlueprint>
<virtualGuestBlueprint xsi:type="containerVfwBlueprint" xmlns:xsi=
"http://www.w3.org/2001/XMLSchema-instance">             
.             
.
.              
  <name>VFW-2</name>             
  <managedInterfaceBlueprints>              
    <managedInterfaceBlueprint>                              
      <description>bridge interface</description>                             
      <bridgeInterfaceNames>                             
        <bridgeInterfaceName>VFW-1.bridge</bridgeInterfaceName>                              
      </bridgeInterfaceNames>                              
      <name>bridge</name>                             
.                              
.
.
          
    </managedInterfaceBlueprint>              
  </managedInterfaceBlueprints>            
.            
.
.

</virtualGuestBlueprint>

If the container blueprint does not include network path blueprints, BMC Network Automation auto-generates network paths based on the graph that is built by using the connectivity information in the blueprint. VFWs and network segments serviced by the network paths represent the vertices in the graph. The VFW interface represents the edge that connects the VFW vertex to the network segment that it services.

Recommendation

BMC recommends that you have a visual representation of your network container for reference when you troubleshoot path rule-related issues in BMC Network Automation or network path-related issues in BMC Cloud Lifecycle Management.

Back to top

Network paths

The network container blueprint author can specify network paths in the network container blueprints. Network containers are assigned network paths at container creation time.

Note

A path rule in BMC Network Automation is referred to as a network path in BMC Cloud Lifecycle Management. BMC Network Automation also has a notion of a network path, but that has no corresponding entity in BMC Cloud Lifecycle Management.

A network path identifies Layer 3 connectivity between two network segments (endpoints), and the sequence of service node (virtual firewall or virtual load balancer) hops present along the way. This information is used by BMC Network Automation to translate high-level security rules (called path rules) into low-level security rules (called firewall rules). This translation allows BMC Cloud Lifecycle Management to specify the security rules for a given service offering instance (SOI) at a high level (for example, “open up HTTP traffic between endpoints A and B”), leaving it to BMC Network Automation to translate that into ACL updates on all of the intervening firewall interfaces along the path involved.

If no network paths are defined in the container blueprint, BMC Network Automation automatically generates network paths as follows:

  • If there are no firewalls, BMC Network Automation generates network paths connecting all possible endpoints (network segments) when the container is provisioned.
  • If there are firewalls, BMC Network Automation generates network paths based on the connectivity of network segments to firewalls. The administrator specifies connectivity information when specifying the firewall guest blueprints, their managed interface blueprints, and the serviced segments of the interfaces.

This mechanism has been implemented as a convenience to support rapid prototyping. BMC Network Automation also generates this set of network paths when upgrading a container created in BMC Network Automation version 8.1.

Consider a network container with the network connectivity as shown in the following figure:

BMC Network Automation auto-generates the following network paths:

Network path | Source endpoint    | Service node   | Destination endpoint
NP-1         | External Network   | [VFW-1]        | Customer Network 1
NP-2         | External Network   | [VFW-1]        | Customer Network 2
NP-3         | External Network   | [VFW-1, VFW-2] | Customer Network 3
NP-4         | External Network   | [VFW-1, VFW-2] | Customer Network 4
NP-5         | Customer Network 1 | [VFW-1]        | Customer Network 2
NP-6         | Customer Network 1 | [VFW-1, VFW-2] | Customer Network 3
NP-7         | Customer Network 1 | [VFW-1, VFW-2] | Customer Network 4
NP-8         | Customer Network 2 | [VFW-1, VFW-2] | Customer Network 3
NP-9         | Customer Network 2 | [VFW-1, VFW-2] | Customer Network 4
NP-10        | Customer Network 3 | [VFW-2]        | Customer Network 4


Recommendation

Do not rely on this mechanism in real-world scenarios. BMC strongly recommends that the administrator specify network paths explicitly in the container blueprint, specifically for distributed firewalls like Virtual Security Gateway (VSG), instead of relying on BMC Network Automation to generate them.

If network paths are defined within the container blueprint, the absence of a network path connecting two endpoints together causes attempts to translate high-level security rules along such a path to fail.

Network paths are of the form <Endpoint1> - [service node]* - <Endpoint2>, where Endpoint1 and Endpoint2 are pod or container network segments. A network path may have 0 or more service nodes, where a service node is a virtual firewall or a VLB. The first service node in the sequence should have layer-2 connectivity to Endpoint1 and the last service node should have layer 2 connectivity to Endpoint2.

Network paths are created in the following sequence:

  1. Create network paths from network path blueprints in the container blueprint.
  2. If no network path blueprints are in the container blueprint, BMC Network Automation generates them as follows:
    1. Assumes routing between all interfaces of a virtual firewall and generates a connectivity graph with serviced segments of interfaces and virtual firewalls as the vertices in the graph
    2. Looks for paths between all pairs of network segments; all paths found become network paths in the container

      Note

      Network paths derived this way will have virtual firewalls as service nodes, but not virtual load balancers because BMC Network Automation does not model the connectivity of virtual load balancers to virtual firewall interfaces.

  3. If no network paths are found (that is, there is no virtual firewall in the network container), network paths are created between every pair of network segments in the network container; these network paths have no service nodes.
  4. A network path blueprint potentially has sub-paths called dependent network paths, which BMC Network Automation derives, as follows:
    A network path blueprint has dependent paths if it has a virtual load balancer. A network path has one of the following forms:

           1. EP1 - [ no service nodes ] - EP2
           2. EP1 - [ VLB ] - EP2
           3. EP1 - [ VLB - VFW+ ] - EP2
           4. EP1 - [ VFW+ - VLB ] - EP2
           5. EP1 - [ VLB - VFW+ - VLB ] - EP2
    

    EP1 and EP2 are endpoints. EP can be a VIP network segment or a NIC network segment.

  5. If a NIC network segment is attached to a VLB, BMC Network Automation creates:
    1. A dependent network path in which the NIC segment is replaced with its associated VIP network segment
    2. A dependent path between the NIC network segment and the associated VIP segment with the VLB as the service node
  6. If the VIP network segment is attached to a VLB, BMC Network Automation creates:
    1. A dependent network path in which the VIP segment is replaced with associated NIC network segments, one network path per associated NIC network segment
    2. A dependent path between the VIP network segment and the associated NIC network segments with the VLB as the service node, one network path per associated NIC network segment
  7. All dependent paths thus derived are added as network paths of the network container.

Identity network paths

Network containers have certain implicit network paths called identity network paths that are not displayed in the container details page. An identity network path is one in which Endpoint1 and Endpoint2 are identical and there are no service nodes. Every network segment in the network container has an identity network path.

Path rules

A path rule is a high-level rule (as opposed to a firewall rule, which is a low-level rule) in which you can specify the intention of the rule at the network container level (you specify a firewall rule for an individual ACL on an interface of a network container virtual firewall). A path rule can accommodate network segment names, ad hoc networks, or hosts as the source and destination, and BMC Network Automation internally translates a path rule into updates for one or more ACLs of virtual firewalls in the network container.

You can view the path rules that are associated with a network container by viewing the container details page for that network container in the BMC Network Automation user interface. Open the container details page for a network container by navigating to the Containers page (Network > Virtual Data Center > Containers) and clicking the View icon for the network container. 

The path rules section of the container details page lists updated ACLs for the network container, and it also lists the constituent firewall rules and other details for the ACLs. After firewall rules have been added, they cannot be modified. When you add path rules, they are stored in BMC Network Automation with their associated ACL updates and a copy of the constituent low-level rules. If you remove or replace any of the constituent firewall rules using firewall rule APIs, the changes are reflected in the VFW node section of the container details page. You can use the two lists of firewall rules to compare the original set of rules to those that are currently in use.

BMC Network Automation supports adding and removing path rules in the SecurityService API. A request to add a path rule is valid if at least one network path supports it. In other words, if you want to add a path rule with A as the source and B the destination, the request succeeds if there is a network path (including dependent and identity network paths) where A belongs to Endpoint1 and B belongs to Endpoint2 or A belongs to Endpoint2 and B belongs to Endpoint1. If no such network path is found, the request fails. Similarly, a remove path rules operation succeeds if at least one supporting network path is found. For both, if multiple supporting network paths are found, ACL updates are done on all matching network paths.

The replaceFirewallRules() method within the SecurityService class performs the following validations:

  • Does not change the rule ID
  • Displays an appropriate message if BMC Network Automation does not recognize the replacement Rule ID
  • Adds a new firewall rule if the replacement Rule ID is not specified or if it is specified as 0
  • Prevents the addition of a firewall rule with a duplicate Rule ID for a nonmatching context host address
  • Prevents the modification or deletion of a firewall rule that is created as a NAT rule

The removeFirewallRules() method within the SecurityService class performs the following validations:

  • Prevents the deletion of a firewall rule that is associated with a path rule
  • Prevents the deletion of a firewall rule that is created as a NAT rule

Notes

  • An ad hoc firewall rule that is reused by a path rule is deleted when the associated path rule is deleted.
  • A path rule can only be added if there is a supporting network path. Instances where the path rule has both its source and destination in the same network and the network is secured by a non-access layer firewall do not need any supporting network path. Such path rules will display no value in the Supporting Network Paths column.

Back to top
