
Network container model

This topic explains the container model in detail.

A container is composed of the following types of elements:

  • name
  • ID
  • type (optional)
  • acquired resources
  • zones
  • nodes
  • pairs
  • segments
  • network paths

A container also maintains a reference to the name of the blueprint from which it was created. A container name must be unique among all containers. The ID is a number generated by BMC Network Automation that is also unique among all containers. The type is a string value used purely for self-documenting purposes. You can modify or delete the blueprint used to create a container without affecting containers that were created from it earlier.

The states of certain components within a container can be toggled after the container is provisioned, which causes resources associated with those components to be acquired and released as needed. For more information, see Dynamic network containers.

Simple attributes of the container can be referenced from templates used to configure it by using one of the following substitution parameters:

${container.name}
${container.id}
${container.type}
${container.blueprint}
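
For example, a merge template for a Cisco IOS-style device might embed these values in descriptive settings. The following is an illustrative sketch only, not a template shipped with the product:

! Illustrative template fragment; commands assume an IOS-like device
snmp-server location Container ${container.name} (ID ${container.id}, type ${container.type})
snmp-server contact Provisioned from blueprint ${container.blueprint}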

Resources

Resources owned by the container can be categorized as resource collections and resource elements. Resource elements can be acquired either from a collection owned by the container, or a collection owned by the pod.

Resource collections

A resource collection is an ordered set of resource elements. At the container level this refers to address spaces, whose elements are container level address pools.

Address spaces

A container blueprint can have two types of container address spaces:

  • Infrastructure-oriented address space, which contains address pools that are used exclusively for infrastructure addresses.
    • It is a private address space, which means that acquired address pools have the container name as their scope.
    • By default, a container administrator is not required to override the values of address space and pool size.
  • Provision-oriented address space, which contains address pools that must be used for provisioned addresses (VM NICs, VIPs, or SNAT blocks) but can also be used for infrastructure addresses as needed.
    • It can be a public address space (unique within an address provider) or a private address space (unique within each container).
    • By default, a container administrator is allowed to override the values of address space and pool size.

See Addressing schemes for more information.

Resource elements

The resource elements within a container include VLANs, address pools, and integers. Each resource element has a name that is unique among other resource elements of the same type within the container.

VLANs

A container acquires VLANs from VLAN pools in the pod. Each acquired VLAN has an ID and a name. The name, and the VLAN pool from which to acquire the VLAN, come from the container blueprint.

The ID of VLANs acquired by a container can be referenced by templates used to configure the container using the following syntax:

${container.vlans[<vlan-name>]}

The VLANs for a given container are acquired from VLAN pools in the pod on a first come first served basis, according to the order they are defined within the container blueprint. A VLAN acquired from a VLAN pool is guaranteed to be the lowest available VLAN within that pool.
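
For example, an illustrative template fragment for an IOS-like switch might define the acquired VLAN on the device; the VLAN name data is an assumed blueprint name:

! Illustrative sketch; "data" is a hypothetical VLAN name from the blueprint
vlan ${container.vlans[data]}
 name container-${container.id}-data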

Address pools

A container acquires address pools from address ranges in the pod. A container address pool has an address and a mask, an optional gateway address, and an ID that refers to the corresponding pool within IPAM. The container blueprint specifies which address range to acquire the pool from. The gateway address associated with the pool is populated when an address is acquired from it that is flagged as a gateway address.

Attributes of a container address pool can be referenced by templates used to configure the container using the following substitution parameter:

${container.addressPools[<address-pool-name>].<attribute>}

where <attribute> can be gatewayAddress, broadcastAddress, or subnetMask.

The address pools within a container are acquired from the address ranges in the pod on a first come first served basis, according to the order they are defined within the container blueprint. An address pool acquired from an address range is guaranteed to be the lowest available pool within that range.
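
For example, an illustrative template fragment for an IOS-like device might point a default route at the gateway of an acquired pool; the pool name data is an assumed blueprint name:

! Illustrative sketch; "data" is a hypothetical address pool name
ip route 0.0.0.0 0.0.0.0 ${container.addressPools[data].gatewayAddress}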

Addresses

A container acquires addresses from address pools that are either owned by the container or by the underlying pod. An acquired address has a dotted decimal value, a flag indicating whether or not it is a virtual IP, and a flag indicating whether or not it is a gateway address. The container blueprint specifies which container or pod level address pool to acquire the address from, along with the flag values.

The dotted decimal value of an acquired address can be referenced by templates used to configure the container using the following syntax:

${container.addresses[<address-name>]}

Templates can also refer to the subnet mask of the address pool the address came from using the shorthand or CIDR format syntax

${container.addresses[<address-name>].subnetMask}
${container.addresses[<address-name>].subnetMask.CIDR}

The addresses within a container are acquired from address pools on a first come first served basis by default, according to the order they are defined within the container blueprint. Each address acquired is the lowest address available in the pool. Alternatively, an explicit pool position value can be specified for an address in the blueprint. The value specifies a particular position within the pool from which the address must come. If, for instance, you wanted to require that an address receive the .1 value from its address pool, you would specify a pool position value of 1 in the blueprint.
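
For example, an illustrative template fragment for an IOS-like device might assign an acquired address to a VLAN interface; the address name dataSvi and VLAN name data are assumed blueprint names:

! Illustrative sketch; "dataSvi" and "data" are hypothetical blueprint names
interface Vlan${container.vlans[data]}
 ip address ${container.addresses[dataSvi]} ${container.addresses[dataSvi].subnetMask}
 no shutdown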

Integers

A container acquires integers from integer pools in the pod. Each acquired integer has an integer value and a name. The name, and the integer pool from which to acquire the integer, both come from the container blueprint.

The value of integers acquired by a container can be referenced by templates used to configure the container using the following syntax:

${container.integers[<integer-name>]}

An integer acquired from an integer pool will be the lowest available integer within that pool.
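
For example, an acquired integer might be used to build a unique route distinguisher in an IOS-like VRF definition; the integer name rdIndex and the autonomous system number 65001 are assumed for illustration:

! Illustrative sketch; "rdIndex" is a hypothetical integer name
ip vrf container-${container.id}
 rd 65001:${container.integers[rdIndex]}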

Network segments

A network segment is an entity which encapsulates a subnet that can be used as an endpoint in a firewall rule. This could either be an external network segment (for example, customer network segment), where the VLAN and subnet exist outside the system being managed, or an internal network segment (for example, NIC or VIP network segment), where the VLAN and subnet exist inside the system being managed. Network segments serve as endpoints in the network paths defined in a container.

A network segment has a name, but can also be annotated with a label (which for historical reasons is called a networkName attribute) and a tag value. A BMC Network Automation network segment corresponds to what BMC Cloud Lifecycle Management calls a network. The label and tag values from a BMC Network Automation network segment are used to initialize the corresponding values in the corresponding BMC Cloud Lifecycle Management network.

External network segments

An external network segment is a network segment where the clients that communicate with servers inside the container reside. This could be the internet (in which case the subnet will have a network address and mask of 0) or a corporate network.

Note

A subnet involved in an external network segment is not one acquired by BMC Network Automation from an IP Address Management (IPAM) system.

Internal network segments

An internal network segment is a network segment encapsulating a VLAN and address pool owned by the container, used either for connecting server NICs or load balancer VIPs.

NIC network segments

A NIC network segment is an internal network segment where server NICs can be connected. In the case of a container level NIC network segment, it encapsulates a container level VLAN (for example, data VLAN used exclusively by the container), and a container level address pool (for example, data subnet used exclusively by the container).

VIP network segments

A VIP network segment is an internal network segment where load balancer VIPs can be connected.

Network paths

A network path identifies layer 3 connectivity between two network segments (endpoints), and the sequence of service node (virtual firewall or virtual load balancer) hops present along the way. This information is used by BMC Network Automation to translate high level security rules (called path rules) into low level security rules (called firewall rules). This translation allows BMC Cloud Lifecycle Management to specify the security rules for a given service offering instance (SOI) at a high level (for example, “open up HTTP traffic between endpoints A and B”), leaving it to BMC Network Automation to translate that into ACL updates on all of the intervening firewall interfaces along the path involved.

If no network paths are defined in the container blueprint, BMC Network Automation will automatically generate network paths connecting all possible endpoints together when the container is provisioned. If network paths are defined within the container blueprint, then the absence of a network path connecting two endpoints together will cause attempts to translate high level security rules along such a path to fail.

Zones

A zone groups together one or more internal (NIC/VIP) network segments. This is a mechanism purely for the convenience of a container designer who wants to think of related segments as being organized into a group. Typically, though not necessarily, the segments within a zone are related by the sharing of common security characteristics (that is, they are all protected by the same firewall interface). Network segments within a zone must be uniquely named; however, segments in different zones can share the same name.

For example, a container could have two zones, named production and pre-production, with each zone encapsulating NIC segments named web and db.

Use of zones is optional. The attributes of a zone can be referenced from templates used to configure the container using the following substitution parameter:

${zone.name}
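
For example, an illustrative template fragment might use the zone name to build self-documenting object names on an ASA-like firewall:

! Illustrative sketch; the object-group name is derived from the zone name
object-group network ${zone.name}-servers
 description Servers in zone ${zone.name} of container ${container.name}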

Nodes

Nodes encapsulate information about the network devices used within the container. A node has a name, role, and a reference to a device. A node also has a list of actions defined for how to configure and unconfigure its device, and it specifies the number of VRFs that will be configured on it. A node also has a list of acquired addresses that it owns. A node can also optionally define NAT translations that it is capable of performing for NIC/VIP addresses acquired from the container. See Network address translation for more information.

A given container node will be hosted on a particular pod node, based on the nodes having matching roles. If there is more than one pod node with a matching role, the one that has hosted the fewest container nodes will be used, in order to balance the workload.

The container node name uniquely identifies it among all nodes within the container. The node role is a string which identifies the part which the node plays within the container. Refer to the pod node section for further discussion about node roles.

There are three types of actions supported for configuring and unconfiguring a node. The most common type is a merge action, which specifies a template to merge to a device's running configuration. The second type is a custom action, which allows miscellaneous communication with a device. The third type is an external script action, which allows execution of an external script to perform a miscellaneous function.

For merge actions, templates can have embedded references to the ${runtime.*}, ${pod.*}, and ${container.*} substitution parameters. These substitution parameters are resolved before the template is merged, using runtime parameters passed in by the cloud admin and attributes of the pod and container instance in question. Normally a single template will be associated with a merge action, but multiple templates can be associated as well. When this is the case, the choice of which template to use among them will alternate from one container instance to the next, allowing you to alternate the way in which containers are configured. This mechanism is called balanced templates.
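
For instance, a minimal merge template might reference runtime parameters supplied by the cloud administrator; the parameter names ntpServer and syslogServer are hypothetical:

! Illustrative sketch; "ntpServer" and "syslogServer" are hypothetical runtime parameters
ntp server ${runtime.ntpServer}
logging host ${runtime.syslogServer}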

If multiple merge actions are defined for a given node, the templates involved are concatenated together and merged via a single action in the configure or unconfigure job. Doing this boosts performance by not having to log out and back in to the same device in consecutive actions.

Note that, given this new concatenation behavior, you must make sure that your templates do not contain unnecessary exit statements at the bottom that could interfere with this logic.

For custom and external script actions, a list of runtime parameters to pass to those actions can be defined in the blueprint. The parameter values can contain embedded references to ${pod.*} and ${container.*} values, which will be resolved before they are passed into the action. The list of runtime parameters defined in the blueprint is augmented by whatever runtime parameters were passed in by the cloud admin, before they are passed into the action.

The addresses acquired by a node are intended for addresses specific to that node (for example, an address for a VLAN interface within its device). Addresses acquired by the container are intended for addresses that are not specific to any particular node. The distinction between node level and container level addresses is purely a namespace convenience. Node level addresses only need to be named uniquely among other addresses within the same node.

The dotted decimal value of node level addresses can be referenced from templates using the syntax

${container.node.addresses[<address-name>]}

Templates can also refer to the subnet mask of the address pool the address came from using the shorthand or CIDR format syntax

${container.node.addresses[<address-name>].subnetMask}
${container.node.addresses[<address-name>].subnetMask.CIDR}
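
For example, an illustrative template fragment might configure a management loopback on the node's device; the address name mgmtLoopback is an assumed blueprint name:

! Illustrative sketch; "mgmtLoopback" is a hypothetical node-level address name
interface Loopback0
 ip address ${container.node.addresses[mgmtLoopback]} ${container.node.addresses[mgmtLoopback].subnetMask}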

Note that attributes of the device encapsulated by the node can also be referenced from templates using one of the following substitution parameters:

${container.node.device.name}
${container.node.device.address}
${container.node.device.user}
${container.node.device.password}

Also note that ${container.node.*} substitution parameters can only refer to attributes of the node encapsulating the device to which that template is merged. If the template needs to refer to attributes of a different node in the container, the following alternate syntax is supported:

${container.nodes[<role-name>].*}
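
For example, an illustrative fragment merged to one node might reference an address owned by a node with the role coreRouter; the role name, the address name uplink, and the 192.0.2.0/24 documentation prefix are all assumed:

! Illustrative sketch; "coreRouter" and "uplink" are hypothetical blueprint names
ip route 192.0.2.0 255.255.255.0 ${container.nodes[coreRouter].addresses[uplink]}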

Special nodes

Certain types of nodes within the container are special and require additional information to be captured for them. These include nodes that encapsulate access switches to which server NICs can be attached, nodes that encapsulate load balancer hosts on which VLBs will be created, and nodes that encapsulate firewall hosts on which VFWs will be created. These special nodes are represented as different node sub-types (sub-classes) within the model.

Access switch nodes

A container access switch node defines virtual port types, and the actions used to configure and unconfigure the node must create and delete those virtual port types within the device. The difference between virtual port types defined in a pod access switch node and virtual port types defined in a container access switch node is that the ones defined at the pod level are used for attaching VM NICs to pod level VLANs, while the ones defined at the container level are used for attaching VM NICs to container level VLANs.

Note that attributes of virtual port types defined in the container switch node can be referenced from templates using the following syntax:

${container.node.portTypes[<port-type-name>].name}
${container.node.portTypes[<port-type-name>].vlan}
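
For example, an illustrative fragment for an NX-OS-style virtual access switch (such as a Nexus 1000V) might create a port profile from a container level virtual port type; the port type name dataPortType is an assumed blueprint name:

! Illustrative sketch; "dataPortType" is a hypothetical virtual port type name
port-profile type vethernet ${container.node.portTypes[dataPortType].name}
 switchport mode access
 switchport access vlan ${container.node.portTypes[dataPortType].vlan}
 no shutdown
 state enabled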

Hypervisor switch nodes

Nodes to which virtual servers can be attached are modeled as hypervisor switch nodes. Within these nodes are defined virtual port types. Virtual port types are used by the hypervisor switch to create a virtual port for a new VM NIC being attached to it. See the pod access switch nodes section above for additional details regarding virtual port types.

Physical switch nodes

Nodes to which physical servers can be attached are modeled as physical switch nodes. The physical switch ports to which servers can be attached are specified at pod creation time. At server provisioning time, the switch port in question will be configured by BMC Network Automation to connect to the VLAN associated with the target network. The physical switch node in the container specifies a custom action to use to perform this action during server provisioning.

Load balancer host nodes

Nodes on which VLBs will be created are modeled as load balancer host nodes. In the case of the Cisco ACE device, an LB host node would encapsulate the device in BMC Network Automation which represents the admin context within the ACE, since that is the context used to create a new guest virtual context (VLB). These load balancers can be used to balance the load on servers in multiple zones.

When the container is constructed, the LB host node automatically creates a device in BMC Network Automation to represent the new VLB, and creates a new container node to encapsulate it. If, however, your environment is running BMC Network Automation version 8.3.00.001 or later, and a guest device has already been specified in the underlying pod host node, it will be reused instead of creating a new device.

The guest device created in BMC Network Automation is by default assigned a name which is a concatenation of the container name prefix and a guest name suffix specified in the LB host blueprint, as <container-name>-<guest-name>. The primary interface of the guest device will use the same values as the primary interface of the host device (which means its username and password will be reused). The secondary interface of the guest device will be unpopulated.

The backup of the new guest device is deferred until the rest of the container has been successfully provisioned. If there is a problem provisioning the container, the newly created guest device will be automatically deleted.

Additionally, the guest device can be shared between two container nodes. The blueprint author must code the node blueprints in such a way that they have the same guest name and role name. However, SNAT is not supported in such cases. See Container blueprint XML reference for a detailed discussion of the different blueprint attributes.

Note that attributes of the guest device encapsulated by the new node can also be referenced from host templates using the syntax:

${container.node.guestDevice.name}
${container.node.guestDevice.address}
${container.node.guestDevice.user}
${container.node.guestDevice.password}
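
For example, an illustrative host (admin context) template fragment on an ACE-like device might use the guest device name when creating the virtual context; the VLAN name clientData is assumed:

! Illustrative sketch; "clientData" is a hypothetical VLAN name
context ${container.node.guestDevice.name}
  allocate-interface vlan ${container.vlans[clientData]}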

Virtual load balancer nodes

Nodes which represent a guest context within an LB host are modeled as virtual load balancer nodes. These nodes are created automatically by the LB host node when the container is created. A VLB node knows which client-facing VLAN is connected to it, and it knows the types of LB pools that can be created on it. It also optionally maintains a pool of integers to use for SNAT block IDs, if any of its LB pool types make use of SNAT. The VLB node keeps track of all LB pools that are added to it, and each LB pool keeps track of each server NIC added to it. These virtual load balancers can be used to balance the load on servers in multiple zones.

An LB pool type is defined by specification of a VIP network segment, one or more NIC network segments it balances traffic for, and optionally a specification of the size of an SNAT block to use and the address pool to acquire the block from.

Refer to the Add LB Pool to VLB use case for details about the contents of the LB pools that we model. See Networking use cases.

Firewall host nodes

Nodes on which VFWs will be created are modeled as firewall host nodes. In the case of the Cisco FWSM (or ASA) device, a FW host node would encapsulate the device in BMC Network Automation which represents the admin context within the FWSM, since that is the context used to create a new guest virtual context (VFW). These firewalls can be used to service multiple zones.

When the container is constructed, the FW host node automatically creates a device in BMC Network Automation to represent the new VFW, and creates a new container node to encapsulate it. If, however, your environment is running BMC Network Automation version 8.3.00.001 or later, and a guest device has already been specified in the underlying pod host node, it will be reused instead of creating a new device.

The guest device created in BMC Network Automation is by default assigned a name which is a concatenation of the container name prefix and a guest name suffix specified in the FW host blueprint, as <container-name>-<guest-name>. The primary interface of the guest device will use the same values as the primary interface of the host device (which means its username and password will be reused). The secondary interface of the guest device will be unpopulated.

The backup of the new guest device is deferred until the rest of the container has been successfully provisioned. If there is a problem provisioning the container, the newly created guest device will be automatically deleted.

Additionally, the guest device can be shared between two container nodes. The blueprint author must code the node blueprints in such a way that they have the same guest name and role name. Refer to the Container Blueprint XML format section for a detailed discussion of the different blueprint attributes.

Note that attributes of the guest device encapsulated by the new node can also be referenced from host templates by using the following syntax:

${container.node.guestDevice.name}
${container.node.guestDevice.address}
${container.node.guestDevice.user}
${container.node.guestDevice.password}

Virtual firewall nodes

Nodes which represent a guest context within an FW host are modeled as virtual firewall nodes. These nodes are created automatically by the FW host node when the container is created. A VFW node contains one or more interfaces, on which it manages inbound and outbound ACLs. For each interface, one specifies the list of internal network segments it directly secures traffic for (if any), and the list of interfaces on other firewalls it bridges traffic to (if any). For each ACL we maintain a sorted list of all the rules that have been added to it.

Refer to the Replace All Rules in VFW ACL use case for details about the contents of the rules that we model in the VFW. See also Multiple ACLs on virtual firewalls.

Pairs

Nodes within a container which are intended to be configured for processing redundancy, such as HSRP or fault tolerance, are modeled as pairs. If one node's device goes down, processing of packets will be assumed by the other node's device, allowing traffic to continue to flow for the container without interruption. A pair has a name and references to two nodes. A pair can also have its own pair level acquired addresses, and its own actions for configuring and unconfiguring the nodes within the pair. The pair name must be unique among all pairs in the container.

The addresses acquired by a pair are intended for addresses specific to that pair (for example, a virtual address for HSRP). Addresses acquired by the container are intended for addresses that are not specific to any particular pair. The distinction between pair level and container level addresses is purely a namespace convenience. Pair level addresses only need to be named uniquely among other addresses within the same pair.

The dotted decimal value of pair level addresses can be referenced from templates using the syntax

${container.pair.addresses[<address-name>]}

Templates can also refer to the subnet mask of the address pool the address came from using the shorthand or CIDR format syntax

${container.pair.addresses[<address-name>].subnetMask}
${container.pair.addresses[<address-name>].subnetMask.CIDR}
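
For example, an illustrative fragment might use a pair level address as the shared HSRP virtual IP on both nodes of the pair; the address name dataHsrpVip, the VLAN name data, and the group number 10 are assumed:

! Illustrative sketch; "dataHsrpVip" and "data" are hypothetical blueprint names
interface Vlan${container.vlans[data]}
 standby 10 ip ${container.pair.addresses[dataHsrpVip]}
 standby 10 preempt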

Note that ${container.pair.*} substitution parameters can only refer to attributes of the pair encapsulating the device which that template is merged to. If the template needs to refer to attributes of a different pair in the container, the following alternate syntax is supported:

${container.pairs[<pair-name>].*}

Special pairs

Certain types of pairs within the container are special and require additional information to be captured for them. These include pairs that encapsulate active-active fault tolerant host nodes. These special pairs are represented as different pair sub-types (sub-classes) within the model. Refer to the pod special pairs section above for additional details about active-active fault tolerance.

A container fault host pair defines three actions used to configure and unconfigure the guest device on the pair. One action creates the guest context, a second action initializes the guest context, and a third action destroys the guest context. The action to create the guest context is executed on the host which has the currently active admin context. The action to initialize the guest context is executed on the host which we have specified as the active host for the new context. The action to delete the guest context is executed on the host which has the currently active admin context.

The two hosts are inspected via an inspect-fault-host custom action prior to executing the action to create the context, and prior to executing the action to delete the context, so that BMC Network Automation knows which hosts are currently alive, and which one has the currently active admin context.

The choice of which host we want the guest context to become active on is currently made based on whether the container ID is odd or even. The sections below on community fault host pairs and individual fault host pairs describe in detail how the host is picked for a firewall or load balancer.

We model two different types of active-active fault tolerance, community fault tolerance and individual fault tolerance. See below for descriptions of both.

Note that attributes of the guest device involved can also be referenced from host templates in the pair using the syntax

${container.pair.guestDevice.name}
${container.pair.guestDevice.address}
${container.pair.guestDevice.user}
${container.pair.guestDevice.password}

Community fault host pairs

One type of active-active fault tolerance we model is community fault tolerance. Refer to the pod community fault host pairs section above for details about this style of fault tolerance.

A community fault host pair knows which fault community (1 or 2) its guest context belongs to. The selection of which community to join when the guest context is created is currently made based on the container ID: if the container ID is even, community 1 is chosen; if it is odd, community 2 is chosen.

The community that the guest context belongs to can be referenced from templates used to configure it by using the following syntax:

${container.communityFaultHostPair.faultId}

The value will be 1 if the guest context belongs to community 1, and 2 if it belongs to community 2.
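
For example, on an FWSM-style or ASA-style host, an illustrative system-context fragment might join the guest context to its failover group using this value:

! Illustrative sketch; joins the guest context to failover group 1 or 2
context ${container.pair.guestDevice.name}
  join-failover-group ${container.communityFaultHostPair.faultId}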

If a community fault host pair has a container level management VLAN associated with it, that VLAN must be specified in the blueprint. If no container level management VLAN is specified, then two pod level management VLANs must have been defined. The management VLAN associated with the community fault host pair can be referenced from templates used to configure the guest context using the following syntax:

${container.communityFaultHostPair.managementVlan}

If a container level management VLAN is being used, then the ID of that VLAN will be used. If two pod level management VLANs are being used, then the ID of one of those two VLANs will be used. Which pod level VLAN to use will be a function of which community the guest context belongs to, in order to guarantee that a given management VLAN is always connected to only one community.

Individual fault host pairs

Another type of active-active fault tolerance we model is individual fault tolerance. Refer to the pod individual fault host pairs section above for details about this style of fault tolerance.

An individual fault host pair knows which fault group its guest context belongs to. Its fault group ID is acquired from a pool of IDs maintained at the pod level. The ID of the fault group that the guest context belongs to can be referenced from templates used to configure it by using the following syntax:

${container.individualFaultHostPair.faultId}

An individual fault host pair also knows the priority value of the guest context residing on the host where it was created (that is, the host with the active admin context). This will correspond to the active priority value defined at the pod level if we chose to activate the context on that host, or it will correspond to the standby priority value defined at the pod level if we chose to activate the context on the other host. The priority value of the guest context on the host where it was created can be referenced from templates used to configure it by using the following syntax:

${container.individualFaultHostPair.guestLocalCreationPriority}
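
For example, on an ACE-like load balancer host, an illustrative fragment might configure the fault-tolerance group for the guest context using these values:

! Illustrative sketch; the FT group uses the acquired fault ID and priority
ft group ${container.individualFaultHostPair.faultId}
  associate-context ${container.pair.guestDevice.name}
  priority ${container.individualFaultHostPair.guestLocalCreationPriority}
  inservice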

The choice of which host to activate the context on is based on the container ID: if the container ID is even, we choose the first host node defined in the pair; if it is odd, we choose the second host node defined in the pair.

If one host is down, we choose whichever host is currently up and healthy.

Additionally, we alternate the priority value of the guest based on the container ID: if the container ID is even, we activate the guest on the host picked above; otherwise, we activate the guest on the peer host.

Container blueprint

The structure of container blueprints mirrors the structure of containers, but a blueprint represents a reusable, pod-independent formulation of the information that is present within a container.
