
Pod model

This topic describes the pod model that is specified in a pod blueprint XML file. The structure of pod blueprints mirrors the structure of pods, but it represents a reusable formulation of the information that is present within a pod, which is device independent. Much of the information is in the form of optional defaults that are presented in the user interface when creating a pod from that blueprint.

A pod is composed of the following types of elements: name, ID, site ID, parameters, resource collections, nodes, pairs, and NIC segments. A pod also maintains a reference to the name of the blueprint that created it. A pod name is unique among all pods. The ID is also unique among all pods and is a number generated by BMC Network Automation. The site ID must refer to a BMC_PhysicalLocation object in the CMDB (or it can be null if the site is unknown). You can modify or delete the blueprint used to create a pod without affecting the pod.

Simple attributes of the pod can be referenced from templates used to configure containers on it by using the following syntax:

${pod.*}

where the asterisk can be "name" or "id".
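
The substitution itself is performed by BMC Network Automation when container templates are merged to devices. As a rough illustration only (illustrative Python, not product code, and the device command shown is hypothetical), the following sketch shows how a ${pod.*} reference in a template line could be resolved against the pod's attributes:

import re

# Hypothetical pod attributes; in the product these come from the pod itself.
pod_attrs = {"name": "DallasPod", "id": "42"}

def resolve(template_line, attrs):
    # Replace each ${pod.<attr>} token with the corresponding pod attribute.
    return re.sub(r"\$\{pod\.(\w+)\}", lambda m: attrs[m.group(1)], template_line)

print(resolve("snmp-server location ${pod.name}", pod_attrs))
# -> snmp-server location DallasPod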

Parameters

Parameters are miscellaneous user-defined attributes of the pod and can be either simple parameters or balanced parameters. They provide a mechanism for capturing additional pod level information that can later be referenced symbolically from templates used to configure containers on the pod.

A simple parameter has a name and associated string value. A simple parameter value can be referenced in a template used to configure a container as follows:

${pod.params[<param-name>]}

A balanced parameter has a name and a set of associated string values. A balanced parameter value can be referenced in a template used to configure a container as follows:

${pod.balancedParams[<balanced-param-name>]}

Such a reference resolves to one particular value among the set of values associated with the balanced parameter. The particular value used alternates from one container instance to the next, guaranteeing that use of the values is balanced across the containers created on the pod. For example, you might use a balanced parameter to alternate the HSRP priority values used when configuring successive container instances, to balance traffic between redundant devices in the pod.
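
The selection logic is internal to BMC Network Automation; the following sketch (illustrative Python, not product code) only demonstrates the round-robin behavior described above, assuming a hypothetical balanced parameter named hsrpPriority with two values:

from itertools import cycle

# Hypothetical balanced parameter defined on the pod: two HSRP priority values.
balanced_params = {"hsrpPriority": ["100", "110"]}

# Alternate through the values as successive containers are created, so that
# use of the values stays balanced across the containers on the pod.
chooser = cycle(balanced_params["hsrpPriority"])

for container in ["container-1", "container-2", "container-3", "container-4"]:
    value = next(chooser)
    print(container, "resolves ${pod.balancedParams[hsrpPriority]} to", value)
# container-1 -> 100, container-2 -> 110, container-3 -> 100, container-4 -> 110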


Resources

Resources owned by the pod can be categorized as resource collections and resource elements. Resource collections contain sets of elements which can be acquired either by the pod itself, in order to be shared by all containers in the pod, or by individual containers for their exclusive use.

Resource collections

The resource collections within a pod are ordered sets of resource elements: a VLAN pool is a collection of VLAN elements, a pod address pool is a collection of address elements, and an address range is a collection of address pool elements. Resource collections also include user-defined integer pools. Each resource collection has a name that is unique among other resource collections of the same type within the pod.

VLAN pools

A VLAN pool defines inclusive start and end boundaries for a pool of VLAN IDs that can be acquired, either by the pod itself or by containers created on the pod, on a first-come, first-served basis. For a VLAN acquired by the pod, the user specifies the ID of the VLAN in the pod creation wizard. For a VLAN acquired by a container, the ID is the next available ID within the pool, which is the lowest available ID. Specific VLANs can be marked as excluded from use within a pool (for example, to represent VLANs already in use in the cloud provider network which should not be used for VDCs). VLAN pools cannot be shared across pods.

The ID of a VLAN acquired by the pod (for example, for a pod level management VLAN) can be referenced from templates used to configure containers as follows:

${pod.vlans[<vlan-name>]}

The following diagram shows an example of a pod with two VLAN pools:

Pod with two VLAN pools

In this diagram, there is one pool for infrastructure VLANs spanning 100-199, and another for server VLANs spanning 200-349. When the pod was constructed, it acquired infrastructure VLAN 100 for its own use (for example, for a pod level management VLAN), and infrastructure VLAN 101 was marked as excluded entirely. If you create two containers on this pod, both from the same blueprint requiring 2 infrastructure VLANs and 3 server VLANs, then the first container will acquire infrastructure VLANs 102 and 103 and server VLANs 200 through 202, and the second will acquire infrastructure VLANs 104 and 105 and server VLANs 203 through 205. You could add 49 such containers to this pod before exhausting the infrastructure VLAN pool (the server VLAN pool would allow 50).
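
The allocation in this example follows a simple lowest-available rule. The following sketch (illustrative Python, not how BMC Network Automation is implemented) reproduces the first-come, first-served behavior described above for the two containers:

def make_pool(start, end, excluded=()):
    # A pool is modeled here as the set of IDs still available for acquisition.
    return set(range(start, end + 1)) - set(excluded)

def acquire(pool, count):
    # Hand out the lowest available IDs, first come, first served.
    ids = sorted(pool)[:count]
    pool.difference_update(ids)
    return ids

infra = make_pool(100, 199, excluded=[101])   # infrastructure pool, 101 excluded
server = make_pool(200, 349)                  # server pool

infra.discard(100)                            # the pod itself acquired VLAN 100 at creation

print(acquire(infra, 2), acquire(server, 3))  # container 1: [102, 103] [200, 201, 202]
print(acquire(infra, 2), acquire(server, 3))  # container 2: [104, 105] [203, 204, 205]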


Pod address pools

A pod address pool defines a subnet acquired from IPAM, which can be shared by different containers in the pod, with each container capable of acquiring its own exclusive addresses from IPAM within that pool. When an address is acquired from IPAM, the address returned will be the lowest available address in that pool. A pod address pool has an address and a mask, an optional network name, an optional gateway address, and an ID which refers to the corresponding subnet within IPAM. It also has a flag indicating whether it is a public or a private pool.

If a pod address pool is intended to be used to acquire addresses for server NICs attached to containers, it must have a populated network name and gateway address. When a server NIC is attached to a container, BMC Cloud Lifecycle Management tells BMC Network Automation the name of a network to attach it to, and BMC Network Automation looks for an address pool with a matching network name to use in acquiring its address. It passes back the acquired address to BMC Cloud Lifecycle Management, along with the gateway address and mask of the pool that was used.

Public pools are unique within IPAM. When a pod acquires a public pool from IPAM it reserves the pool so that no other pod (or other IPAM client) may use it. When a pod acquires a private pool from IPAM it receives its own copy of the pool for it to use. Private pools can overlap each other, as long as they are acquired by different pods, since they are separate copies. Public pools can never overlap each other.

A pool, whether public or private, can either be used exclusively by a given pod, or it can be shared across multiple pods if it is flagged as shareable when the pod is created. For shared pools, each container in each pod acquires its own exclusive addresses from the same pool.
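
As a conceptual illustration of the server NIC flow described above (illustrative Python only; the real addresses are acquired from and tracked by IPAM), the following sketch selects a pod address pool by its network name and hands back the lowest available host address together with the pool's gateway and mask:

import ipaddress

# Hypothetical pod address pools keyed by their network names.
pools = {
    "data": {
        "subnet": ipaddress.ip_network("12.0.0.0/24"),
        "gateway": "12.0.0.1",
        "in_use": {"12.0.0.1"},   # the gateway address is already allocated
    },
}

def acquire_address(network_name):
    # Find the pool whose network name matches, then take the lowest free host address.
    pool = pools[network_name]
    for host in pool["subnet"].hosts():
        if str(host) not in pool["in_use"]:
            pool["in_use"].add(str(host))
            return str(host), pool["gateway"], str(pool["subnet"].netmask)
    raise RuntimeError("pool exhausted")

# A server NIC attached to the "data" network would receive 12.0.0.2 here,
# along with the gateway 12.0.0.1 and mask 255.255.255.0.
print(acquire_address("data"))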

Attributes of pod address pools can be referenced from templates used to configure containers on the pod as follows:

${pod.addressPools[<address-pool-name>].*}

where the asterisk can either be "gatewayAddress", "networkAddress", "broadcastAddress", or "subnetMask".

When a pod is deleted, its pools are released back to IPAM, unless there are other pods still sharing them.


Address ranges

An address range defines a space divided up into equally sized address pools. An address range has an address and a mask that defines the space, plus another mask that defines the size of each pool within the range. It also has a flag indicating whether the range consists of public or private pools. When the pod is created, the address pools within each range are acquired from IPAM.

Containers acquire pools from the ranges in the pod on a first-come, first-served basis. A container address pool has an address and a mask, an optional network name and gateway address (if it is intended to be used for server NICs), and the ID of the corresponding pool in IPAM. These address pools are owned exclusively by the container, which acquires addresses from them via IPAM. When an address is acquired from IPAM, the address returned will be the lowest available address in that pool. When the container is deleted, its address pools are returned back to the pod, and any addresses in use within those pools are released back to IPAM.

When the pod is deleted, all the pools in its address ranges are returned back to IPAM. Address ranges cannot be shared across pods. Note that the pod address pools mentioned in the section above are not part of any address range. Address ranges are only used for container address pools.

The following diagram shows an example of a pod with two address ranges:

Pod with two address ranges


In this diagram, the pod has defined two private address ranges (shown in gray), one for infrastructure address pools and another for server address pools. The address pools here are all container address pools. No pod address pools are involved. The infrastructure range spans the 11.0.0.0/29 space, and is divided up into two /30 address pools. The server range spans the 12.0.0.0/23 space, and is divided up into two /24 address pools. When the pod is created, private pools are created in IPAM on behalf of both ranges. The creation of the infrastructure range causes the 11.0.0.0/30 and 11.0.0.4/30 pools to be created in IPAM, and the creation of the server range causes the 12.0.0.0/24 and 12.0.1.0/24 pools to be created in IPAM. These are given pool IDs 1 through 4 in IPAM.

Consider that you create containers on this pod, each from the same blueprint, which requires 1 infrastructure pool and 1 server pool (which will serve NICs attached to the "data" network). The blueprint further specifies that 2 addresses (named A1 and A2) need to be acquired from the infrastructure pool, and 1 address (named A3 and flagged as a gateway address) needs to be acquired from the server pool.

When you create the first container on the pod using this blueprint, it will acquire infrastructure pool 11.0.0.0/30 (IPAM pool ID 1) from the infrastructure range, and server pool 12.0.0.0/24 (IPAM pool ID 3) from the server range. It will then acquire address A1 from its infrastructure pool and receive 11.0.0.1 from IPAM, A2 from its infrastructure pool and receive 11.0.0.2 from IPAM, and A3 from its server pool and receive 12.0.0.1 from IPAM (which will be assigned as the gateway address for the server pool).

When you create the second container on the pod using this blueprint, it will acquire infrastructure pool 11.0.0.4/30 (IPAM pool ID 2) from the infrastructure range, and server pool 12.0.1.0/24 (IPAM pool ID 4) from the server range. It will then acquire address A1 from its infrastructure pool and receive 11.0.0.5 from IPAM, A2 from its infrastructure pool and receive 11.0.0.6 from IPAM, and A3 from its server pool and receive 12.0.1.1 from IPAM (which will be assigned as the gateway address for the server pool).

Note that you can only create 2 such containers on this pod before exhausting the address ranges, so it is a rather unrealistic example.
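
The arithmetic behind this example can be reproduced with Python's ipaddress module. The sketch below (illustrative only; the actual pools are created and tracked in IPAM) splits each range into its equally sized pools and shows the lowest available addresses the first container would receive:

import ipaddress

def split_range(range_cidr, pool_prefix):
    # Divide the range into equally sized address pools.
    return list(ipaddress.ip_network(range_cidr).subnets(new_prefix=pool_prefix))

infra_pools = split_range("11.0.0.0/29", 30)   # [11.0.0.0/30, 11.0.0.4/30]
server_pools = split_range("12.0.0.0/23", 24)  # [12.0.0.0/24, 12.0.1.0/24]

# First container: first pool from each range, lowest available addresses first.
infra_hosts = list(infra_pools[0].hosts())     # [11.0.0.1, 11.0.0.2]
server_hosts = list(server_pools[0].hosts())   # [12.0.0.1, ..., 12.0.0.254]

print(infra_hosts[0], infra_hosts[1])          # A1 = 11.0.0.1, A2 = 11.0.0.2
print(server_hosts[0])                         # A3 = 12.0.0.1 (the pool's gateway address)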


User-defined integer pools

The user can define a pool of integer values in the pod, from which containers can acquire the next available value. A user-defined integer pool is similar to a VLAN pool, but its boundaries are not constrained to 1-4096. It is intended to represent a miscellaneous pool of integer values. A typical use case would be a pool of reusable ID values for containers created on the pod, where each ID is guaranteed to be a unique number from the pool.

An integer pool defines inclusive start and end boundaries for a pool of integer values that can be acquired by containers created on the pod, on a first-come, first-served basis. The value acquired will be the lowest available value within the pool. Specific values can be marked as excluded from use within the pool (for example, to represent values already in use for some other purpose).

The value acquired by a container can be referenced from templates used to configure it as follows:

${container.integers[<integer-name>]}


Chaining resource collections

Chaining of resource collections is supported via a CLI mechanism, and involves adding a resource collection to a pod after it has been created, in order to scale the pod out as needed. The resource collection added must match the name of an existing collection. Containers provisioned on the pod acquire elements from a collection chain by attempting to acquire them from each link, in order, until a link with a free element is found.

For example, if the pod initially had a VLAN pool named MyVlanPool, with VLAN ID boundaries of 100-199, you could later use the CLI to add a second VLAN pool, also named MyVlanPool, with VLAN ID boundaries of 300-399. If a container is provisioned on the pod and needs to acquire a VLAN from MyVlanPool, it first attempts to acquire it from the 100-199 pool, but if none are available there, it will try to acquire it from the 300-399 pool. This is supported for all resource collections in the pod except for user-defined integer pools.
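
A minimal sketch of the chain-walking rule described above (illustrative Python, not product code), assuming the two VLAN pools named MyVlanPool from the example have been chained:

def acquire_from_chain(chain):
    # Try each link of the chain in order; take the lowest free ID from the
    # first link that still has one.
    for link in chain:
        if link:
            vlan_id = min(link)
            link.discard(vlan_id)
            return vlan_id
    raise RuntimeError("all links in the chain are exhausted")

# MyVlanPool chain: the original 100-199 pool plus the 300-399 pool added later via the CLI.
my_vlan_pool = [set(range(100, 200)), set(range(300, 400))]

my_vlan_pool[0].clear()                  # pretend the first link has been exhausted
print(acquire_from_chain(my_vlan_pool))  # falls through to the second link -> 300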

Resource elements

The resource elements within a pod include VLANs. Each resource element has a name that is unique among other resource elements of the same type within the pod.


VLANs

A pod acquires VLANs from VLAN pools in the pod. Each acquired VLAN has an ID and a name. The name, and the VLAN pool from which to acquire the VLAN, both come from the pod blueprint. The ID of VLANs acquired by a pod can be referenced by templates used to configure containers using the following syntax:

${pod.vlans[<vlan-name>]}

When creating the pod, you specify the desired ID of each pod level VLAN to acquire from the VLAN pool in question.

NIC network segments

A NIC network segment is an entity in the model which encapsulates a VLAN and an address pool to which server NICs can be connected. In the case of a pod level NIC network segment, it encapsulates a pod level VLAN (for example, a management VLAN shared by all containers) and an optional pod level address pool (for example, a management subnet shared by all containers).


Nodes

Nodes encapsulate information about the network devices present within the pod. A node has a name, role, and a reference to a device. A node can define a maximum for the number of VRFs the pod allows on the device. A node can also have its own node level parameters.

The node name uniquely identifies it among all nodes within the pod. The node role is a string that identifies the part the node plays within the pod. For instance, if the node encapsulates a device that serves as the access switch within the pod, you might assign it a role of "access".

The pod node role is matched against container node roles when a container is being created on a pod, in order to figure out how to align the logical container on the underlying physical pod. For instance, if a container blueprint defines a container node whose role is "access", when that blueprint is used to create a container on a pod, the container node with the "access" role will be hosted on the pod node which also has an "access" role. Hosting a container node on a pod node means that they both encapsulate the same underlying device, which means that the template defined in the container blueprint to configure the "access" container node will be merged to the device encapsulated by the "access" pod node.

Node names and node roles will frequently have the same value. They need only be different in cases where redundant nodes are present for reasons of scale. For instance, if you have a pod in which multiple firewall host nodes are present for scalability, they might be named "firewall host A" and "firewall host B", but they would each have the role "firewall host". When there are multiple pod nodes with the same role, the choice of which to use when hosting a new container node is based on which pod node is currently hosting the fewest container nodes, which spreads the hosting load in the pod evenly across the redundant hardware.
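
Conceptually, the hosting choice is a least-loaded selection among the pod nodes that carry the required role. The sketch below (illustrative Python with hypothetical data, not product code) shows that rule:

# Hypothetical pod nodes: name, role, and how many container nodes each currently hosts.
pod_nodes = [
    {"name": "firewall host A", "role": "firewall host", "hosted": 3},
    {"name": "firewall host B", "role": "firewall host", "hosted": 2},
]

def choose_host(role):
    # Among pod nodes with the matching role, pick the one currently hosting the
    # fewest container nodes, so the hosting load stays evenly spread.
    candidates = [n for n in pod_nodes if n["role"] == role]
    chosen = min(candidates, key=lambda n: n["hosted"])
    chosen["hosted"] += 1
    return chosen["name"]

print(choose_host("firewall host"))   # -> firewall host B (it was hosting fewer container nodes)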

Node level parameters are referenced from container templates using the following syntax:

${pod.node.params[<param-name>]}

or

${pod.node.balancedParams[<balanced-param-name>]}

Node level parameters should be used to represent attributes of a pod node that you expect to be referenced only in templates used to configure that particular node's device. Pod level parameters should be used to represent attributes of the overall pod, which you expect to be referenced in templates used to configure miscellaneous node devices. The distinction between node level and pod level parameters is purely a namespace convenience. Node level parameters only need to be named uniquely among other parameters within the same node.

Note that the devices in pod nodes are assumed to have already been configured appropriately before any attempt is made to add containers to the pod. BMC Network Automation does not configure any devices when a pod is created. Any pod level infrastructure, such as shared management VLANs, or attributes in a host that you want to reference in guests created on that host, must be configured appropriately by the admin who creates the pod.


Special nodes

Certain types of nodes within the pod are special and require additional information to be captured for them. These include nodes that encapsulate access switches to which server NICs can be attached, nodes that encapsulate load balancer hosts on which VLBs will be created, and nodes that encapsulate firewall hosts on which VFWs will be created. These special nodes are represented as different node sub-types (sub-classes) within the model.

Access switch nodes

This section describes the types of access switch nodes supported in the BMC Network Automation pod model.

Hypervisor switch nodes

Nodes to which virtual servers can be attached are modeled as hypervisor switch nodes. These nodes have a Hypervisor Context field which originally needed to be populated with the name of the hypervisor server (or cluster) that the switch controls. That field is no longer used, so the value entered for it does not matter. Virtual port types can be defined within these nodes. A hypervisor switch node can define a maximum for the number of virtual port types allowed, and a maximum for the number of virtual ports allowed.

Virtual port types are called port groups in VMware vSwitch devices, or port profiles in Cisco Nexus 1000V devices. A given virtual port type is used by the hypervisor switch to create a virtual port for a new VM NIC being attached to it. The virtual port type defines which NIC network segment the NIC will be connected to when a virtual port is created from it. Each virtual port type is also tagged with a flag indicating whether its network is a management network, so that BMC Cloud Lifecycle Management can easily differentiate between management and non-management networks.

Any such pod level virtual port types are assumed to have already been created on the underlying hypervisor switch by the administrator who created the pod. BMC Network Automation does not create them automatically. Attributes of virtual port types defined in the pod switch node can be referenced from the templates of containers using the following syntax:

${pod.node.portTypes[<port-type-name>].*}

where the asterisk can be "name" or "vlan".

Physical switch nodes

Nodes to which physical servers can be attached are modeled as physical switch nodes. The physical switch ports to which servers can be attached are specified at pod creation time. At server provisioning time, the switch port in question is configured by BMC Network Automation to connect to the VLAN associated with the target network.

Loadbalancer host nodes

Nodes on which VLBs will be created are modeled as load balancer host nodes. In the case of the Cisco ACE device, an LB host node would encapsulate the device in BMC Network Automation which represents the admin context within the ACE, since that is the context one must modify to create a new guest virtual context (VLB). An LB host node can define a maximum for the number of VLBs allowed on it. It can also define a maximum for the number of LB pools that can be created within a VLB, and a maximum for the number of entries that can be added to an LB pool.

For environments running BMC Network Automation version 8.3.00.001 and later, these nodes also allow you to optionally specify an existing VLB guest device that containers can share, rather than having each container create a separate VLB.

Firewall host nodes

Nodes on which VFWs will be created are modeled as firewall host nodes. In the case of the Cisco FWSM (or ASA) device, an FW host node would encapsulate the device in BMC Network Automation which represents the admin context within the FWSM, since that is the context one must modify to create a new guest virtual context (VFW). An FW host node can define a maximum for the number of VFWs allowed on it. It can also define a maximum for the number of rules that can be added to the ACL within the VFW.

For environments running BMC Network Automation version 8.3.00.001 and later, these nodes also allow you to optionally specify an existing VFW guest device that containers can share, rather than having each container create a separate VFW.


Pairs

Nodes within a pod that are intended to be configured for processing redundancy within a container (such as HSRP or fault tolerance) are modeled as pairs. If one node's device goes down, processing of packets is assumed by the other node's device, allowing traffic to continue to flow for containers on the pod without interruption. A pair has a name and references to two nodes. A pair can also have its own pair level parameters. The pair name must be unique among all pairs in the pod.

Pair-level parameters are referenced from container templates using the following syntax:

${pod.pair.params[<param-name>]}

or

${pod.pair.balancedParams[<balanced-param-name>]}

Pair-level parameters should be used to represent attributes of the pair that you expect to be referenced only in templates used to configure the particular devices in the pair. Pod level parameters should be used to represent attributes of the overall pod, which you expect to be referenced in templates used to configure miscellaneous node devices. The distinction between pair level and pod level parameters is purely a namespace convenience. Pair level parameters only need to be named uniquely among other parameters within the same pair.


Special pairs

Certain types of pairs within the pod are special and require additional information to be captured for them. These include pairs that encapsulate active-active fault tolerant host nodes. These special pairs are represented as different pair sub-types (sub-classes) within the model.

In fault tolerant host pairs, guest contexts (VLBs and VFWs) are created on both hosts in the pair, and flagged as active on one host (meaning the guest will process packets there) and standby on the other host (meaning the guest will not process packets there). Active-active fault tolerance is a style of fault tolerance in which guests can be flagged as active on either host. Half the guests will typically be active on one host and half will be active on the other host, so that the processing workload is balanced between the two hosts.

When a new guest context is created, BMC Network Automation must log into the host that contains the currently active admin context in order to execute the creation command. That host automatically propagates the command to the other host so that the guest context is created on both. Afterwards, BMC Network Automation must log into whichever host the guest context is to be active on, to finish initializing its configuration. Changes made to the active guest are automatically propagated to the standby guest by the underlying hosts.

Because BMC Network Automation potentially needs to talk to both hosts when creating and initializing new guests, active-active fault tolerance must be modeled as a pair of nodes. If active-standby fault tolerance is used, you can instead just use a virtual IP address to talk to whichever host is currently active, and therefore you do not need to model it as a pair of nodes in that case.

Although a guest context gets created on both hosts, only one guest device will be represented for it in BMC Network Automation, because BMC Network Automation talks to it once it is created using a virtual IP address that will always point to the active guest context. Any guest related resource maximums defined by host nodes within the pair should have identical values, since they will be applied against a single guest node in the container.

BMC Network Automation models two different types of active-active fault tolerance: community fault tolerance and individual fault tolerance, described below. BMC Network Automation assumes that the hosts involved in such pairs have already been configured by the admin who created the pod to enable active-active fault tolerance on them.


Community fault host pairs

One type of active-active fault tolerance that BMC Network Automation models is called community fault tolerance, where the guest context belongs to one of two communities (groups) of guest contexts. All guests within a community are active on the same host. If a guest encounters a fault while active on a given host, every guest in that community is deactivated on that host and activated on the other host. The Cisco FWSM and ASA devices use community fault tolerance. There is a link in the references section of this document to additional Cisco documentation on this style of fault tolerance.

When specifying community fault host pairs in a pod, if one is using a pod level management VLAN, there must actually be two such VLANs available for use by the pair. The reason is that there is a constraint in the FWSM devices that a given VLAN can only connect to guest contexts belonging to the same community. So there must be one management VLAN connected to community 1 and another management VLAN connected to community 2. The pod pair must specify the names of each of these management VLANs, so that when a guest context is created on the pair, it is connected to the management VLAN appropriate for it, depending on which community the guest belongs to.

Additionally, each community should have its own address pool for addressing. When specifying community fault host pairs, if pod level addressing is used, there should be two pod level pools specified: one for community 1 guests and another for community 2 guests. Refer to the section "Container Blueprint XML Format" for complete documentation on specifying community level address pools in blueprints.

For environments running BMC Network Automation version 8.3.00.001 and later, these pairs also allow you to optionally specify an existing guest device that containers can share, rather than having each container create a separate guest device.


Individual fault host pairs

The other type of active-active fault tolerance that BMC Network Automation models is called individual fault tolerance, where the guest context does not belong to a community. If a guest encounters a fault while active on a given host, only that guest is deactivated on the host and activated on the other host. The Cisco ACE device uses individual fault tolerance. There is a link in the references section of this document to additional Cisco documentation on this style of fault tolerance.

When specifying individual fault host pairs in a pod, one must specify the active and standby priority values to use for guests that will be created on the pair. One must also specify the inclusive boundaries of a pool of integers to use as fault IDs. Each guest that is created is assigned a unique ID from this pool. IDs within the pool which you do not want to be used by newly created guests can be marked as excluded (for example, the ID of the admin context which already exists in the host).

Fault ID pools are resource collections similar to those described earlier, but they are owned by a particular individual fault host pair rather than by the overall pod.

For environments running BMC Network Automation version 8.3.00.001 and later, these pairs also allow you to optionally specify an existing guest device that containers can share, rather than having each container create a separate guest device.

