Important

Starting from version 8.9.03, BMC Network Automation is renamed to TrueSight Network Automation. This space contains information about TrueSight Network Automation 8.9.03 and the later service packs for 8.9. For earlier releases, see BMC Network Automation 8.9.

Overview of networking use cases

This section describes the use cases for the interactions between TrueSight Network Automation and BMC Cloud Lifecycle Management.

See API concepts in the BMC Cloud Lifecycle Management documentation for details about the call syntax used in each of the interactions.

Provisioning network containers

In this use case, the container administrator uses BMC Cloud Lifecycle Management to select the following items and pass them to TrueSight Network Automation:

  • Pod
  • Compatible container blueprint
  • Container name
  • Runtime parameters
  • Dynamic component overrides, to override the default toggle state of the components (Network Interface Controller (NIC) segment or Virtual Loadbalancer (VLB)), as defined in the blueprint
  • Dynamic addressing overrides, to override the default values of the addressing boundaries defined in the blueprint

The runtime parameters are name-value pairs that are used to resolve any ${runtime.*} substitution parameter references in the templates used by the container blueprint to configure the network devices. You can specify up to 255 characters in the runtime parameter value. 
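
As a purely illustrative sketch (the parameter name bgpAsn is hypothetical), a runtime parameter passed as the name-value pair bgpAsn=65001 would resolve a template line such as the following when the template is merged to a device:

  ! ${runtime.bgpAsn} is replaced with the caller-supplied value 65001
  router bgp ${runtime.bgpAsn}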

The first step performed by TrueSight Network Automation is to create the container in its database. It integrates the list of dynamic component overrides and dynamic addressing overrides into a temporary copy of the container blueprint, creates the container using that temporary blueprint, and propagates state in the container.

The second step performed by TrueSight Network Automation is to acquire all necessary resources for the currently enabled components within the container. The main resources are VLANs, address pools, and group IDs used for fault-tolerant device pairs (for example, Cisco Application Control Engine (ACE) VLB pairs). These resources are drawn from fixed-size pools within the pod, or directly from Internet Protocol Address Management (IPAM) in the case of address pools within an address space. Additional resources include new Virtual Routing and Forwarding (VRF) instances on various devices, VLBs on Loadbalancer (LB) host devices, Virtual Firewalls (VFWs) on Firewall (FW) host devices, and virtual port types (VMware port groups or Cisco port profiles) on hypervisor switch devices (Cisco Nexus 1000V or VMware vSwitch devices), each of which might be limited by maximums defined in the pod. If the pod does not have capacity for the requested resources, the attempt fails and no resources are consumed. Resources are acquired in the order in which they are defined in the container blueprint, unless conditions are defined for them.

The third step performed by TrueSight Network Automation is to create and execute a job to configure the necessary devices for this container. The job is created on the fly using a sequence of actions specified in the container blueprint, on the devices present in the pod. Only actions needed for the currently enabled components are executed. These actions typically involve merging a template to a device. In addition to ${runtime.*} substitution parameter references, the templates can have ${container.*} substitution parameter references for the various resources owned by the container, and ${pod.*} substitution parameter references for the various attributes of the pod, for example, shared VLANs.
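
For example, a configure template might mix all three kinds of references. In this hypothetical fragment, only the ${container.vlans[...]}, ${pod.vlans[...]}, and ${runtime.*} syntax comes from this section; the VLAN and parameter names are invented for illustration:

  ! VLAN acquired by this container during resource acquisition
  vlan ${container.vlans[Customer-Data]}
  ! Shared VLAN defined at the pod level
  vlan ${pod.vlans[Shared-Mgmt]}
  ! Caller-supplied runtime parameter
  snmp-server location ${runtime.siteLocation}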

The actions in the configure job execute in sequence. If the job succeeds, TrueSight Network Automation returns a representation of the container to BMC Cloud Lifecycle Management, which adds it to the cloud database. If any action fails, the job is marked as failed and the remaining actions are not attempted. If the job fails, TrueSight Network Automation automatically kicks off an operation to deprovision the container, which creates a second job to unconfigure the container. Only actions to unconfigure devices which were modified during the configure attempt are included in the job. There is a vdcSkipProvisioningRollback flag that can be used to skip the automated deprovisioning operation, which is sometimes useful when troubleshooting.

The actions in the unconfigure job execute in sequence as well. If the job succeeds, the container releases its resources back to the pod (and IPAM), and is removed from the TrueSight Network Automation database. If any action fails, the job is marked as failed but it continues to attempt all the remaining actions. If the job fails, the container remains in the TrueSight Network Automation database, with its resources intact. The container administrator needs to troubleshoot the unconfigure job in TrueSight Network Automation to determine why it failed, then try the deprovision operation again from the TrueSight Network Automation CLI to finish unconfiguring components and remove the container from the TrueSight Network Automation database.

Acquiring VMware server NIC addresses

In this case, BMC Cloud Lifecycle Management calls TrueSight Network Automation during provisioning of a VMware server NIC to acquire an address for it. BMC Cloud Lifecycle Management passes in the container name, server name, NIC name, switch selection, and desired attach point name.

TrueSight Network Automation selects the particular hypervisor switch with a virtual port type matching the specified attach point. TrueSight Network Automation acquires the next available address from IPAM in the address pool of the NIC segment associated with the virtual port type.

TrueSight Network Automation checks that there is room to create another virtual port on the selected hypervisor switch, against any maximum that might be specified in the pod. If there is no room left, the call fails and no resources are consumed.

TrueSight Network Automation records in its database that the server NIC is assigned the acquired address (if any) and is attached to the specified hypervisor switch. Information about the address is passed back to BMC Cloud Lifecycle Management; it includes the address value, plus the gateway address and subnet mask of the pool the address came from.

BMC Cloud Lifecycle Management uses the address information to acquire a hostname from DNS and to configure the NIC appropriately in the server OS, via BMC Server Automation. It uses the port type information to talk to vCenter to create a virtual port using the specified virtual port type and attach the server NIC to it, which connects the NIC to the VLAN associated with the NIC segment.

Note that TrueSight Network Automation does not communicate with any network devices in this sequence.

Acquiring Citrix server NIC addresses

The use case for adding a Citrix server NIC to a container is the same as for acquiring a VMware server NIC address. Citrix servers are considered rogue servers from the TrueSight Network Automation perspective, as opposed to approved servers such as VMware servers.

The only difference is that TrueSight Network Automation does not configure the rogue hypervisor switch during container provisioning (because TrueSight Network Automation does not have a device adapter for it), so the administrator must configure it manually for each container. The rogue switch nodes encapsulate a null device reference.

The API methods that acquire miscellaneous rogue addresses are not meant for this use case; they exist purely to support customization for unanticipated use cases.

Acquiring physical server NIC addresses

In this case, BMC Cloud Lifecycle Management calls TrueSight Network Automation during provisioning of a physical server NIC to acquire an address for it. BMC Cloud Lifecycle Management passes in the container name, server name, NIC name, switch selection, and desired port name.

TrueSight Network Automation acquires the next available address from IPAM in the address pool associated with the specified NIC segment.

TrueSight Network Automation records in its database that the server NIC is assigned the acquired address (if any) and is attached to the specified physical switch. Information about the address is passed back to BMC Cloud Lifecycle Management; it includes the address value, plus the gateway address and subnet mask of the pool the address came from.

BMC Cloud Lifecycle Management next makes a separate call to TrueSight Network Automation to connect the port to the NIC segment, passing in the container name, server name, NIC name, switch selection, and port name. TrueSight Network Automation responds by executing a job with a custom action (as specified in the container blueprint) to configure the port on the switch and associate it with the VLAN of the NIC segment.

Acquiring NATed server or LB pool VIP addresses

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to acquire a public address for a private address. BMC Cloud Lifecycle Management passes in the container name and the private address (a VM, VIP, or physical server address). TrueSight Network Automation acquires a public address and adds a NAT entry for this public-private address pair to the container node that supports NAT. Adding the NAT entry involves executing a job to update the configuration of the device encapsulated by the node in question, using a custom action specified in the container blueprint. To acquire the public address, TrueSight Network Automation uses the address pool specified as the NAT pool in the private address's address pool. Information about the address is passed back to BMC Cloud Lifecycle Management; it includes the address value, plus the gateway address and subnet mask of the pool the address came from.
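
As an illustrative sketch only (the addresses are examples, and the actual commands depend on the device and the custom action's template), such a NAT entry might render on a Cisco IOS router as:

  ! Static NAT pairing private 10.10.10.5 with the acquired public 192.0.2.5
  ip nat inside source static 10.10.10.5 192.0.2.5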

Replacing all firewall rules in a VFW ACL

In this use case, the container administrator interacts with BMC Cloud Lifecycle Management to modify the list of entries in the ACL of a VFW, referred to as firewall rules (that is, low-level rules). Both the inbound and outbound ACLs on each interface defined within the VFW can be managed. A rule consists of a source address (host or network), a destination address (host or network), a destination port, a transport protocol (for example, 6 for TCP), and a flag indicating whether it is a permit or deny rule.
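
For example (an illustrative sketch only; the ACL name and addresses are invented, and the actual rendering depends on the template and device adapter), a permit rule for TCP traffic from the 10.10.10.0/24 network to port 443 on host 192.0.2.10 might render on a Cisco ASA-style VFW as:

  access-list inside-in extended permit tcp 10.10.10.0 255.255.255.0 host 192.0.2.10 eq 443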

TrueSight Network Automation maintains in its database an ordered list of the existing rules in a VFW, as previously specified in BMC Cloud Lifecycle Management. It does not assemble this list by parsing the contents of the VFW's latest running configuration. Consequently, any changes made to the rules outside of BMC Cloud Lifecycle Management are not reflected in the TrueSight Network Automation database, and they are overwritten the next time changes are made to the rules by BMC Cloud Lifecycle Management.

After the administrator modifies the contents of an ACL from the BMC Cloud Lifecycle Management UI, BMC Cloud Lifecycle Management sends the modified list down to TrueSight Network Automation. TrueSight Network Automation responds by verifying that the size of the new list does not exceed the maximum size defined in the pod, if any. It then executes a job that performs a smart merge to the VFW, replacing all of the old rules with the new rules.

Note that TrueSight Network Automation currently sorts the rules before it sends them to the device, to guarantee that more specific rules (for example, rules allowing traffic to a particular destination host) appear before more general rules (for example, rules denying traffic to an entire destination network). Refer to the appendix for details of the sorting algorithm used.

Adding path rules

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to add path rules between security endpoints (source and destination) in a container. A path rule is a high-level firewall rule (that is, an application path, or BMC Cloud Lifecycle Management network path) that regulates network traffic flow between two endpoints by either permitting or denying the flow. The security endpoint can be a specific host address, a specific network (network address and network mask), or an internal/external network. BMC Cloud Lifecycle Management passes in the container name and the path rule information (source endpoint, destination endpoint, transport protocol, port number, and an option to permit or deny traffic). TrueSight Network Automation takes the path rule information, determines which routing path (that is, a TrueSight Network Automation network path) it lies within, and then determines which managed ACLs lie along that routing path, if any. Updates are made to each such ACL. Information about the updated ACLs is passed back to BMC Cloud Lifecycle Management.

Removing path rules

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to remove a path rule between security endpoints (source and destination) in a container. BMC Cloud Lifecycle Management passes in the container name and the path rule information (source endpoint, destination endpoint, transport protocol, port number, and an option to permit or deny traffic). TrueSight Network Automation takes the path rule information, determines which routing path it lies within, and then determines which managed ACLs lie along that routing path, if any. Updates are made to each such ACL. Information about the updated ACLs is passed back to BMC Cloud Lifecycle Management.

Adding LB pool to VLB

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to add a new LB pool to a VLB. This can be done automatically as part of the sequence of provisioning a new application which requires load balancing, or it can be done ad hoc by a container administrator.

During this sequence, BMC Cloud Lifecycle Management passes the new LB pool parameters to TrueSight Network Automation for the VLB in a specified container. BMC Cloud Lifecycle Management also passes in information to select the VLB and the LB pool type within it to use. TrueSight Network Automation responds by verifying that the addition of a new LB pool does not exceed the maximum number of LB pools defined in the pod, if any. It then executes a custom action job to create the new LB pool in the VLB.

The LB pool parameters consist of a client-facing port value (for example, 80 for an LB pool serving web clients), a transport protocol value (for example, 6 for TCP), a VIP (virtual IP address) name, plus a set of additional options specified as name-value pairs. If the VIP name corresponds to the name of the VIP of an existing LB pool attached to the same network, that existing VIP address is reused for the new LB pool (assuming the two LB pools have different client ports). If not, a new VIP address is acquired: the next available address in the address pool of the VIP segment associated with the LB pool type being used.

In addition to acquiring a VIP address, TrueSight Network Automation might acquire a block of addresses to use for source network address translation (SNAT) of the packets being processed. SNAT translates the source address of packets sent by clients into addresses that can be more easily routed within the container. Whether to use an SNAT block is specified in the container blueprint, along with which address pool to acquire the block from, the size of the block to use, and the size of the pool to use for block IDs.

The default custom action configures one-hour sticky session timeouts for the LB pool. It also uses port address translation (PAT) for any SNAT blocks that are used, and device defaults for all other options (for example, the load balancing algorithm). If the behavior of the default custom action is unsatisfactory, a non-default custom action to create the LB pool can be specified in the container blueprint used to create the container.

The custom action also defines and configures a TCP or UDP heartbeat probe for the LB pool, based on the transport protocol value. If a server within the pool goes down, the VLB detects the failure and stops sending work to that server.

Note that if the custom action job fails, no resources (VIP address, SNAT block addresses, or SNAT block ID) are consumed.

TrueSight Network Automation records in its database that the LB pool has been added to the VLB. The VIP address used for the LB pool is returned to BMC Cloud Lifecycle Management, which acquires a hostname for it in DNS.

Adding server NICs to LB pool in VLB

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to add server NICs to an LB pool in a VLB. This can be done automatically as part of the sequence of provisioning a new server, or it can be done ad hoc by a container administrator.

During this sequence, BMC Cloud Lifecycle Management passes in a container name, a VLB selection and the name of the LB pool within it, and information about the entries to add to the pool. The entry information includes the name of the server, the address of the NIC, and the server-facing port value (for example, 80 for a server running a web server listening on the standard port, or 8080 if it is listening on a non-standard port). TrueSight Network Automation responds by verifying that the addition of the entries does not exceed the maximum number of allowed entries per LB pool defined in the pod, if any. It then executes a custom action job to add the new entries to the LB pool in the VLB.

TrueSight Network Automation records in its database that the entries have been added to the LB pool.

Removing server NICs from LB pool in VLB

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to remove server NICs from an LB pool in a VLB. This can be done automatically as part of the sequence of decommissioning a server, or it can be done ad hoc by a container administrator.

During this sequence, BMC Cloud Lifecycle Management specifies entries within a particular LB pool in a particular VLB within a particular container which should be removed. TrueSight Network Automation responds by executing a custom action job to remove the entries from the LB pool in the VLB.

TrueSight Network Automation records in its database that the entries have been removed from the LB pool.

Removing LB pool from VLB

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to remove an LB pool from a VLB. This can be done automatically as part of the sequence of decommissioning a load balanced application, or ad hoc by a container administrator.

During this sequence, BMC Cloud Lifecycle Management specifies a particular LB pool to remove from the VLB within a specified container. TrueSight Network Automation responds by executing a custom action job to remove that LB pool from the VLB. If the job succeeds, the resources associated with the LB pool (VIP address, SNAT block addresses, and SNAT block ID) are released. Deleting an LB pool deletes all of the entries within that LB pool in the VLB.

TrueSight Network Automation records in its database that the LB pool has been removed from the VLB.

Releasing VMware server NIC addresses

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to release the address of a VMware server NIC. This is done automatically as part of the sequence of decommissioning a server.

During this sequence, BMC Cloud Lifecycle Management specifies a particular server NIC to remove from a specified container. TrueSight Network Automation responds by releasing the NIC's address back to IPAM and removing the virtual port associated with that NIC in the hypervisor switch. BMC Cloud Lifecycle Management is expected to then remove the VM NIC from vCenter, via BMC Server Automation, and remove the associated entry from DNS. BMC Cloud Lifecycle Management is also expected to execute the use cases to remove associated firewall rules from the VFW and remove associated entries from LB pools in the VLB afterwards.

TrueSight Network Automation records in its database that the NIC is no longer attached to a hypervisor switch in the container. TrueSight Network Automation does not communicate with any network devices in this sequence.

Releasing Citrix server NIC addresses

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to release the address of a Citrix server NIC. This is done automatically as part of the sequence of decommissioning a server, and is otherwise the same as the use case for releasing a VMware server NIC address.

Releasing physical server NIC addresses

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to release the address of a physical server NIC. This is done automatically as part of the sequence of decommissioning a server.

During this sequence, BMC Cloud Lifecycle Management specifies a particular server NIC to remove from a specified container. TrueSight Network Automation responds by releasing the NIC's address (if any) back to IPAM and removing the physical port associated with that NIC in the physical switch. BMC Cloud Lifecycle Management is expected to remove the associated entry from DNS. BMC Cloud Lifecycle Management is also expected to execute the use cases to remove associated firewall rules from the VFW and remove associated entries from LB pools in the VLB afterwards.

TrueSight Network Automation records in its database that the NIC is no longer attached to a physical switch in the container. TrueSight Network Automation does not communicate with any network devices in this sequence.

Releasing NATed server or LB pool VIP addresses

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to release the public address for a private address. BMC Cloud Lifecycle Management passes in the container name and public address. TrueSight Network Automation removes the NAT entry from the device for this public address and releases this address back to the address pool it was acquired from. The task of removing the NAT entry involves executing a job to update the configuration of the device encapsulated by the node in question, using a custom action specified in the container blueprint.

Modifying network containers

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to toggle dynamic components (NIC segments and VLBs) in a container and to specify the boundaries of the addressing resources (address spaces and address pools) acquired in the process. BMC Cloud Lifecycle Management passes in the container name, dynamic component overrides, dynamic addressing overrides, runtime parameters, and flags to control error handling behavior.

TrueSight Network Automation takes the list of dynamic component overrides and dynamic addressing overrides and integrates them into a temporary container blueprint copy. It uses the temporary blueprint to update the state of dynamic components within the container.

Old resources are released and new resources acquired for the container as needed, based on whether the conditions defined for them newly evaluate to false (for a resource no longer needed) or true (for a resource that is now needed). The resources include addresses and VLANs, as well as guest nodes.

Actions are executed to reconfigure devices in the container as needed to consume these resource changes, based on whether the conditions defined for configure actions newly evaluate to false (for a configure action whose effects must now be unconfigured) or true (for a configure action that is now needed).

If the operation is successful, an updated representation of the container is returned to BMC Cloud Lifecycle Management, which it uses to update the cloud database.

If the operation fails, TrueSight Network Automation automatically kicks off an operation to unmodify the container, in order to return it to its original state, unless it has been flagged not to do so by the caller. If the unmodify operation fails, the container administrator needs to troubleshoot the problem and retry the operation in order to return the container to a reliable state.

Deprovisioning network containers

In this use case, the container administrator selects a container to decommission. The administrator also specifies a flag indicating whether TrueSight Network Automation should delete the container even if there are errors while unconfiguring it. The administrator does not specify new runtime parameters to use in this process; instead, the runtime parameters used to initially configure the container are reused.

The first step performed by TrueSight Network Automation is to execute a job to unconfigure the devices in the container. The job is created on the fly using a sequence of actions specified in the container blueprint. These actions typically involve merging a template to a device. These templates can use the same ${runtime.*}, ${container.*}, and ${pod.*} substitution parameter references that were supported by the templates used to configure the container.

The actions in the unconfigure job execute in sequence. If the job succeeds, TrueSight Network Automation proceeds to the next step of deleting the container. If any action fails, the job is marked as failed, but it continues to attempt all the remaining actions. If the job fails and BMC Cloud Lifecycle Management has requested that the container not be deleted after failures, the container is not deleted; the container administrator can troubleshoot why the unconfigure job failed in TrueSight Network Automation and then retry the decommission attempt from BMC Cloud Lifecycle Management. If the job fails and BMC Cloud Lifecycle Management has requested that the container be deleted after failures, the container is deleted.

In the second step performed by TrueSight Network Automation, the container is deleted, unless this step is being skipped due to a failure in the first step. As part of this step it releases all resources back to the pod and IPAM. If there are still server NICs attached to the container, their addresses are released in this process. It is the responsibility of BMC Cloud Lifecycle Management to delete those servers afterwards.

If the attempt to unconfigure and delete the container succeeds, or if the unconfigure fails but TrueSight Network Automation was told to delete the container anyway, the container is deleted from the cloud database. If the container is deleted from the cloud database but is still present in TrueSight Network Automation, you can remove it from TrueSight Network Automation by retrying the deprovision operation from the TrueSight Network Automation CLI, after fixing whatever caused it to fail initially.

Reprovisioning network containers

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to reprovision a network container. Reprovisioning a network container means adding elements (for example, a new NIC segment) that were added to its blueprint after the container was originally provisioned. BMC Cloud Lifecycle Management passes in the network container name, network container blueprint name (with revisions), runtime parameters, and flags to control error handling behavior.

TrueSight Network Automation updates the network container structure based on the revised network container blueprint. If no errors occur while the network container is updated, the passed-in runtime parameters are added to the network container. TrueSight Network Automation then executes the unconfigure reprovision actions based on the source and target revision numbers. Old resources are released and new resources acquired for the network container as needed, based on whether the conditions defined for them newly evaluate to false (for a resource no longer needed) or true (for a resource that is now needed). The resources include infrastructure resources, such as VLANs and addresses. Actions are then executed to reconfigure devices in the container, using the configure reprovision actions, as needed to consume these resource changes, based on the source and target revision numbers defined for the action info in the container blueprint.

If the operation is successful, an updated representation of the container is returned to BMC Cloud Lifecycle Management, which it uses to update the cloud database.

If the operation fails during the unconfigure or configure stage, an appropriate error message is returned to the caller if the ignoreErrors flag is set to false. If the operation fails during any other stage (for example, updating the container structure, acquiring resources, or releasing resources), an appropriate error is returned to the caller irrespective of the ignoreErrors flag. If the failure is ignorable and ignoreErrors is true, the container is left in a fully reprovisioned state; otherwise, the container is left in a partially reprovisioned state. In such cases, the container administrator needs to troubleshoot the problem and retry the reprovision operation.

Scaling NAT pools

In this use case, BMC Cloud Lifecycle Management calls TrueSight Network Automation to reprovision a network container to add a new NAT pool to a pool chain. Prior to attempting a reprovision operation, the new revision of the network container blueprint that has additional NAT pools added to a pool chain is onboarded into BMC Cloud Lifecycle Management. BMC Cloud Lifecycle Management then passes in the network container name, network container blueprint name (with revisions) and addressing overrides as reprovision arguments.

TrueSight Network Automation adds the new NAT pool to the network container structure based on the revised network container blueprint, and attempts to acquire the new NAT pool. If the operation is successful, an updated representation of the container is returned to BMC Cloud Lifecycle Management, which it uses to update the cloud database.

Conversely, if the container already has NAT pools in a chain in an unacquired state, BMC Cloud Lifecycle Management uses a modify operation, passing in the network container name and addressing overrides to acquire the new NAT pools in the chain.

Allowing addresses to be excluded on a per subnet basis

In this use case, consider three tenants that are all using the 192.168.0.0/24 subnet.

  • Tenant #1 has a few IPs (192.168.0.1 - 192.168.0.5) already in use. 
  • Tenant #2 and Tenant #3 can use the whole subnet. 

A global address exclusion on 192.168.0.1 - 192.168.0.5 affects all three tenants, even though Tenant #2 and Tenant #3 do not want to exclude those addresses.

To overcome this limitation, specify an exclusion scope when excluding addresses from the subnet. When a subnet is acquired from an IPAM system, TrueSight Network Automation passes the name of the pod or the container as the exclusion scope for that subnet. Addresses with the same scope must not overlap.

  • If the exclusion scope is null or empty, addresses are excluded globally. In other words, the specified addresses are excluded for use within all pods and containers.
  • If the exclusion scope has a non-null value, the specified addresses are excluded only from those subnets that have a matching exclusion scope as described below:
    • If the exclusion scope value is the same as the pod name, the address is excluded for use within the pod address ranges and pod address pools owned by the specified pod.
    • If the exclusion scope value is the same as the container name, the address is excluded for use within the address spaces owned by the specified container.
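
Continuing the tenant example above (the scope value shown is hypothetical), the exclusion for Tenant #1 would look like the following, leaving Tenant #2 and Tenant #3 free to use the whole subnet:

  Excluded addresses: 192.168.0.1 - 192.168.0.5
  Exclusion scope:    Tenant1-Container    (the name of Tenant #1's container)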

Adding a customer network by using an existing pod-level VLAN

  1. Create a VLAN in TrueSight Network Automation by editing the pod.
    For details about how to edit the pod resources, see Editing a pod.
  2. Create a container blueprint, or modify an existing one, to create a new container-level network.

    1. Add an address pool for the new network segment by using <addressPoolBlueprint> as shown in the following code snippet:

      Sample container blueprint to add an address pool for the new network segment.
    2. Create a NIC segment by using <nicSegmentBlueprint> as shown in the following code snippet:

      Sample container blueprint in which the NIC segment uses a pod-level VLAN.

      Note

      Note that <vlanName> uses the VLAN that is created in the pod as shown in step 1, rather than the one that is created in the container.

    3. Add a port type for the NIC segment in the required node by using <portTypeBlueprint> as shown in the following code snippet:

      Sample container blueprint to add a port type for the NIC segment.
    4. Add Configure and Unconfigure actions blueprints by using <configureActionInfoBlueprints> and <unconfigureActionInfoBlueprints> as shown in the following code snippet:

      Sample container blueprint to add configure and unconfigure action info blueprints.
  3. Ensure that the following sample templates that are referred to by the Configure and Unconfigure actions are present in TrueSight Network Automation:

    Sample Configure Action and Unconfigure Action templates.

    For details about copying and creating a template, see Viewing the templates listing and Adding a template, respectively.

    Note

    The blueprint templates for the Configure and Unconfigure actions must refer to the pod VLAN substitution parameters. In other words, the new substitution parameters must use ${pod.vlans[<vlan-name>]} instead of ${container.vlans[<vlan-name>]}.

  4. Import the container blueprint into TrueSight Network Automation with a different version number by using the container blueprint import utility script.
  5. Import the container blueprint in BMC Cloud Lifecycle Management.
  6. Reprovision or create the container in BMC Cloud Lifecycle Management.
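
The following consolidated sketch illustrates how the fragments referenced in steps 2a through 2d might fit together. It is a hypothetical outline only: the element names <addressPoolBlueprint>, <nicSegmentBlueprint>, <vlanName>, <portTypeBlueprint>, <configureActionInfoBlueprints>, and <unconfigureActionInfoBlueprints> come from this section, but all other names and values are invented for illustration and are not the authoritative blueprint schema.

  <!-- Step 2a: address pool for the new network segment (names hypothetical) -->
  <addressPoolBlueprint>
    <name>CustomerNetworkPool</name>
    ...
  </addressPoolBlueprint>

  <!-- Step 2b: NIC segment that reuses the pod-level VLAN created in step 1 -->
  <nicSegmentBlueprint>
    <name>CustomerNicSegment</name>
    <vlanName>PodCustomerVlan</vlanName>  <!-- pod-level VLAN, not a container-level one -->
    ...
  </nicSegmentBlueprint>

  <!-- Step 2c: port type for the NIC segment in the required node -->
  <portTypeBlueprint>
    <name>CustomerPortType</name>
    ...
  </portTypeBlueprint>

  <!-- Step 2d: configure/unconfigure actions whose templates refer to
       ${pod.vlans[PodCustomerVlan]} rather than ${container.vlans[...]} -->
  <configureActionInfoBlueprints>
    ...
  </configureActionInfoBlueprints>
  <unconfigureActionInfoBlueprints>
    ...
  </unconfigureActionInfoBlueprints>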

Controlling DNS registration on a per NIC basis from the BMC Cloud Lifecycle Management GUI

In BMC Cloud Lifecycle Management 4.1, by default, all the NICs were registered in DNS by setting performDnsOperation, a global property in TrueSight Network Automation.

Note

During an upgrade from version 4.1 to 4.5, the value of the performDnsOperation property is not modified in TrueSight Network Automation.

In BMC Cloud Lifecycle Management, the behavior of the DNS Registration Required check box is as follows:

  • On Service Designer > NIC Configuration, the DNS Registration Required check box is selected by default for the NICs that have static IP assignment.
  • On Service Catalog > Options Editor > Request Definition > Service Deployment Definition > Network Resources > Additional NIC and Service Catalog > Options Editor > Post-Deploy Action > Additional NIC, the DNS Registration Required check box is not selected by default for the NICs that have static IP assignment. Therefore, you must modify the Additional NIC Options Configuration and select the DNS Registration Required check box according to your environment.

Starting with version 4.5, to register a new NIC in the DNS server after service offering instance (SOI) provisioning, perform the following steps:

  1. Ensure that the DNS setup is ready.
    For details about setting up DNS in TrueSight Network Automation, see Defining a DNS server in TrueSight Network Automation; for details about setting up DNS in BMC Cloud Lifecycle Management, see Implementing integrated DNS, DHCP and IP address management.

  2. In BMC Cloud Lifecycle Management, set DNS Registration Required by using any of the following paths:
    1. Service Designer > NIC Configuration
    2. Service Catalog > Options Editor
      1. Day 1 (DRO) Additional NIC: Request Definition > Service Deployment Definition > Network Resources > Additional NIC
      2. Day 2 (TRO) Additional NIC: Post-Deploy Action > Additional NIC

The following table depicts the various supported combinations for DNS registration:

DNS Registration Required | Private IP   | Public IP    | Static IP    | Dynamic IP   | Supported
Selected                  | Selected     | Not selected | Selected     | Not selected | Yes
Selected                  | Selected     | Not selected | Not selected | Selected     | No
Selected                  | Not selected | Selected     | Selected     | Not selected | No

Adding a pod-level management network to an existing pod

The cloud administrator can add pod-level management networks to existing pods for the following use cases:

  • Add more capacity to an existing pod-level management network by adding a chained address pool
  • Add a new management network when adding redundant service nodes

The following network diagram represents a use case to add three new networks and associate them to two existing nodes, and also add two new nodes to an existing pod:

The following scenarios depict the various cases, their corresponding representation in the preceding network diagram, and the high-level steps that you must follow by using a dedicated management switch and static IP address assignment.

Scenario: Initial layout of the pod

High-level steps: Not applicable

Scenario: Add a new network, Management2, to the following existing nodes in the pod by using a vSwitch:

  • ESX1 - vSwitch0
  • ESX2 - vSwitch0

High-level steps:

  1. Add a new network, Management2, in TrueSight Network Automation by editing an existing pod.
    For details about how to edit the pod resources, see Editing a pod.
  2. Add the new Management2 network to the existing management switches (ESX1 - vSwitch0 and ESX2 - vSwitch0) in TrueSight Network Automation.
  3. Synchronize the pod in BMC Cloud Lifecycle Management.

Note: The preceding steps add the new network for the pod in TrueSight Network Automation and the CloudDB. However, this network is not usable via existing containers. Therefore, create a new network container to use the new management network.

Scenario: Add a new network, Management3, to the following new nodes that are present in the cluster, Cluster1, but not in the pod, by using a vSwitch:

  • ESX1 - vSwitch2
  • ESX2 - vSwitch2

High-level steps:

  1. Add a new network, Management3, in TrueSight Network Automation by editing an existing pod.
    For details about how to edit the pod resources, see Editing a pod.
  2. Add the two new nodes (ESX1 - vSwitch2 and ESX2 - vSwitch2) and associate the new management network with these nodes.
  3. Synchronize the pod in BMC Cloud Lifecycle Management.
  4. Perform a virtual center (VC) sync in BMC Cloud Lifecycle Management to store the new nodes in the Cloud database.

Note: The preceding steps add the new network for the pod in TrueSight Network Automation and the CloudDB. However, this network is not usable via existing containers. Therefore, create a new network container to use the new management network.

Scenario: Add a new network, Management4, to existing nodes (ESX1 - vSwitch0 and ESX2 - vSwitch0) in the pod, and also add a new node, ESX1 - vSwitch0, from a new cluster, Cluster2.

High-level steps:

  1. Add a new network, Management4, in TrueSight Network Automation by editing an existing pod.
    For details about how to edit the pod resources, see Editing a pod.
  2. Add the new Management4 network to the existing management switches (ESX1 - vSwitch0 and ESX2 - vSwitch0) in TrueSight Network Automation.
  3. Add the new node (ESX1 - vSwitch0) from the new cluster, Cluster2, and associate the new management network with this node.
  4. Synchronize the pod in BMC Cloud Lifecycle Management.
  5. Perform a virtual center (VC) sync in BMC Cloud Lifecycle Management to store the new nodes in the Cloud database.
  6. Onboard the new cluster onto the pod.

Note: The preceding steps add the new network for the pod in TrueSight Network Automation and the CloudDB. However, this network is not usable via existing containers. Therefore, create a new network container to use the new management network.

Splitting VLAN pools and modifying the excluded VLANs from the VLAN pool

Consider that your fabric interconnect device supports only 1024 VLANs, and you need to raise the limit to 2048 to deliver additional network containers. If you create a second fabric interconnect device, the VIDs provisioned on the first device cannot be reused on the second device. If you used the same pool as before, a VID used on the first device might be freed and then reused on the second device; this is not allowed because it disrupts the distribution layer.

Owing to these restrictions, you need to split your VLAN pool into two different pools: the first is used on the old device and the second on the new device.

See Editing a pod for how to modify VLAN pools and the excluded VLANs in the VLAN pool.
