
Configuring F5 BIG-IP load balancers


This topic provides information on Pod and Container Management (PCM) changes and requirements to support the management of F5 BIG-IP load balancers using BMC Network Automation as part of a BMC Cloud Lifecycle Management implementation.

In a BMC Cloud Lifecycle Management implementation, supporting a load balancer requires the device to support multi-tenancy, so that each customer can use a dedicated load balancer. F5 BIG-IP load balancers do not support a virtual context concept, so no virtual devices are created; instead, the physical load balancer is shared among containers. Multi-tenancy in an F5 BIG-IP load balancer is achieved by using route domain IDs. BMC Network Automation supports an F5 configuration with or without route domains.

As an example, this topic uses an environment with two vanilla nodes in an active/standby configuration and a single load balancer node, which points to the floating IP address of the physical F5 BIG-IP load balancer. For every container, this floating load balancer is used for load balancer actions such as adding pools, adding servers, virtual addressing, enabling, and disabling.

Guidelines for creating a pod blueprint

The following guidelines apply for creating a pod blueprint to use F5 BIG-IP load balancers.

Before you begin

There should be two F5 BIG-IP load balancers in Active/Standby mode. These two load balancers should be added in BMC Network Automation using their management or self IP addresses. An additional F5 BIG-IP load balancer should be added in BMC Network Automation using a floating IP address.

Requirements for a pod blueprint

A pod blueprint that uses F5 BIG-IP load balancers must meet the following requirements:

  • The pod blueprint should have two vanilla nodes, one for the active device and another for the standby device. Both vanilla node devices should be added in BMC Network Automation using their management IP addresses or non-floating self IP addresses.

  • Each vanilla node in the pod blueprint should have one parameter blueprint, which takes an interface number (for example, 1.2) when the pod is created. This is the interface that is connected to the layer 3 switch from which the device gets customer network connectivity.

  • There is one load balancer node, which requires the F5 BIG-IP load balancer to be added in BMC Network Automation using a floating self IP address.

  • There is one integer pool for route domain IDs. Route domain IDs can range from 0 through 65534. However, each route domain needs a unique virtual local area network (VLAN), and the number of route domains you can effectively deploy depends on the platform and on the configuration objects in use per route domain. An administrator should decide the range of the route domain IDs.
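
The interface number acquired by each vanilla node's parameter blueprint and the route domain IDs acquired from the integer pool are later consumed by container configuration templates through substitution parameters. For example, the VLAN-tagging template line discussed later in this topic references the interface parameter as follows:

b vlan vlan${container.vlans[C_VLAN1]} tag ${container.vlans[C_VLAN1]} interfaces tagged ${pod.node.params[Interface]}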


Guest VLB details

As part of creating a container, a guest virtual load balancer (VLB) is created that points to the host device. A separate guest is not created, because the F5 device does not support that. The F5 BIG-IP load balancer device adapter has a tag, supportsConfigPerSecurityContext, defined as false. This tag indicates that the F5 device is not capable of creating a virtual guest with separate configuration files. BMC Network Automation internally checks this tag and does not create a separate device; instead, it points the guest VLB to the host device.
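
As an illustration only, this capability flag in the device adapter content might look like the following; the exact element form and its surrounding adapter XML are assumptions, so confirm them against the shipped F5 BIG-IP device adapter:

<supportsConfigPerSecurityContext>false</supportsConfigPerSecurityContext>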


Guidelines for creating a container blueprint

The following guidelines apply for creating a container blueprint to use F5 BIG-IP load balancers.

  • Depending upon the network architecture, the administrator needs to decide the number of route domains to be used per container; the same number of integer blueprints should be defined in the container blueprint.

  • Two vanilla nodes are defined for the Active and Standby devices. Each vanilla node defines one addressBlueprint per customer network. An example configuration:

    <addressBlueprint>
      <addressName>C_SelfAddress1-1</addressName>
      <gatewayFlag>false</gatewayFlag>
      <virtualFlag>false</virtualFlag>
      <addressPoolName>C_VLAN1</addressPoolName>
    </addressBlueprint>
  • Both vanilla nodes have configure and unconfigure templates defined in the container blueprint.

  • After configuring self IP addresses, VLANs, and route domain IDs on both devices, you must perform a configuration synchronization. For configuration synchronization, BMC Network Automation provides a custom action with a GUID of 1F81B0E0-A08C-4664-9783-CE3C5AC7F97F. The second vanilla node defined in the container blueprint for the standby node should have the following entry in configureActionInfoBlueprints (after the merge action info blueprint):

    <configureActionInfoBlueprint xsi:type="customActionInfoBlueprint" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <description>F5 CLM Configuration Sync</description>
      <guid>1F81B0E0-A08C-4664-9783-CE3C5AC7F97F</guid>
      <runtimeProps/>
    </configureActionInfoBlueprint>
  • In unconfigureActionInfoBlueprints, both nodes should have the configuration synchronization custom action shown above defined after the merge action info blueprint, as in the sketch below.
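
    A minimal sketch of this placement, assuming that the unconfigure list mirrors the configureActionInfoBlueprints example above and that the merge entry uses xsi:type="mergeActionInfoBlueprint" (verify both element forms against a working container blueprint):

    <unconfigureActionInfoBlueprints>
      <unconfigureActionInfoBlueprint xsi:type="mergeActionInfoBlueprint"/>
      <unconfigureActionInfoBlueprint xsi:type="customActionInfoBlueprint" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <description>F5 CLM Configuration Sync</description>
        <guid>1F81B0E0-A08C-4664-9783-CE3C5AC7F97F</guid>
        <runtimeProps/>
      </unconfigureActionInfoBlueprint>
    </unconfigureActionInfoBlueprints>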

  • There is one load balancer node (that is, type="containerLoadBalancerHostBlueprint") defined in the container blueprint for a floating load balancer.

  • The load balancer node should have one guestAddressBlueprint per customer network defined in the container blueprint, for the floating IP address that is configured on both the active and standby devices; see the sketch below.
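
    The following is a minimal sketch only; it assumes that guestAddressBlueprint takes child elements analogous to the addressBlueprint example shown earlier, and it reuses the address and pool names from this topic's samples. Confirm the element names against a working container blueprint:

    <guestAddressBlueprint>
      <addressName>C_Floating_IP1</addressName>
      <addressPoolName>C_VLAN1</addressPoolName>
    </guestAddressBlueprint>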

  • Load balancer actions such as add pool, add pool entries, delete pool entries, enable/disable pool entries, and delete pool are performed through a separate group of custom actions, which are used only for BMC Cloud Lifecycle Management. The following GUID entries should be defined in the container blueprint:

    <addEntryGuid>B6355B9D-F8F1-413D-BDE6-2AE77C1CC12A</addEntryGuid>
    <addPoolGuid>B6355B9D-F8F1-413D-BDE6-2AE77C1D18EC</addPoolGuid>
    <disableEntryGuid>B6355B9D-F8F1-413D-BDE6-2AE77CCDDDDB</disableEntryGuid>
    <enableEntryGuid>B6355B9D-F8F1-413D-BDE6-2AE77C1CD14B</enableEntryGuid>
    <removeEntryGuid>B6355B9D-F8F1-413D-BDE6-2AE77C1CC13B</removeEntryGuid>
    <removePoolGuid>B6355B9D-F8F1-413D-BDE6-2AE77C1D10AB</removePoolGuid>

    Note

    When using the F5 BIG-IP Traffic Management Shell (TMSH) device in HA mode, you must first create a device group on both devices, and then assign both devices to that group.

    The following command creates a device group named Test-Group and adds the F5 BIG-IP device to that group:

    create cm device-group Test-Group devices add { bcan-f5bigip-06.bmc.com }
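
    After the device group exists on both devices, the configuration can be pushed to the group. A hedged example, assuming the TMSH config-sync syntax of your BIG-IP version:

    run cm config-sync to-group Test-Group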
  • For implementing multi-tenancy on load balancer pool entries, virtual IP addresses, VLANs, and self IP addresses, you need to associate each object with a route domain ID. The container blueprint can acquire one or more route domain IDs in the form of integer blueprints, for example, C_VLAN1_RD1. If the administrator wants to associate a route domain ID acquired by an integer blueprint with load balancer pool actions, the load balancer node's poolTypeBlueprint should have routeDomainIds defined in a manner similar to the following example:

    <poolTypeBlueprints>
      <poolTypeBlueprint>
        <serverVlanName>C_VLAN1</serverVlanName>
        <snatBlockSize>10</snatBlockSize>
        <snatPoolName>C_VLAN1</snatPoolName>
        <vipPoolName>C_VLAN1</vipPoolName>
        <routeDomainIds>
          <routeDomainId>C_VLAN1_RD1</routeDomainId>
        </routeDomainIds>
      </poolTypeBlueprint>
    </poolTypeBlueprints>
  • Before importing a container blueprint, ensure that the custom actions in the F5 CLM Load Balancer Provisioning group are in an enabled state. In the BMC Network Automation UI, navigate to Admin > Device Adapter > Custom action > F5 CLM Load Balancer Provisioning.


Pool naming

As you are sharing a physical load balancer among containers, every pool added must have a unique name. The same pool name can be supplied by BMC Cloud Lifecycle Management for adding a pool in different container VLBs, so to make each pool name unique on the physical load balancer, BMC Network Automation processes the pool name before it is supplied to the device. This processing occurs only when the VLB is an F5 BIG-IP load balancer. The processed pool name has the following format: <Container-Name>.<VLB-Name>.<OriginalName>. For example, if BMC Cloud Lifecycle Management supplies the pool name WebPool for a VLB named VLB1 in a container named Payroll, the processed name on the device is Payroll.VLB1.WebPool.

While adding pools, BMC Network Automation gets the original pool name from BMC Cloud Lifecycle Management, checks the device type of the VLB, prepends the container name and VLB name to the original name, and supplies the processed name to the custom action. BMC Network Automation does not save the processed pool name in the BMC Network Automation database or on the container page; it saves the original pool name supplied by BMC Cloud Lifecycle Management. BMC Network Automation does not expose the processed pool name to BMC Cloud Lifecycle Management.

Note

The add server to pool action checks whether the application port is 80 or 443 and adds the server node to the pool with the predefined monitor template for HTTP or HTTPS, respectively. If an application port other than 80 or 443 is found, BMC Network Automation adds the server node to the load balancer pool with the tcp monitor template, which is a default template defined on the device. The F5 device does not have monitor templates for every application port. If you want servers to be added to a pool with a specific monitor template, you must create that monitor template on the device and modify the custom actions.
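
A hedged example of creating such a monitor template, assuming TMSH monitor syntax on your BIG-IP version (the monitor name tcp_8443 and the port are illustrative only; verify the exact syntax for your device):

create ltm monitor tcp tcp_8443 defaults-from tcp destination *:8443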


Configuring Active/Standby nodes per container

The following configuration line creates a customer network VLAN and tags it to a load balancer interface with customer network connectivity:

b vlan vlan${container.vlans[C_VLAN1]} tag ${container.vlans[C_VLAN1]} interfaces tagged ${pod.node.params[Interface]}

The following configuration line creates a route domain ID and associates a customer network VLAN to it:

b route domain ${container.integers[C_VLAN1_RD1]} description domain${container.integers[C_VLAN1_RD1]} vlan vlan${container.vlans[C_VLAN1]}

The following configuration line adds a self address from the customer network subnet, and associates a customer network VLAN and route domain acquired for that customer network to it:

b self ${container.node.addresses[C_SelfAddress2-1]}%${container.integers[C_VLAN1_RD1]} vlan vlan${container.vlans[C_VLAN1]} netmask ${container.node.addresses[C_SelfAddress2-1].subnetMask}

The following configuration line creates a floating address that belongs to the customer network subnet, and associates it with the customer network VLAN and the route domain ID acquired for the customer network:

b self ${container.nodes[VLB].addresses[C_Floating_IP1]}%${container.integers[C_VLAN1_RD1]} vlan vlan${container.vlans[C_VLAN1]} floating enable unit 2 netmask ${container.nodes[VLB].addresses[C_Floating_IP1].subnetMask}

Note

Depending upon the network topology, you might be required to add a static route for the customer network pointing toward the VRRP/HSRP IP address.
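
A hypothetical sketch of such a static route, using illustrative network and gateway values with the acquired route domain ID appended (verify the exact bigpipe route syntax for your BIG-IP version):

b route 10.20.0.0/16%${container.integers[C_VLAN1_RD1]} gateway 10.20.0.1%${container.integers[C_VLAN1_RD1]}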

Limitations

The Active/Active mode is not supported.


Sample pod and container blueprints

You can find sample pod and container blueprints and related templates in the BCAN_HOME\public\bmc\bca-networks\csm\samples\sampleWithF5BigPipeVlb directory on the BMC Network Automation application server. See Pod-model and Container-model for additional information on the sample pod and container blueprints for use with F5 BIG-IP load balancers.

