
Fault host affinity

Host affinity refers to the practice of keeping virtual firewalls (VFWs) and virtual load balancers (VLBs) that belong to the same container on the same host.

This information applies only to the Cisco Application Control Engine (ACE) module and the Firewall Services Module (FWSM).

During container creation, BMC Network Automation supports creating a VFW or VLB on a host that belongs to the same chassis. This scenario applies only in active-active fault tolerance mode; it does not apply in active-standby fault tolerance mode.

For example:

When a tenant creates a container in BMC Network Automation, VFWs and VLBs across all zones are created on the same chassis.

The load balancer host with the active Admin context and the firewall host with community 1 reside on the same chassis.

If the active Admin context switches over to a different host (for example, the load balancer's active Admin context switches over to chassis 2), it may not be possible to guarantee that VFWs and VLBs are created on the same chassis. This is currently implemented by alternating the chassis based on the container ID.

Firewall example

Community = (containerId % 2) + 1;

Load Balancer example

if containerId is even
    use Load Balancer Host 1
    use active guest priority
else
    use Load Balancer Host 2
    use standby guest priority
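
Taken together, the two rules above can be expressed as the following Java sketch. It only illustrates the alternation described in this section; the class and method names are hypothetical and are not part of the BMC Network Automation API.

// Hypothetical sketch of the container-ID-based alternation described above.
// None of these names are taken from the BMC Network Automation API.
public class FaultHostAffinity {

    // Firewall: odd container IDs map to community 2, even IDs to community 1.
    static int firewallCommunity(long containerId) {
        return (int) (containerId % 2) + 1;
    }

    // Load balancer: even container IDs use Load Balancer Host 1 with active
    // guest priority; odd IDs use Load Balancer Host 2 with standby guest priority.
    static String loadBalancerHost(long containerId) {
        return (containerId % 2 == 0) ? "Load Balancer Host 1" : "Load Balancer Host 2";
    }

    static String guestPriority(long containerId) {
        return (containerId % 2 == 0) ? "active" : "standby";
    }

    public static void main(String[] args) {
        for (long id = 1; id <= 4; id++) {
            System.out.printf("Container %d: community %d, %s (%s guest priority)%n",
                    id, firewallCommunity(id), loadBalancerHost(id), guestPriority(id));
        }
    }
}

Running the sketch for container IDs 1 through 4 reproduces the placements listed in the examples that follow.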
 

Examples

The following sections provide examples of fault host affinity.

Example devices

Firewall Host 1 (community 1 and Admin)
Firewall Host 2 (community 2)
Load Balancer Host 1 (Admin)
Load Balancer Host 2

Example firewall

When containers are created in sequence:

Container 1 (id = 1): creates a VFW on Firewall Host 2 (community 2) and a VLB on Load Balancer Host 2
Container 2 (id = 2): creates a VFW on Firewall Host 1 (community 1) and a VLB on Load Balancer Host 1
Container 3 (id = 3): creates a VFW on Firewall Host 2 (community 2) and a VLB on Load Balancer Host 2
Container 4 (id = 4): creates a VFW on Firewall Host 1 (community 1) and a VLB on Load Balancer Host 1
and so on

This achieves close to perfect balancing on the firewall, because the algorithm alternates between community 1 and community 2.

Example load balancer

The peer priority is always toggled, irrespective of whether both hosts are up, so that if a host comes back up later it automatically takes over its peer VLBs (those configured in the Admin context with a higher peer priority).
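
Expressed as code, the toggle depends only on the container ID, never on host state. The following is a minimal Java sketch, assuming (as in the walkthrough below) that a peer priority of 110 makes the peer host take over the VLB, while 90 leaves it on the host holding the Admin context; the names are hypothetical.

// Hypothetical sketch: peer priority toggles with the container ID,
// regardless of whether both load balancer hosts are currently up.
public class PeerPriorityToggle {

    static int peerPriority(long containerId) {
        return (containerId % 2 == 0) ? 90 : 110;  // even IDs -> 90, odd IDs -> 110
    }

    public static void main(String[] args) {
        for (long id = 1; id <= 6; id++) {
            System.out.printf("Container %d -> peer priority %d%n", id, peerPriority(id));
        }
    }
}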

The algorithm does not guarantee perfect balancing and can result in the asymmetry described below.

Initially, Load Balancer Host 1 and Load Balancer Host 2 are both up, and the Admin context is active in Load Balancer Host 1.

When containers are created in sequence:

  1. Container (id = 1): VLB created (create template) in Load Balancer Host 1 with peer priority 110
    (VLB active in Load Balancer Host 2, wants to be active in Load Balancer Host 2)
  2. Container (id = 2): VLB created (create template) in Load Balancer Host 1 with peer priority 90
    (VLB active in Load Balancer Host 1, wants to be active in Load Balancer Host 1)
  3. Container (id = 3): VLB created (create template) in Load Balancer Host 1 with peer priority 110
    (VLB active in Load Balancer Host 2, wants to be active in Load Balancer Host 2)

    If Load Balancer Host 1 goes down, the Admin context becomes active in Load Balancer Host 2, and all VLBs become active in Load Balancer Host 2.
  4. Container (id = 4): VLB created (create template) in Load Balancer Host 2 with peer priority 90
    (VLB active in Load Balancer Host 2, wants to be active in Load Balancer Host 2)
  5. Container (id = 5): VLB created (create template) in Load Balancer Host 2 with peer priority 110
    (VLB active in Load Balancer Host 2, wants to be active in Load Balancer Host 1)
  6. Container (id = 6): VLB created (create template) in Load Balancer Host 2 with peer priority 90
    (VLB active in Load Balancer Host 2, wants to be active in Load Balancer Host 2)

If Load Balancer Host 1 comes back up, the Admin context becomes active in Load Balancer Host 1 again, and the VLBs that want to be active in Load Balancer Host 1 move back to it. At this point there are four VLBs active in Load Balancer Host 2 and only two active in Load Balancer Host 1.
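
The 4/2 split can be reproduced with a short simulation. The following Java sketch is hypothetical and hard-codes the six-container sequence above: each VLB "wants" the peer of its creation host when its peer priority is 110, and its creation host when it is 90.

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical walkthrough of the six-container sequence above.
public class AffinityAsymmetryDemo {

    public static void main(String[] args) {
        // Host where the Admin context is active when each container is created;
        // Load Balancer Host 1 is down while containers 4-6 are created.
        int[] adminHostAtCreate = {1, 1, 1, 2, 2, 2};

        Map<Integer, Integer> wantsHost = new LinkedHashMap<>();
        for (int id = 1; id <= 6; id++) {
            int createHost = adminHostAtCreate[id - 1];
            int peerPriority = (id % 2 == 0) ? 90 : 110;
            // 110: the peer of the creation host takes the VLB; 90: the creation host keeps it.
            wantsHost.put(id, peerPriority > 100 ? 3 - createHost : createHost);
        }

        // Load Balancer Host 1 comes back up: every VLB moves to the host it wants.
        long onHost1 = wantsHost.values().stream().filter(h -> h == 1).count();
        System.out.println("VLBs active on Load Balancer Host 1: " + onHost1);        // 2
        System.out.println("VLBs active on Load Balancer Host 2: " + (6 - onHost1));  // 4
    }
}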

