Load balancers in BMC Helix Innovation Suite


A load balancer is a service that distributes incoming traffic from multiple clients across several servers.

By using load balancers, you can build highly available infrastructure and application components. By building an infrastructure that scales with workload and provides uninterrupted service to your users, you can maximize the return on your investment.

In a containerized environment, load balancing is also carried out within the Kubernetes environment by the NGINX ingress controller.

How are requests routed with load balancers

When a request is sent from a browser, an external load balancer routes the request to the ingress port. The ingress controller makes this ingress port available on all worker nodes, and the external load balancer routes each request to an available node.

A single ingress port is exposed on multiple worker nodes. When a request reaches an available worker node, the ingress controller routes it to the appropriate service based on the rules that were defined in the ingress controller during installation. The service then sends the request to an available pod, which performs the requested action.
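The routing rules mentioned above are typically defined in a Kubernetes Ingress resource. The following is a minimal sketch of such a resource; the host name, service name, and port are hypothetical placeholders, not values from an actual deployment:

```yaml
# Hypothetical Ingress resource: requests for app.example.com are
# routed by the NGINX ingress controller to the Service "my-service".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Selects the NGINX ingress controller to handle this resource
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 8080
```

When a request for app.example.com arrives on the ingress port of any worker node, the controller matches this rule and forwards the request to my-service.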

The following image gives an overview of how a request is routed through the external load balancer and the ingress controller:

LB_Concept.png

Types of load balancers

Load balancing occurs at the following two levels:

External: When a user enters a URL in their browser, the request is sent from the browser to the external load balancer. The external load balancer then routes the request to an available worker node on the ingress port.

An external load balancer makes sure that the traffic is routed to an available port that the ingress controller exposes on each node.

Internal: When the request reaches one of the worker nodes through the ingress port, the request is routed to the appropriate service. The service then load balances internally among its pods to serve the user request.
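The internal level can be sketched as a Kubernetes Service, which selects a set of pods by label and distributes requests among them. The names and labels below are hypothetical:

```yaml
# Hypothetical Service: load balances requests across all pods
# labeled app=my-app, forwarding Service port 8080 to the same
# container port on whichever pod is selected.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app       # all pods carrying this label receive traffic
  ports:
    - protocol: TCP
      port: 8080      # port exposed by the Service
      targetPort: 8080  # port on the pod's container
```

Kubernetes keeps the Service's list of healthy pod endpoints up to date, so requests are only sent to pods that are available.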

Example

Apex Global has deployed BMC Helix Innovation Suite as a container in a Kubernetes cluster and uses F5 as the external load balancer.

An end user at Apex Global enters the URL ithelp-dwp.onbmc.com to access the Digital Workplace service. The F5 load balancer directs the request to an available worker node on the ingress port. Based on the defined rules, the NGINX ingress controller routes the request to the Digital Workplace service.

Based on the example, the following graphic shows an overview of how a request is load balanced:

LB_example.png


The following load balancer SSL methods are supported:

  • SSL offloading at the load balancer
  • SSL passthrough to offload at the ingress controller
  • SSL full proxy
  • Allow X-Forwarded-* headers upstream of the ingress
  • Reverse proxy HTTP back to HTTPS
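As a sketch of how two of these methods map onto standard NGINX Ingress Controller settings: SSL passthrough is enabled per Ingress with an annotation, and trusting X-Forwarded-* headers from an upstream load balancer is a controller-wide ConfigMap option. The resource names below are hypothetical; consult your controller's documentation for the exact deployment details:

```yaml
# SSL passthrough: TLS is not terminated at the ingress; the encrypted
# connection is forwarded to the backend service, which terminates it.
# (The controller must be started with --enable-ssl-passthrough.)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: app.example.com
---
# Trust X-Forwarded-* headers set by an external load balancer
# (a controller-wide setting in the NGINX ingress ConfigMap).
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"
```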

For information about configuring load balancing, see System requirements.

Additional resources

For information about ingress controllers, see the Nginx website.

For information about Kubernetes, see the Kubernetes website.
