Deploying BMC Helix IT Operations Management in an Amazon Elastic Kubernetes Service cluster


You can install BMC Helix IT Operations Management in an Amazon Elastic Kubernetes Service (EKS) cluster if you use EKS to manage your Kubernetes platform.

Important

BMC provides general guidelines to install BMC Helix IT Operations Management in an Amazon Elastic Kubernetes Service cluster based on the following reference architecture used by BMC. You can choose any alternative architecture or installation options on this platform. However, BMC does not provide support for alternative options.


Reference installation architecture

The following image shows the reference logical architecture used by BMC to install BMC Helix IT Operations Management in an EKS cluster:

(Figure: Reference logical architecture for installing BMC Helix IT Operations Management in an EKS cluster)


The following AWS services are used:

  • AWS Certificate Manager (ACM)—Handles the complexity of creating, storing, and renewing public and private SSL/TLS X.509 certificates and keys.
  • Amazon Simple Storage Service (S3)—Is an object storage service that provides scalability, data availability, security, and performance; it is used to upload files to AWS.
  • Amazon Route 53—Is a highly available and scalable Domain Name System (DNS) web service.
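
For reference, the following is a minimal AWS CLI sketch for requesting the wildcard ACM certificate whose Amazon Resource Name (ARN) is used later in the Ingress Controller configuration. The domain name and region are example values; substitute your own.

    # Request a public wildcard certificate in the region of the load balancer
    # (*.helixonprem.com and us-east-2 are example values)
    aws acm request-certificate \
    --domain-name "*.helixonprem.com" \
    --validation-method DNS \
    --region us-east-2

    # List certificates to find the ARN that you specify later in the
    # service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation
    aws acm list-certificates --region us-east-2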


Before you begin


Process to install BMC Helix IT Operations Management in an EKS cluster

The following image provides an overview of the BMC Helix IT Operations Management installation in an EKS cluster:

(Figure: Overview of the BMC Helix IT Operations Management installation process in an EKS cluster)


The following table lists the tasks to install BMC Helix IT Operations Management in an EKS cluster:


To install and configure the Kubernetes NGINX Ingress Controller

The NGINX Ingress Controller acts as a load balancer for your cluster.

To install and configure the Kubernetes NGINX Ingress Controller, perform the following tasks:


To create an NGINX Ingress Controller instance

  1. Install Helm 3.2.3 by using the following commands:

    curl -O https://get.helm.sh/helm-v3.2.3-linux-amd64.tar.gz
    tar -xzf helm-v3.2.3-linux-amd64.tar.gz
    sudo cp ./linux-amd64/helm /usr/local/bin/
  2. Install kubectl by using the following commands:

    curl -LO https://dl.k8s.io/release/v1.21.10/bin/linux/amd64/kubectl
    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    chmod +x kubectl
    sudo mv ./kubectl /usr/bin
  3. Ensure that the Docker client is installed.
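
To confirm that the prerequisite tools are available before you continue, you can run a quick check such as the following sketch; the versions reported depend on your environment.

    # Verify the client tools installed in the previous steps
    helm version --short
    kubectl version --client
    docker version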

To install Kubernetes NGINX Ingress Controller

  1. You need the deploy.yaml file to install the Kubernetes NGINX Ingress Controller.
    Based on your Kubernetes version, run one of the following commands to get the
    deploy.yaml file for the NGINX Ingress Controller:
    • To get the deploy.yaml file for NGINX Ingress Controller version 1.7.0:
      wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.0/deploy/static/provider/cloud/deploy.yaml
    • To get the deploy.yaml file for NGINX Ingress Controller version 1.8.1:
      wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
    • To get the deploy.yaml file for NGINX Ingress Controller version 1.9.3:
      wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.3/deploy/static/provider/cloud/deploy.yaml
      Important: If you are not able to download NGINX Ingress Controller version 1.9.3, use version 1.9.5 or 1.9.6.
    • To get the deploy.yaml file for NGINX Ingress Controller version 1.9.5:
      wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.5/deploy/static/provider/cloud/deploy.yaml
    • To get the deploy.yaml file for NGINX Ingress Controller version 1.9.6:
      wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.6/deploy/static/provider/cloud/deploy.yaml
  2. Update the virtual private cloud (VPC) Classless Inter-Domain Routing (CIDR) details by editing the deploy.yaml file as shown in the following example:

    apiVersion: v1
    data:
      http-snippet: |
        server {
          listen 2443;
          return 308 https://$host$request_uri;
        }
      proxy-real-ip-cidr: 192.168.0.0/16
      use-forwarded-headers: "true"
  3. Update the AWS Certificate Manager (ACM) ID as shown in the following example:

    metadata:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:xxxxxxxxxxxxxxx:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx
  4. Under the Service object, replace the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb with service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http as shown in the following example:

    kind: Service
    metadata:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  5. To apply the changes that you made in the deploy.yaml file, run the following command:

    kubectl apply -f deploy.yaml

     
    The NGINX Ingress Controller is deployed in the ingress-nginx namespace, and an external classic load balancer with TLS termination is created in AWS. (A verification sketch follows this procedure.)

  6. To get the IP address of the load balancer, run the following command:

    kubectl get svc -n ingress-nginx

    Example command output:

    NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP                                                         PORT(S)                      AGE
    ingress-nginx-controller             LoadBalancer   10.100.180.188   xxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxx.us-east-2.elb.amazonaws.com   80:31245/TCP,443:31285/TCP   6m4s
    ingress-nginx-controller-admission   ClusterIP      10.100.182.96
  7. Make sure that you add the following parameters under the data section of the Ingress Controller ConfigMap:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: internet-ingress-configuration
      namespace: internet-ingress
      labels:
        app.kubernetes.io/name: internet-ingress
        app.kubernetes.io/part-of: internet-ingress
    data:
      use-proxy-protocol: "false"
      proxy-add-original-uri-header: "true"
      proxy-real-ip-cidr: 172.xx.xxxx.0/24
      proxy-body-size: "250m"
      force-ssl-redirect: "false"
      ssl-redirect: "false"
      server-name-hash-bucket-size: "512"
      use-forwarded-headers: "true"
      server-tokens: "false"
      http-snippet: |
        server {
          listen 8080;
          server_tokens off;
        }
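
The following is an optional verification sketch for steps 5 and 6. It waits for the controller pods to become ready and prints only the load balancer hostname; the label selector and service name match the defaults in the deploy.yaml file, so adjust them if you changed those values.

    # Wait until the NGINX Ingress Controller pods report ready
    kubectl wait --namespace ingress-nginx \
    --for=condition=ready pod \
    --selector=app.kubernetes.io/component=controller \
    --timeout=120s

    # Print only the external hostname of the load balancer created in AWS
    kubectl get svc ingress-nginx-controller -n ingress-nginx \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'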

To configure a DNS record for your domain

Configure a DNS record for your domain so that you can access the applications by using URLs.

  1. Navigate to your domain-hosted zone.
  2. Create a DNS record of type A for the domain to resolve URLs to the load balancer, as shown in the following example (an AWS CLI sketch follows this procedure):

    Record Name - *.helixonprem.com
    Type - A
    Value/Route traffic to
    - Alias to Application and Classic LoadBalancer
    - Select the region - us-east-2
    - Select the Classic LB - xxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxx.us-east-2.elb.amazonaws.com
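
If you prefer the AWS CLI to the console, the following sketch creates the same wildcard alias record. The domain name, load balancer name, and hosted zone IDs are placeholders; the canonical hosted zone ID of the classic load balancer is returned by the describe-load-balancers command.

    # Look up the canonical hosted zone ID of the classic load balancer
    aws elb describe-load-balancers \
    --load-balancer-names <load balancer name> \
    --query 'LoadBalancerDescriptions[0].CanonicalHostedZoneNameID' \
    --output text

    # Create (or update) the wildcard A record as an alias to the load balancer
    aws route53 change-resource-record-sets \
    --hosted-zone-id <domain hosted zone ID> \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "*.helixonprem.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "<canonical hosted zone ID of the load balancer>",
            "DNSName": "xxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxx.us-east-2.elb.amazonaws.com",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'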

To configure the load balancer for the cluster

Configure the load balancer listener specifications by using the following steps:

  1. In the AWS console, select the load balancer created by Ingress Controller.
  2. Navigate to the Listeners tab.
  3. Make a note of the Instance Port value configured for the HTTPS listener.
  4. On the Listeners tab, click Edit.
  5. Update the Load Balancer Protocol value from HTTPS to SSL (Secure TCP).
  6. Make sure that the instance port has the same value that you noted in step 3.
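
You can also inspect the listener configuration from the AWS CLI, which is a convenient way to confirm the instance port that you noted in step 3. This is a sketch; the load balancer name is a placeholder.

    # List the current listener configuration of the classic load balancer
    aws elb describe-load-balancers \
    --load-balancer-names <load balancer name> \
    --query 'LoadBalancerDescriptions[0].ListenerDescriptions[].Listener'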

To enable the proxy protocol in the load balancer

Enable the proxy protocol in the classic load balancer to forward X-Forwarded-* headers.

  1. Find the instance port value by using the following command:

    aws elb describe-load-balancers --load-balancer-names <load balancer name>

    Example command output:

    aws elb describe-load-balancers --load-balancer-names xxxxxxxxxxxxxxxxxxxxxxxxx

    "Policies": {
        "AppCookieStickinessPolicies": [],
        "LBCookieStickinessPolicies": [],
        "OtherPolicies": [
            "ProxyProtocol-policy-1",
            "ELBSecurityPolicy-2016-08"
        ]
    },
    "BackendServerDescriptions": [
        {
            "InstancePort": <Port value configured for the HTTPS listener>,
            "PolicyNames": [
                "ProxyProtocol-policy-1"
            ]
        }
    ]
  2. Create a policy that enables the proxy protocol.

    aws elb create-load-balancer-policy \
    --load-balancer-name <load balancer name> \
    --policy-name <proxy protocol policy name> \
    --policy-type-name <proxy protocol policy type> \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

    aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name <load balancer name> \
    --instance-port <port number> \
    --policy-names <proxy protocol policy name>

    Example:

    aws elb create-load-balancer-policy \
    --load-balancer-name xxxxxxxxxxxxxxxxxxxxxxxxx \
    --policy-name ProxyProtocol-policy-1 \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

    aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name xxxxxxxxxxxxxxxxxxxxxxxxx \
    --instance-port xxxxx \
    --policy-names ProxyProtocol-policy-1
  3. In the ingress-nginx namespace, enable the proxy protocol in the ingress-nginx-controller ConfigMap by using the following command:

    kubectl edit cm ingress-nginx-controller -o yaml -n ingress-nginx

    Example command output:

    apiVersion: v1
    data:
      enable-underscores-in-headers: "true"
      http-snippet: |
        server {
          listen 2443;
          return 308 https://$host$request_uri;
        }
      proxy-real-ip-cidr: 192.168.0.0/16
      server-name-hash-bucket-size: "1024"
      use-forwarded-headers: "true"
      use-proxy-protocol: "true"
    kind: ConfigMap
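
To verify the changes, you can confirm that the policy is attached to the backend port and that the ConfigMap setting took effect, as in the following sketch (the load balancer name is a placeholder):

    # Confirm that the proxy protocol policy exists on the load balancer
    aws elb describe-load-balancer-policies \
    --load-balancer-name <load balancer name> \
    --policy-names ProxyProtocol-policy-1

    # Confirm the setting in the Ingress Controller ConfigMap
    kubectl get cm ingress-nginx-controller -n ingress-nginx \
    -o jsonpath='{.data.use-proxy-protocol}'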


To configure the virtual memory parameter for Elasticsearch

For all worker nodes in your Amazon EKS cluster, set the vm.max_map_count kernel parameter to 262144 before installing BMC Helix Platform services. If connecting to each worker node is impractical, see the DaemonSet sketch after this procedure.

  1. In your Amazon EKS cluster, connect to each worker node through Secure Shell (SSH).
  2. Run the following commands on the worker node:

    sysctl -w vm.max_map_count=262144
    echo vm.max_map_count=262144 > /etc/sysctl.d/es-custom.conf
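
If connecting to every worker node over SSH is impractical, a common alternative is a privileged DaemonSet that sets the parameter on each node, including nodes added later by autoscaling. The following is a sketch only; the namespace, names, and images are assumptions and not part of the BMC documentation.

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: set-vm-max-map-count
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          name: set-vm-max-map-count
      template:
        metadata:
          labels:
            name: set-vm-max-map-count
        spec:
          # A privileged init container is required to change a kernel parameter on the node
          initContainers:
          - name: sysctl
            image: busybox:1.36
            command: ["sysctl", "-w", "vm.max_map_count=262144"]
            securityContext:
              privileged: true
          # A minimal long-running container keeps the pod scheduled on the node
          containers:
          - name: pause
            image: registry.k8s.io/pause:3.9
            resources:
              requests:
                cpu: 10m
                memory: 8Mi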
