Installing BMC Helix Service Management in an Amazon Elastic Kubernetes Service cluster


You can install BMC Helix Service Management in an Amazon Elastic Kubernetes Service (EKS) cluster if you use Amazon EKS to manage your Kubernetes platform.

Reference installation architecture

The following image shows the reference logical architecture used by BMC to install BMC Helix Service Management in an Amazon EKS cluster:

Important

BMC provides general guidelines to install BMC Helix Service Management in an Amazon Elastic Kubernetes Service cluster based on the following reference architecture used by BMC. You can choose an alternative architecture or installation options on this platform; however, BMC does not provide support for alternative options.

[Image: EKS reference architecture diagram]

The following AWS services are used:

  • AWS Certificate Manager (ACM)—Handles the complexity of creating, storing, and renewing public and private SSL/TLS X.509 certificates and keys.
  • Simple Storage Service (S3)—Is used to upload files to AWS. This object storage service provides scalability, data availability, security, and performance.
  • Route 53—Is a highly available and scalable Domain Name System (DNS) web service.

Before you begin

Important

AWS Single Sign-On (AWS SSO) is not supported.

Process to install BMC Helix Service Management in an Amazon EKS cluster

The following image provides an overview of BMC Helix Service Management installation in an Amazon EKS cluster:

[Image: Installation process in an EKS cluster]

 

The following tasks describe how to install BMC Helix Service Management in an Amazon EKS cluster:

  1. Create and set up an Amazon EKS cluster.
    a. Create a Kubernetes cluster by using the AWS EKS service.
      Important: BMC has certified using the default storage class gp3 from the Amazon Elastic Block Store (EBS) available in the Amazon EKS cluster. You can use the default storage class or create your own Amazon EBS storage class.
      Best Practice: We recommend that you use the AWS instance type m5 or m6 with a CPU speed of 3.5 GHz or higher.
    b. Enable the Kubernetes Metrics Server for the Horizontal Pod Autoscaling feature. (Example commands follow this task list.)
    c. Install and configure the Kubernetes NGINX Ingress Controller.
  2. Set up a database.
    a. Set up an external database for BMC Helix Innovation Suite.
      Important: The Aurora PostgreSQL 13.x and 15.x databases in AWS are supported. BMC has certified using the Aurora PostgreSQL 13.3 database in AWS with the db.r6g.2xlarge instance class. If you use the Aurora PostgreSQL 13.3 database, the preferred instance class for a compact-size deployment is db.r6g.2xlarge.
      See Amazon Aurora supports PostgreSQL 13 in the AWS documentation.
    b. Create a database administrator user and specify the following permissions for the user:
      [Image: PostgreSQL permissions]
      Important: Make sure that you specify this database administrator user in the DATABASE_ADMIN_USER parameter while installing BMC Helix Innovation Suite and applications.
    c. If Aurora replication is enabled, make sure that you use the endpoint or port of the Writer instance in the database host name.
  3. Set up BMC Deployment Engine.
    Set up BMC Deployment Engine to call the relevant BMC Helix Innovation Suite installation pipelines that install the platform and applications.
      Important: Configure the AWS credentials on BMC Deployment Engine by using the same aws configure settings that you used to create the EKS cluster.
  4. Install BMC Helix Platform services.
    a. Configure the Elasticsearch vm.max_map_count parameter to meet the virtual memory requirements for the Elasticsearch installation that is performed as part of the BMC Helix Platform services installation.
    b. Install BMC Helix Platform services.
      Important: Use the BMC Deployment Engine system as a controller instance to install BMC Helix Platform services.
  5. Install BMC Helix Service Management.
    Install BMC Helix Innovation Suite and applications.
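
Task 1b requires the Kubernetes Metrics Server so that Horizontal Pod Autoscaling can read CPU and memory metrics. The following commands are a minimal sketch of one common way to deploy and verify it; the manifest URL points to the upstream Kubernetes Metrics Server release and is an assumption, so confirm the version that matches your cluster.

  # Deploy the Kubernetes Metrics Server (required for Horizontal Pod Autoscaling)
  kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

  # Verify that the metrics-server deployment is available in the kube-system namespace
  kubectl get deployment metrics-server -n kube-system

  # Confirm that node metrics are being reported
  kubectl top nodes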

Installing and configuring Kubernetes NGINX Ingress Controller

The NGINX Ingress Controller is a load balancer for your cluster. Install and configure Kubernetes NGINX Ingress Controller by performing the following tasks:

  1. Install Kubernetes NGINX Ingress Controller.
  2. Configure a DNS record for your domain.

To install Kubernetes NGINX Ingress Controller

  1. You need the deploy.yaml file to install the Kubernetes NGINX Ingress Controller.
    To download the deploy.yaml file, run any of the following commands, depending on the NGINX Ingress Controller version that you want to install:

    wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.5/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
    wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.3/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
    wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
    wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
  2. In the deploy.yaml file, modify the ConfigMap with the following details:
    1. Update the proxy-real-ip-cidr parameter with the virtual private cloud (VPC) Classless Inter-Domain Routing (CIDR) range that is used by the Kubernetes cluster, and update the listener port value. (Commands for looking up the VPC CIDR and the ACM certificate ARN are sketched after this procedure.)
      Example:

      apiVersion: v1
      data:
       allow-snippet-annotations: "true"
       enable-underscores-in-headers: "true"
       http-snippet: |
         server {
           listen 8080;
           server_tokens off;
           return 308 https://$host$request_uri;
         }
       proxy-body-size: 250m
       proxy-read-timeout: "3600"
       proxy-real-ip-cidr: xx.xxx.x.x/xx
       proxy-send-timeout: "3600"
       server-name-hash-bucket-size: "512"
       server-tokens: "false"
       ssl-redirect: "false"
       use-forwarded-headers: "true"
       use-proxy-protocol: "true"
      kind: ConfigMap
    2. Modify the AWS Certificate Manager (ACM) ID details as follows:

      metadata:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:xxxxxxxxxxxxxxx:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx
    3. Under the Service object, remove the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb and add service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http as shown in the following example:

      kind: Service
      metadata:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http

       

  3. In your cluster, run the following command:

    kubectl apply -f deploy.yaml

    The NGINX Ingress Controller is deployed in the ingress-nginx namespace, and an external classic load balancer with TLS termination is created in AWS.

  4. To get the address of the load balancer, run the following command:

    kubectl get svc -n ingress-nginx 

    Example command output:

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    ingress-nginx-controller LoadBalancer 10.100.180.188 xxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxx.us-east-2.elb.amazonaws.com 80:31245/TCP,443:31285/TCP 6m4s
    ingress-nginx-controller-admission ClusterIP 10.100.182.96
  5. Make sure that the ingress-nginx-controller configmap has the following parameters configured by running the following command:

    kubectl get cm ingress-nginx-controller -o yaml -n ingress-nginx

    Example:

    apiVersion: v1
    data:
     allow-snippet-annotations: "true"
     enable-underscores-in-headers: "true"
     http-snippet: |
       server {
         listen 8080;
         server_tokens off;
         return 308 https://$host$request_uri;
       }
     proxy-body-size: 250m
     proxy-read-timeout: "3600"
     proxy-real-ip-cidr: xx.xxx.x.x/xx
     proxy-send-timeout: "3600"
     server-name-hash-bucket-size: "512"
     server-tokens: "false"
     ssl-redirect: "false"
     use-forwarded-headers: "true"
     use-proxy-protocol: "true"
    kind: ConfigMap
    metadata:
     annotations:
       kubectl.kubernetes.io/last-applied-configuration: |
         {"apiVersion":"v1","data":{"allow-snippet-annotations":"true","http-snippet":"server {\n  listen 2443;\n  return 308 https://$host$request_uri;\n}\n","proxy-real-ip-cidr":"xx.xxx.x.x/xx","use-forwarded-headers":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx","app.kubernetes.io/version":"1.7.0"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"}}
     creationTimestamp: "2023-10-06T17:31:32Z"
     labels:
       app.kubernetes.io/component: controller
       app.kubernetes.io/instance: ingress-nginx
       app.kubernetes.io/name: ingress-nginx
       app.kubernetes.io/part-of: ingress-nginx
       app.kubernetes.io/version: 1.7.0
     name: ingress-nginx-controller
     namespace: ingress-nginx
     resourceVersion: "104167057"
     uid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  

    You can edit the config map by using the following command:

    kubectl edit cm ingress-nginx-controller -o yaml -n ingress-nginx
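
In step 2, the proxy-real-ip-cidr parameter takes the CIDR range of the VPC that hosts your EKS cluster, and the aws-load-balancer-ssl-cert annotation takes your ACM certificate ARN. The following AWS CLI commands are a minimal sketch for looking up both values; the cluster name, VPC ID, and region are placeholders for your own values.

  # Find the VPC ID used by the EKS cluster (cluster name and region are placeholders)
  aws eks describe-cluster --name <cluster name> --region us-east-2 \
    --query "cluster.resourcesVpcConfig.vpcId" --output text

  # Get the CIDR range of that VPC for the proxy-real-ip-cidr parameter
  aws ec2 describe-vpcs --vpc-ids <vpc id> \
    --query "Vpcs[0].CidrBlock" --output text

  # List ACM certificates to find the ARN for the aws-load-balancer-ssl-cert annotation
  aws acm list-certificates --region us-east-2 \
    --query "CertificateSummaryList[].CertificateArn" --output text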

To configure a DNS record for your domain

Configure a DNS record for your domain so that you can access the applications by using URLs.

  1. Navigate to your domain-hosted zone.
  2. Create a DNS A record for the domain that resolves application URLs to the load balancer, as shown in the following example (an equivalent AWS CLI sketch follows the example):

    Record Name - *.helixonprem.com
    Type - A
    Value/Route traffic to
    - Alias to Application and Classic LoadBalancer
    - Select the region - us-east-2
    - Select the Classic LB - xxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxx.us-east-2.elb.amazonaws.com
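
The preceding example shows the Route 53 console fields. As an alternative, the following AWS CLI call is a hedged sketch of the same change; the hosted zone IDs, domain name, and load balancer DNS name are placeholders that you must replace with your own values.

  # Create (or update) a wildcard alias record that routes traffic to the classic load balancer.
  # <hosted zone id> is your domain's hosted zone; <elb hosted zone id> is the load balancer's zone ID.
  aws route53 change-resource-record-sets --hosted-zone-id <hosted zone id> --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "*.helixonprem.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<elb hosted zone id>",
          "DNSName": "xxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxx.us-east-2.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'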

To configure the load balancer for the cluster

Configure the load balancer listener specifications by using the following steps:

  1. In the AWS console, select the load balancer created by Ingress Controller.
  2. Navigate to the Listeners tab.
  3. Make a note of the Instance Port value configured for the HTTPS listener.
  4. On the Listeners tab, click Edit.
  5. Update the first Listener values:
    1. Change the first Listener protocol from HTTPS to SSL (Secure TCP).
    2. Map the load balancer port 443 to the TCP instance protocol on the Ingress service port.
    3. Make sure that the instance port has the same value that you noted in step 3.
  6. Update the second Listener values (an equivalent AWS CLI sketch follows these steps):
    1. Change the second Listener protocol to HTTP.
    2. Map the load balancer port 80 to the HTTP instance protocol on the Ingress service port.
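
These listener changes can also be made from the AWS CLI. The following commands are a minimal, untested sketch that recreates the port 443 listener as SSL (Secure TCP); the load balancer name, instance port, and certificate ARN are placeholders.

  # Remove the existing HTTPS listener on port 443
  aws elb delete-load-balancer-listeners \
    --load-balancer-name <load balancer name> \
    --load-balancer-ports 443

  # Recreate the listener as SSL (Secure TCP), forwarding to the instance port noted in step 3
  aws elb create-load-balancer-listeners \
    --load-balancer-name <load balancer name> \
    --listeners "Protocol=SSL,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=<instance port>,SSLCertificateId=<ACM certificate ARN>"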

To enable the proxy protocol in the load balancer

Enable the proxy protocol in the classic load balancer to forward X-Forwarded-* headers.

  1. Find the instance port value by using the following command:

    aws elb describe-load-balancers --load-balancer-name Load Balancer name

    Example command output:

    aws elb describe-load-balancers --load-balancer-name xxxxxxxxxxxxxxxxxxxxxxxxx

    "Policies": {
        "AppCookieStickinessPolicies": [],
        "LBCookieStickinessPolicies": [],
        "OtherPolicies": [
            "ProxyProtocol-policy-1",
            "ELBSecurityPolicy-2016-08"
        ]
    },
    "BackendServerDescriptions": [
        {
            "InstancePort": <Port value configured for the HTTPS listener>,
            "PolicyNames": [
                "ProxyProtocol-policy-1"
            ]
        }
    ]
  2. Create a policy that enables the proxy protocol.

    aws elb create-load-balancer-policy \
    --load-balancer-name Load Balancer name \
    --policy-name Proxy Protocol policy name \
    --policy-type-name Type of Proxy Protocol Policy \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

    aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name  Load Balancer name \
    --instance-port Port number \
    --policy-names Proxy Protocol policy name

    Example:

    aws elb create-load-balancer-policy \
    --load-balancer-name xxxxxxxxxxxxxxxxxxxxxxxxx \
    --policy-name ProxyProtocol-policy-1 \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

    aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name xxxxxxxxxxxxxxxxxxxxxxxxx \
    --instance-port xxxxx \
    --policy-names ProxyProtocol-policy-1
  3. In the ingress-nginx namespace, enable the proxy protocol in the ingress-nginx-controller configmap by using the following command (a verification sketch follows the example output):

    kubectl edit cm ingress-nginx-controller -o yaml -n ingress-nginx

    Example command output:

    apiVersion: v1
    data:
     allow-snippet-annotations: "true"
     enable-underscores-in-headers: "true"
     http-snippet: |
       server {
         listen 8080;
         server_tokens off;
         return 308 https://$host$request_uri;
       }
     proxy-body-size: 250m
     proxy-read-timeout: "3600"
     proxy-real-ip-cidr: xx.xxx.x.x/xx
     proxy-send-timeout: "3600"
     server-name-hash-bucket-size: "512"
     server-tokens: "false"
     ssl-redirect: "false"
     use-forwarded-headers: "true"
     use-proxy-protocol: "true"
    kind: ConfigMap
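
After the proxy protocol is enabled on both the load balancer and the controller, you can confirm that real client IP addresses reach the NGINX Ingress Controller. The following check is a minimal sketch; the application host name is a placeholder.

  # Send a request through the load balancer (replace the host name with one of your application URLs)
  curl -k -I https://<application host name>/

  # Check the controller access log; the first field of each entry should show the real client IP
  kubectl logs -n ingress-nginx deployment/ingress-nginx-controller --tail=20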

To configure the virtual memory parameter for Elasticsearch

For all worker nodes in your Amazon EKS cluster, set the vm.max_map_count kernel parameter to 262144 before installing BMC Helix Platform services.

  1. In your Amazon EKS cluster, connect to the worker node through Secure Shell (SSH).
  2. Run the following commands on the worker node:

    sysctl -w vm.max_map_count=262144
    echo vm.max_map_count=262144 > /etc/sysctl.d/es-custom.conf
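
To confirm that the setting is applied on the worker node, the following commands are a minimal sketch; run them on the same node over SSH.

  # Reload the sysctl settings from /etc/sysctl.d and verify the value
  sudo sysctl --system
  sysctl vm.max_map_count
  # Expected output: vm.max_map_count = 262144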

 

Example of setting up an Amazon EKS cluster

The following example shows the steps for setting up an Amazon EKS cluster by using the AWS Cloud Shell:

  1. Install kubectl.
    See Installing or updating kubectl in the AWS documentation.

  2. Install eksctl.
    See Installing or updating eksctl in the AWS documentation.

  3. Create an Amazon EKS cluster from the AWS Cloud Shell.
    See Getting started with Amazon EKS – eksctl in the AWS documentation.

  4. Run the following eksctl command from the AWS Cloud Shell.

    eksctl create cluster \
    --name cluster name \
    --region region name \
    --version "1.xx" \
    --nodegroup-name node group name \
    --node-type type of node \
    --nodes-min minimum nodes count \
    --nodes-max maximum nodes count \
    --with-oidc \
    --ssh-access \
    --enable-ssm \
    --ssh-public-key eks-nodes \
    --asg-access \
    --external-dns-access \
    --alb-ingress-access \
    --managed

    After the eksctl create cluster command completes, a kubectl configuration file is created at /home/cloudshell-user/.kube/config.

  5. Copy the config file to your local system.
    Use this file while setting up BMC Deployment Engine.
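
As a minimal check, assuming the copied file is placed at ~/.kube/config on the BMC Deployment Engine system, you can verify connectivity to the new cluster as follows:

  # Point kubectl at the copied configuration file
  export KUBECONFIG=~/.kube/config

  # Confirm that the Amazon EKS worker nodes are reachable
  kubectl get nodes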

 

 

 
