Installing BMC Helix Service Management in an Amazon Elastic Kubernetes Service cluster
Reference installation architecture
The following image shows the reference logical architecture used by BMC to install BMC Helix Service Management in an Amazon EKS cluster:
The following AWS services are used:
- AWS Certificate Manager (ACM): handles the complexity of creating, storing, and renewing public and private SSL/TLS X.509 certificates and keys.
- Amazon Simple Storage Service (S3): an object storage service that is used to upload files to AWS and provides scalability, data availability, security, and performance.
- Route53: a highly available and scalable Domain Name System (DNS) web service.
Before you begin
- Make sure that you have a domain and have configured DNS for the BMC Helix Service Management applications so that you can access the applications by using URLs. BMC certifies the use of the Amazon Web Services (AWS) Route53 service to create the domain and the DNS configuration.
- Make sure that you create an SSL certificate so that the BMC Helix Service Management application URLs can support the HTTPS protocol. BMC certifies the use of the AWS Certificate Manager (ACM) service to create the wildcard SSL certificate.
- Review the system requirements for BMC Helix Service Management installation.
- Download the installation files and the container image access key from Electronic Product Download (EPD).
- Create your Harbor repository and synchronize the repository with the BMC Docker Trusted Registry (DTR).
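For illustration, the ACM wildcard-certificate prerequisite can be sketched with the AWS CLI as follows. The domain and region values are placeholders drawn from the examples later in this document; the actual request requires AWS credentials, so it is shown as a comment.

```shell
# Hypothetical sketch of requesting a wildcard certificate in ACM with DNS
# validation; DOMAIN and REGION are placeholders for your own values.
DOMAIN='*.helixonprem.com'
REGION='us-east-2'
# The request itself (requires AWS credentials and permissions):
#   aws acm request-certificate --domain-name "$DOMAIN" \
#       --validation-method DNS --region "$REGION"
echo "Request a certificate for $DOMAIN in $REGION"
```

After DNS validation completes, note the certificate ARN; you reference it later in the Ingress Controller service annotation.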
Process to install BMC Helix Service Management in an Amazon EKS cluster
The following image provides an overview of BMC Helix Service Management installation in an Amazon EKS cluster:
The following table lists the tasks to install BMC Helix Service Management in an Amazon EKS cluster:
Task | Action | Reference |
---|---|---|
1 | Create and set up an Amazon EKS cluster | |
a | Create a Kubernetes cluster by using the AWS EKS service. Important: BMC has certified using the default storage class gp3 from the Amazon Elastic Block Store (EBS) available in the Amazon EKS cluster. You can use the default storage class or create your own Amazon EBS storage class. Best Practice: We recommend that you use the AWS instance type m5 or m6 with CPU speed 3.5 GHz or higher. | Getting started with Amazon EKS in the AWS documentation How do I use persistent storage in Amazon EKS in the AWS documentation |
b | Enable the Kubernetes Metrics Server for Horizontal Pod Autoscaling feature. | Installing the Kubernetes Metrics Server in the AWS documentation |
c | Install and configure the Kubernetes NGINX Ingress Controller. | |
2 | Set up a database | |
a | Set up an external database for BMC Helix Innovation Suite. Important: The Aurora PostgreSQL 13.x database in AWS is supported. BMC has certified the Aurora PostgreSQL 13.3 database in AWS with the db.r6g.2xlarge instance class. If you use the Aurora PostgreSQL 13.3 database, the preferred instance class for a compact-size deployment is db.r6g.2xlarge. | Amazon Aurora supports PostgreSQL 13 in the AWS documentation |
b | Create a database administrator user and specify the following permissions for the user: Important: Make sure that you specify this database administrator user in the DATABASE_ADMIN_USER parameter while installing BMC Helix Innovation Suite and applications. | |
c | If Aurora replication is enabled, make sure that you use the endpoint or port of the Writer instance in the database host name. | |
3 | Set up BMC Deployment Engine | |
| Set up BMC Deployment Engine to call the relevant BMC Helix Innovation Suite installation pipelines that install the platform and applications. | |
4 | Install BMC Helix Platform services | |
a | Configure the Elasticsearch vm.max_map_count kernel parameter to meet the virtual memory requirements for the Elasticsearch installation that is part of the BMC Helix Platform services installation. | |
b | Install BMC Helix Platform services. Important: Use the BMC Deployment Engine system as a controller instance to install BMC Helix Platform services. | |
5 | Install BMC Helix Service Management | |
| Install BMC Helix Innovation Suite and applications. | |
Installing and configuring Kubernetes NGINX Ingress Controller
The NGINX Ingress Controller is a load balancer for your cluster. Install and configure Kubernetes NGINX Ingress Controller by performing the following tasks:
To install Kubernetes NGINX Ingress Controller
You need the deploy.yaml file to install Kubernetes NGINX Ingress Controller.
To download the deploy.yaml file, run any one of the following commands, depending on the NGINX Ingress Controller version that you want to install:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.5/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.3/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml

- In the deploy.yaml file, modify the ConfigMap with the following details:
Update the proxy-real-ip-cidr parameter with the virtual private cloud (VPC) Classless Inter-Domain Routing (CIDR) value that is used in the Kubernetes cluster, and update the listener port value.
Example:
apiVersion: v1
data:
allow-snippet-annotations: "true"
enable-underscores-in-headers: "true"
http-snippet: |
server {
listen 8080;
server_tokens off;
return 308 https://$host$request_uri;
}
proxy-body-size: 250m
proxy-read-timeout: "3600"
proxy-real-ip-cidr: xx.xxx.x.x/xx
proxy-send-timeout: "3600"
server-name-hash-bucket-size: "512"
server-tokens: "false"
ssl-redirect: "false"
use-forwarded-headers: "true"
use-proxy-protocol: "true"
kind: ConfigMap

Modify the AWS Certificate Manager (ACM) ID details as follows:
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:xxxxxxxxxxxxxxx:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx

Under the Service object, remove the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb and add service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http, as shown in the following example:
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
In your cluster, run the following command:
kubectl apply -f deploy.yaml

The NGINX Ingress Controller is deployed in the ingress-nginx namespace, and an external classic load balancer with TLS termination is created in AWS.
To get the address of the load balancer, run the following command:
kubectl get svc -n ingress-nginx

Example command output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.180.188 xxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxx.us-east-2.elb.amazonaws.com 80:31245/TCP,443:31285/TCP 6m4s
ingress-nginx-controller-admission ClusterIP 10.100.182.96

Make sure that the ingress-nginx-controller configmap has the following parameters configured, by using the following command:
kubectl get cm ingress-nginx-controller -o yaml -n ingress-nginx

Example:
apiVersion: v1
data:
allow-snippet-annotations: "true"
enable-underscores-in-headers: "true"
http-snippet: |
server {
listen 8080;
server_tokens off;
return 308 https://$host$request_uri;
}
proxy-body-size: 250m
proxy-read-timeout: "3600"
proxy-real-ip-cidr: xx.xxx.x.x/xx
proxy-send-timeout: "3600"
server-name-hash-bucket-size: "512"
server-tokens: "false"
ssl-redirect: "false"
use-forwarded-headers: "true"
use-proxy-protocol: "true"
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"allow-snippet-annotations":"true","http-snippet":"server {\n listen 2443;\n return 308 https://$host$request_uri;\n}\n","proxy-real-ip-cidr":"xx.xxx.x.x/xx","use-forwarded-headers":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx","app.kubernetes.io/version":"1.7.0"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"}}
creationTimestamp: "2023-10-06T17:31:32Z"
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.0
name: ingress-nginx-controller
namespace: ingress-nginx
resourceVersion: "104167057"
uid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

You can edit the config map by using the following command:
kubectl edit cm ingress-nginx-controller -o yaml -n ingress-nginx
To configure a DNS record for your domain
Configure a DNS record for your domain so that you can access the applications by using URLs.
- Navigate to your domain's hosted zone.
- Create a DNS A-type record for the domain to resolve URLs to the load balancer, as shown in the following example:
Record Name - *.helixonprem.com
Type - A
Value/Route traffic to
- Alias to Application and Classic Load Balancer
- Select the region - us-east-2
- Select the Classic LB - xxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxx.us-east-2.elb.amazonaws.com
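The console steps above can also be expressed with the AWS CLI. The following sketch writes the change batch for the wildcard alias A record; the hosted zone IDs and load balancer DNS name are placeholders that you must replace with your own values, and the final aws route53 call is shown as a comment because it requires AWS credentials.

```shell
# Hypothetical change batch for the wildcard alias A record; replace the
# placeholder IDs and DNS name with your own values.
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "*.helixonprem.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "ELB_HOSTED_ZONE_ID",
        "DNSName": "dualstack.CLASSIC_LB_DNS_NAME",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF
# Apply the change to your domain's hosted zone:
#   aws route53 change-resource-record-sets \
#       --hosted-zone-id DOMAIN_HOSTED_ZONE_ID \
#       --change-batch file://change-batch.json
```

Note that the AliasTarget hosted zone ID is the Classic Load Balancer's regional zone ID, not your domain's hosted zone ID.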
To configure the load balancer for the cluster
Configure the load balancer listener specifications by using the following steps:
- In the AWS console, select the load balancer created by Ingress Controller.
- Navigate to the Listeners tab.
- Make a note of the Instance Port value configured for the HTTPS listener.
- On the Listeners tab, click Edit.
- Update the first listener values:
  - Change the listener protocol from HTTPS to SSL (Secure TCP).
  - Map port 443 to the TCP instance protocol or the Ingress service port.
  - Make sure that the instance port has the same value that you noted in step 3.
- Update the second listener values:
  - Change the listener protocol to HTTP.
  - Map port 80 to the HTTP protocol or the Ingress service port.
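The listener edits above can also be made with the AWS CLI instead of the console. The following is a sketch only; the load balancer name, certificate ARN, and instance port are placeholders, and the aws elb calls are commented out because they modify live resources.

```shell
# Hypothetical CLI equivalent of the console listener edits; all values
# below are placeholders.
LB_NAME='my-ingress-elb'
ACM_ARN='arn:aws:acm:us-east-2:ACCOUNT_ID:certificate/CERT_ID'
INSTANCE_PORT=31285   # the HTTPS listener instance port noted in step 3
# Recreate the 443 listener as SSL (Secure TCP) with a TCP instance protocol:
#   aws elb delete-load-balancer-listeners \
#       --load-balancer-name "$LB_NAME" --load-balancer-ports 443
#   aws elb create-load-balancer-listeners --load-balancer-name "$LB_NAME" \
#       --listeners "Protocol=SSL,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=$INSTANCE_PORT,SSLCertificateId=$ACM_ARN"
echo "443 listener: SSL (Secure TCP) -> TCP instance port $INSTANCE_PORT"
```

Classic Load Balancer listeners cannot be edited in place through the CLI, which is why the sketch deletes and recreates the 443 listener.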
To enable the proxy protocol in the load balancer
Enable the proxy protocol in the classic load balancer to forward X-Forwarded-* headers.
Find the instance port value by using the following command:

aws elb describe-load-balancers --load-balancer-name <Load Balancer name>

Example command output:
aws elb describe-load-balancers --load-balancer-name xxxxxxxxxxxxxxxxxxxxxxxxx

"Policies": {
"AppCookieStickinessPolicies": [],
"LBCookieStickinessPolicies": [],
"OtherPolicies": [
"ProxyProtocol-policy-1",
"ELBSecurityPolicy-2016-08"
]
},
"BackendServerDescriptions": [
{
"InstancePort": <Port value configured for the HTTPS listener>,
"PolicyNames": [
"ProxyProtocol-policy-1"
]
}

Create a policy that enables the proxy protocol:
aws elb create-load-balancer-policy \
--load-balancer-name <Load Balancer name> \
--policy-name <Proxy Protocol policy name> \
--policy-type-name <type of Proxy Protocol policy> \
--policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

aws elb set-load-balancer-policies-for-backend-server \
--load-balancer-name <Load Balancer name> \
--instance-port <port number> \
--policy-names <Proxy Protocol policy name>

Example:
aws elb create-load-balancer-policy \
--load-balancer-name xxxxxxxxxxxxxxxxxxxxxxxxx \
--policy-name ProxyProtocol-policy-1 \
--policy-type-name ProxyProtocolPolicyType \
--policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

aws elb set-load-balancer-policies-for-backend-server \
--load-balancer-name xxxxxxxxxxxxxxxxxxxxxxxxx \
--instance-port xxxxx \
--policy-names ProxyProtocol-policy-1

In the ingress-nginx namespace, in the ingress-nginx-controller configmap, enable the proxy protocol by using the following command:
kubectl edit cm ingress-nginx-controller -o yaml -n ingress-nginx

Example command output:
apiVersion: v1
data:
allow-snippet-annotations: "true"
enable-underscores-in-headers: "true"
http-snippet: |
server {
listen 8080;
server_tokens off;
return 308 https://$host$request_uri;
}
proxy-body-size: 250m
proxy-read-timeout: "3600"
proxy-real-ip-cidr: xx.xxx.x.x/xx
proxy-send-timeout: "3600"
server-name-hash-bucket-size: "512"
server-tokens: "false"
ssl-redirect: "false"
use-forwarded-headers: "true"
use-proxy-protocol: "true"
kind: ConfigMap
To configure the virtual memory parameter for Elasticsearch
For all worker nodes in your Amazon EKS cluster, set the vm.max_map_count kernel parameter to 262144 before installing BMC Helix Platform services.
- In your Amazon EKS cluster, connect to each worker node through Secure Shell (SSH).
- Run the following commands on the worker node:
sysctl -w vm.max_map_count=262144
echo vm.max_map_count=262144 > /etc/sysctl.d/es-custom.conf
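After running the commands above on each worker node, you can confirm the setting with a small check such as the following generic Linux sketch (not a BMC-provided script):

```shell
# Check the current vm.max_map_count against the 262144 value that
# Elasticsearch requires.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
    echo "vm.max_map_count=$current meets the requirement"
else
    echo "vm.max_map_count=$current is below $required; rerun sysctl -w vm.max_map_count=$required"
fi
```

The echo line in the previous step persists the value in /etc/sysctl.d/es-custom.conf so that it survives a node reboot.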
Example of setting up an Amazon EKS cluster
The following example shows the steps for setting up an Amazon EKS cluster by using the AWS Cloud Shell:
- Install kubectl. See Installing or updating kubectl in the AWS documentation.
- Install eksctl. See Installing or updating eksctl in the AWS documentation.
- Create an Amazon EKS cluster from the AWS Cloud Shell. See Getting started with Amazon EKS – eksctl in the AWS documentation.
- Run the following eksctl command from the AWS Cloud Shell:
eksctl create cluster \
--name <cluster name> \
--region <region name> \
--version "1.xx" \
--nodegroup-name <node group name> \
--node-type <type of node> \
--nodes-min <minimum node count> \
--nodes-max <maximum node count> \
--with-oidc \
--ssh-access \
--enable-ssm \
--ssh-public-key eks-nodes \
--asg-access \
--external-dns-access \
--alb-ingress-access \
--managed

After the eksctl create cluster command is complete, a kubectl configuration file is created at /home/cloudshell-user/.kube/config.
- Copy the config file to your local system.
Use this file while setting up BMC Deployment Engine.
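Once the file is on your local system, you can point kubectl at it to confirm cluster access. This is a sketch; the path shown is the default kubectl location, and the kubectl call is commented out because it needs network access to the cluster.

```shell
# Use the copied kubeconfig; $HOME/.kube/config is the default kubectl path.
export KUBECONFIG="$HOME/.kube/config"
# Confirm that the cluster is reachable:
#   kubectl get nodes
echo "Using kubeconfig at $KUBECONFIG"
```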