Deploying BMC Helix IT Operations Management in an Amazon Elastic Kubernetes Service cluster
Reference installation architecture
The following image shows the reference logical architecture used by BMC to install BMC Helix IT Operations Management in an EKS cluster:
The following AWS services are used:
- AWS Certificate Manager (ACM)—Handles the complexity of creating, storing, and renewing public and private SSL/TLS X.509 certificates and keys.
- Amazon Simple Storage Service (S3)—An object storage service that provides scalability, data availability, security, and performance; it is used to upload files to AWS.
- Route53—A highly available and scalable Domain Name System (DNS) web service.
Before you begin
- Make sure you have a domain and have configured DNS for the BMC Helix IT Operations Management applications so that you can access the applications by using URLs.
BMC has certified the domain and DNS configuration created by using the Amazon Web Services (AWS) Route53 service.
- Make sure that you create an SSL certificate so that the BMC Helix IT Operations Management application URLs can support the HTTPS protocol.
BMC has certified wildcard SSL certificates with FQDN by using the AWS Certificate Manager (ACM) service.
- System requirements
- Downloading the deployment manager
- Setting up a Harbor registry in a local network and synchronizing it with BMC DTR
Process to install BMC Helix IT Operations Management in an EKS cluster
The following image provides an overview of the BMC Helix IT Operations Management installation in an EKS cluster:
The following table lists the tasks to install BMC Helix IT Operations Management in an EKS cluster:
To install and configure the Kubernetes Ingress Nginx Controller
The Ingress Controller is a load balancer for your cluster.
To install and configure the Kubernetes Ingress Nginx Controller, perform the following tasks:
To create an Ingress Nginx Controller instance
Install Helm 3.2.3 by using the following commands:
curl -O https://get.helm.sh/helm-v3.2.3-linux-amd64.tar.gz
tar -xzf helm-v3.2.3-linux-amd64.tar.gz
sudo cp ./linux-amd64/helm /usr/local/bin/
Install kubectl by using the following commands:
curl -LO https://dl.k8s.io/release/v1.21.10/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
chmod +x kubectl
sudo mv ./kubectl /usr/bin
- Ensure that the Docker client is installed.
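You can optionally confirm that the tools are available by checking their versions:
helm version
kubectl version --client
docker --version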
To install Kubernetes Ingress Nginx Controller
Install Kubernetes Ingress Nginx Controller 1.2.0 by using the deployment YAML.
Ingress Nginx Controller is deployed in the ingress-nginx namespace, and an external classic load balancer with TLS termination is created in AWS.
To get the TLS termination in the AWS load balancer (ELB), run the following command:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-1.2.0/deploy/static/provider/aws/deploy-tls-termination.yaml
Update the virtual private cloud (VPC) Classless Inter-Domain Routing (CIDR) details by editing the deploy-tls-termination.yaml file as shown in the following example:
apiVersion: v1
data:
  http-snippet: |
    server {
      listen 2443;
      return 308 https://$host$request_uri;
    }
  proxy-real-ip-cidr: 192.168.0.0/16
  use-forwarded-headers: "true"
Update the AWS Certificate Manager (ACM) certificate ARN as shown in the following example:
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:xxxxxxxxxxxxxxx:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx
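If you need to look up the certificate ARN, you can list the certificates in your account; for example, for the us-east-2 region used above:
aws acm list-certificates --region us-east-2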
In your cluster, run the following command:
kubectl apply -f deploy-tls-termination.yaml
To get the IP of the load balancer, run the following command:
kubectl get svc -n ingress-nginx
Example command output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.180.188 xxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxx.us-east-2.elb.amazonaws.com 80:32691/TCP,443:32317/TCP 6m4s
ingress-nginx-controller-admission ClusterIP 10.100.182.96
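To extract only the external hostname of the load balancer, you can use a jsonpath query:
kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'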
To configure a DNS record for your domain
Configure a DNS record for your domain so that you can access the applications by using URLs.
- Navigate to your domain-hosted zone.
- Create a DNS A type record for the domain to resolve URLs to the load balancer as shown in the following example:
Record Name - *.helixonprem.com
Type - A
Value/Route traffic to
- Alias to Application and Classic LoadBalancer
- Select the region - us-east-2
- Select the Classic LB - xxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxx.us-east-2.elb.amazonaws.com
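Alternatively, you can create the alias record with the AWS CLI. The following is a minimal sketch in which the hosted zone IDs and the load balancer DNS name are placeholders; the load balancer's own hosted zone ID is returned as CanonicalHostedZoneNameID by the aws elb describe-load-balancers command:
aws route53 change-resource-record-sets \
--hosted-zone-id <your Route53 hosted zone ID> \
--change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "*.helixonprem.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<hosted zone ID of the Classic load balancer>",
        "DNSName": "<load balancer DNS name>",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'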
To configure the load balancer for the cluster
Configure the load balancer listener specifications by using the following steps:
- In the AWS console, select the load balancer created by Ingress Controller.
- Navigate to the Listeners tab.
- Make a note of the Instance Port value configured for the HTTPS listener.
- On the Listeners tab, click Edit.
- Update the Load Balancer Protocol value from HTTPS to SSL (Secure TCP).
- Make sure that instance port has the same value that you noted in step 3.
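If you prefer the AWS CLI over the console, the same change can be sketched by recreating the HTTPS listener with the SSL (Secure TCP) protocol; the load balancer name, instance port, and certificate ARN below are placeholders:
aws elb delete-load-balancer-listeners \
--load-balancer-name <load balancer name> \
--load-balancer-ports 443
aws elb create-load-balancer-listeners \
--load-balancer-name <load balancer name> \
--listeners "Protocol=SSL,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=<instance port noted in step 3>,SSLCertificateId=<ACM certificate ARN>"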
To enable the proxy protocol in the load balancer
Enable the proxy protocol in the classic load balancer to forward X-Forwarded-* headers.
Find the instance port value by using the following command:
aws elb describe-load-balancers --load-balancer-name <load balancer name>
Example command output:
aws elb describe-load-balancers --load-balancer-name xxxxxxxxxxxxxxxxxxxxxxxxx
"Policies": {
    "AppCookieStickinessPolicies": [],
    "LBCookieStickinessPolicies": [],
    "OtherPolicies": [
        "ProxyProtocol-policy-1",
        "ELBSecurityPolicy-2016-08"
    ]
},
"BackendServerDescriptions": [
    {
        "InstancePort": <Port value configured for the HTTPS listener>,
        "PolicyNames": [
            "ProxyProtocol-policy-1"
        ]
    }
]
Create a policy that enables the proxy protocol.
aws elb create-load-balancer-policy \
--load-balancer-name <load balancer name> \
--policy-name <proxy protocol policy name> \
--policy-type-name ProxyProtocolPolicyType \
--policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server \
--load-balancer-name <load balancer name> \
--instance-port <port number> \
--policy-names <proxy protocol policy name>
For example:
aws elb create-load-balancer-policy \
--load-balancer-name xxxxxxxxxxxxxxxxxxxxxxxxx \
--policy-name ProxyProtocol-policy-1 \
--policy-type-name ProxyProtocolPolicyType \
--policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server \
--load-balancer-name xxxxxxxxxxxxxxxxxxxxxxxxx \
--instance-port xxxxx \
--policy-names ProxyProtocol-policy-1
In the ingress-nginx namespace, in the ingress-nginx-controller configmap, enable the proxy protocol by using the following command:
kubectl edit cm ingress-nginx-controller -o yaml -n ingress-nginx
Example command output:
apiVersion: v1
data:
  enable-underscores-in-headers: "true"
  http-snippet: |
    server {
      listen 2443;
      return 308 https://$host$request_uri;
    }
  proxy-real-ip-cidr: 192.168.0.0/16
  server-name-hash-bucket-size: "1024"
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"
kind: ConfigMap
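To verify that the setting was applied, you can query the configmap value, which should print true:
kubectl get cm ingress-nginx-controller -n ingress-nginx -o jsonpath='{.data.use-proxy-protocol}'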
To set up the NFS server
- Provision a CentOS virtual machine with the required disk space.
Run the following commands on the NFS server in the given order to set up the NFS:
sudo yum install -y nfs-utils
sudo systemctl start nfs-server rpcbind
sudo systemctl enable nfs-server rpcbind
sudo mkdir /data1
sudo chmod 777 /data1/
Run the following command to create the file /etc/exports with the content /data1 *(rw,sync,no_root_squash,insecure):
sudo vi /etc/exports
/data1 *(rw,sync,no_root_squash,insecure)
Run the following command to export the mount:
sudo exportfs -rav
Verify that the mount is accessible by running the following command:
showmount -e <NFS server IP address>
Open the firewall access to the following ports for both TCP (Transmission Control Protocol) and UDP (User Datagram Protocol):
- 111
- 2049
- 20048
- 36779
- 39960
- 46209
- 48247
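For example, if the CentOS server uses firewalld, the ports can be opened as follows:
for port in 111 2049 20048 36779 39960 46209 48247; do
  sudo firewall-cmd --permanent --add-port=${port}/tcp
  sudo firewall-cmd --permanent --add-port=${port}/udp
done
sudo firewall-cmd --reload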
If the Kubernetes cluster has PSP (PodSecurityPolicy) enabled, and the default PSP is restricted, you must provision the use of NFS volumes by creating a YAML file as shown in the following example:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-nfs-volumes
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - 'nfs'
    - 'secret'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:allow_nfs_clusterrole
rules:
  - apiGroups:
      - extensions
    resources:
      - podsecuritypolicies
    resourceNames:
      - allow-nfs-volumes # the PSP we are giving access to
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp:allow_nfs_clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole # Must be Role or ClusterRole
  name: psp:allow_nfs_clusterrole # The name of the ClusterRole to bind to
subjects:
  - kind: ServiceAccount
    name: nfs-subdir-external-provisioner
    namespace: default # set the correct namespace where nfs-subdir-external-provisioner will be installed
Run the following command to apply the YAML file:
kubectl apply -f <path of yaml file>
Run the following command to add the nfs-subdir-external-provisioner repository:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
Run the following command to create the NFS provisioner:
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=<IP address of the NFS server> \
--set nfs.path=/data1 \
--set storageClass.name=<name of storage class>
Example:
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=192.168.1.100 \
--set nfs.path=/data1 \
--set storageClass.name=nfs-storage-class
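To verify that the provisioner works, you can create a small test PersistentVolumeClaim and confirm that it reaches the Bound state; the claim name nfs-test-claim and the 1Mi size in this sketch are hypothetical examples:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim # hypothetical test claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage-class # must match the storage class created above
  resources:
    requests:
      storage: 1Mi
Apply the file with kubectl apply -f <path of yaml file>, and then check the status by running kubectl get pvc nfs-test-claim.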