Deploying BMC Helix IT Operations Management in a Google Kubernetes Engine cluster
You can install BMC Helix IT Operations Management in a Google Kubernetes Engine (GKE) cluster if you want to use GKE to manage your Kubernetes platform.
Reference installation architecture
The following image shows the reference logical architecture used by BMC to install BMC Helix IT Operations Management in a GKE cluster:

Before you begin
- Make sure you have a domain and have configured DNS for the BMC Helix IT Operations Management applications so that you can access the applications by using URLs.
BMC has certified a domain and DNS configuration created by using the LetsEncrypt service.
- Make sure that you create an SSL certificate so that the BMC Helix IT Operations Management application URLs can support the HTTPS protocol.
BMC has certified wildcard SSL certificates with FQDN by using the AWS Certificate Manager (ACM) service.
- System requirements
- Downloading the deployment manager
- Setting up a Harbor registry in a local network and synchronizing it with BMC DTR
Process to install BMC Helix IT Operations Management in a GKE cluster
The following image provides an overview of the BMC Helix IT Operations Management installation in a GKE cluster:

The following table lists the tasks to install BMC Helix IT Operations Management in a GKE cluster:
| Task | Action | Reference |
|---|---|---|
| 1 | Create and set up a GKE cluster | |
| a | Create a Kubernetes cluster by using the GKE service. Important: BMC has certified using the Google Cloud Platform (GCP) Persistent Disk based default storage class standard-rwo available in the GKE Cluster. You can use the default storage class or create your own storage class. | Creating a private cluster in the Google Cloud documentation Persistent volumes and dynamic provisioning in the Google Cloud documentation |
| b | Create a Google Cloud Platform (GCP) virtual machine instance to function as the controller instance for BMC Helix common services installation. Important: Select the same region and zone as you specified in the GKE cluster. | Creating and starting a VM instance in the Google Cloud documentation |
| c | Create a Network Address Translation (NAT) gateway to enable traffic to your private GKE network. | Set up and manage network address translation with Cloud NAT in the Google Cloud documentation |
| d | Create a cloud storage bucket to store installer files. | Create storage buckets in the Google Cloud documentation |
| e | Install and configure the Kubernetes Ingress Nginx Controller. To know the certified versions of the NGINX Ingress Controller with the Kubernetes and OpenShift orchestration platforms, see Deploying and configuring the ingress controller for OpenShift or Kubernetes. | To install and configure Kubernetes Ingress Nginx Controller |
| 2 | Set up the NFS server | |
| | Set up the NFS server for BMC Helix Operations Management and BMC Helix Continuous Optimization. | To set up the NFS server |
| 3 | Set up BMC Discovery | |
| | Set up BMC Discovery for BMC Helix Operations Management and BMC Helix Continuous Optimization. | Setting up BMC Discovery |
| 4 | Install BMC Helix IT Operations Management | |
| | Install the BMC Helix IT Operations Management platform and applications. | Deploying BMC Helix IT Operations Management |
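The cluster-setup tasks in step 1 can also be scripted. The following dry-run sketch shows one plausible sequence of gcloud calls for steps 1a through 1d; all names, zones, machine types, and capacities are placeholder assumptions, not BMC-certified values. Remove the `echo` prefix to execute the commands.

```shell
# Dry-run sketch of steps 1a-1d (placeholder names and zones; not
# BMC-certified values). RUN="echo" prints each command instead of
# running it; set RUN="" to execute for real.
RUN="echo"

# 1a: Create a private GKE cluster.
$RUN gcloud container clusters create helix-gke \
  --zone us-central1-a --num-nodes 3 \
  --enable-private-nodes --enable-ip-alias

# 1b: Create the controller VM in the same region and zone as the cluster.
$RUN gcloud compute instances create helix-controller \
  --zone us-central1-a --machine-type e2-standard-4

# 1c: Create a Cloud Router and Cloud NAT gateway for the private network.
$RUN gcloud compute routers create helix-router --region us-central1
$RUN gcloud compute routers nats create helix-nat \
  --router helix-router --region us-central1 \
  --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges

# 1d: Create a storage bucket for the installer files.
$RUN gcloud storage buckets create gs://helix-installers --location us-central1
```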
To install and configure Kubernetes Ingress Nginx Controller
The Ingress Controller is a load balancer for your cluster.
To install and configure Ingress Controller, perform the following tasks:
- Install Kubernetes Ingress Nginx Controller.
- Create a secret and configure Ingress Controller.
- Update the Ingress ConfigMap.
To install Kubernetes Ingress Nginx Controller
- You need the deploy.yaml file to install the Kubernetes NGINX Ingress Controller.
Based on your Kubernetes version, run one of the following commands to get the deploy.yaml file for the NGINX Ingress Controller.
The certified versions of the NGINX Ingress Controller with the Kubernetes and OpenShift orchestration platforms are as follows:

| NGINX Ingress Controller version | Supported Kubernetes version | OpenShift version |
|---|---|---|
| 1.14.3 | 1.31, 1.32, 1.33, 1.34, and 1.35 | 4.18, 4.19, and 4.20 |
| 1.13.7 | 1.29 and 1.30 | 4.16 and 4.17 |
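As one illustration of fetching and applying the manifest, the following sketch uses the upstream kubernetes/ingress-nginx release layout; the URL pattern is an assumption based on that project's convention, not a BMC-documented location. Pick the controller version certified for your Kubernetes release.

```shell
# Sketch only: the release-asset URL below follows the upstream
# kubernetes/ingress-nginx convention (an assumption, not a
# BMC-documented location). Choose the certified version for your
# Kubernetes release, per the certification matrix.
VERSION="v1.13.7"   # certified for Kubernetes 1.29 and 1.30
DEPLOY_URL="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-${VERSION}/deploy/static/provider/cloud/deploy.yaml"

# Apply the manifest (requires cluster access, so only printed here):
echo "kubectl apply -f ${DEPLOY_URL}"
```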
Ingress Controller is deployed in the ingress-nginx namespace and an external load balancer is created. A network load balancer is provisioned in GCP by the Ingress Controller.
- To view the network load balancer, in the Google Cloud Console, navigate to the LOAD BALANCERS console.
To create a secret and configure Ingress Controller
For secure connections to a server, create a secret, and then add the certificate to the Ingress Controller:
- Create a secret from the trusted certificate and key by using the following command:
kubectl create secret tls my-tls-secret --cert=/path/to/cert.pem --key=/path/to/privkey.pem -n default
- In the Ingress Controller, in the args section, set the default certificate to my-tls-secret as shown in the following example:
--default-ssl-certificate=ingress-nginx/my-tls-secret
- Set the ingress class value. For example:
--ingress-class=knginx
- Export the certificate and store the intermediate certificate R3 (r3-intermediate.cer) as a base 64 encoded X.509 .cer file.
- Edit the certificate file, remove the line breaks, and save the file so that the intermediate certificate is on a single line.
- During the BMC Helix Platform services deployment, in the infra.config file, in the CLIENT_ROOT_CERT parameter, add the intermediate certificate.
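An Ingress resource that uses the class and secret configured above might look like the following sketch. The host name and backend service are placeholders, not BMC-provided values:

```yaml
# Hypothetical Ingress using the knginx class and my-tls-secret from
# the steps above; host and service names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
spec:
  ingressClassName: knginx
  tls:
    - hosts:
        - apps.example.com
      secretName: my-tls-secret
  rules:
    - host: apps.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 443
```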
To update the Ingress ConfigMap
Customize the NGINX configuration by updating the Ingress ConfigMap.
Edit the Ingress ConfigMap by using the following command:
kubectl edit cm -n ingress-nginx ingress-nginx-controller
Specify the following parameter values as shown in the example:
data:
  enable-underscores-in-headers: "true"
  annotations-risk-level: Critical
  proxy-body-size: 250m
  server-name-hash-bucket-size: "1024"
  ssl-redirect: "false"
  use-forwarded-headers: "true"

Note: The annotations-risk-level parameter is applicable only for Ingress version 1.12.1.
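Applied together, the edited ConfigMap might look like the following complete manifest. This is a sketch; the metadata name and namespace match the defaults used in the edit command above, and all values are the ones from this procedure:

```yaml
# Sketch of the resulting ConfigMap; name and namespace follow the
# ingress-nginx defaults referenced in the kubectl edit command above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-underscores-in-headers: "true"
  annotations-risk-level: Critical   # only applicable for Ingress version 1.12.1
  proxy-body-size: 250m
  server-name-hash-bucket-size: "1024"
  ssl-redirect: "false"
  use-forwarded-headers: "true"
```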
To set up the NFS server
- Provision a CentOS, Linux, RHEL, or Ubuntu virtual machine with the required disk space.
Run the following commands in the given order to set up the NFS server:
sudo yum install -y nfs-utils
sudo systemctl start nfs-server rpcbind
sudo systemctl enable nfs-server rpcbind
sudo mkdir /data1
sudo chmod 777 /data1/
sudo vi /etc/exports
/data1 *(rw,sync,no_root_squash,insecure)
sudo exportfs -rav
Verify that the mount is accessible by using the following command:
showmount -e <mount IP address>
Open the firewall access to the following ports:
- tcp:111
- tcp:2049
- tcp:20048
- tcp:36779
- tcp:39960
- tcp:46209
- tcp:48247
Use the following command as an example:
gcloud compute firewall-rules create nfs \
--allow=tcp:111,udp:111,tcp:2049,udp:2049 --target-tags=nfs
- Create a PSP and Kubernetes provisioner.
- Configure the NFS folder and give permissions that are detailed in Example configuration files for BMC Helix Continuous Optimization.
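The procedure uses a Kubernetes provisioner for the NFS share; as a minimal illustration of how the export above is consumed from the cluster, a statically provisioned PersistentVolume might look like the following sketch. The server IP, capacity, and name are placeholder assumptions:

```yaml
# Sketch of a static PersistentVolume backed by the /data1 export
# configured above; the server IP and capacity are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-data1
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10
    path: /data1
  persistentVolumeReclaimPolicy: Retain
```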
Where to go from here
Performing the post deployment procedures
Example of setting up a GKE cluster
The following example shows the procedure to set up a GKE cluster by using the Google Cloud Console:
- In the Google Cloud console, navigate to the Google Kubernetes Engine page, and click Create.
See Create a zonal cluster by using the Google Cloud console in the Google Cloud documentation.
- Complete the Cluster basics section as shown in the following image:

- Complete the Networking section as shown in the following image:

- Complete the Features section as shown in the following image:

After the GKE cluster is provisioned, scale the cluster by adding node pools.
See Add and manage node pools in Google Cloud documentation.