Deploying BMC Helix IT Operations Management in a Google Kubernetes Engine cluster
Installation architecture
The following image shows the reference logical architecture used by BMC to install BMC Helix IT Operations Management in a GKE cluster:

Before you begin
- Make sure you have a domain and have configured DNS for the BMC Helix IT Operations Management applications so that you can access the applications by using URLs.
  BMC has certified a domain and DNS configuration created by using the LetsEncrypt service.
- Make sure that you create an SSL certificate so that the BMC Helix IT Operations Management application URLs can support the HTTPS protocol.
  BMC has certified wildcard SSL certificates with FQDN by using the AWS Certificate Manager (ACM) service.
- System requirements
- Downloading the deployment manager.
Process to install BMC Helix IT Operations Management in a GKE cluster
The following image provides an overview of the BMC Helix IT Operations Management installation in a GKE cluster:

The following table lists the tasks to install BMC Helix IT Operations Management in a GKE cluster:
| Task | Action | Reference |
|---|---|---|
| 1 | Create and set up a GKE cluster. | |
| a | Create a Kubernetes cluster by using the GKE service. Important: BMC has certified the Google Cloud Platform (GCP) Persistent Disk based default storage class standard-rwo available in the GKE cluster. You can use the default storage class or create your own storage class. | Creating a private cluster and Persistent volumes and dynamic provisioning in the Google Cloud documentation |
| b | Create a Google Cloud Platform (GCP) virtual machine instance to function as the controller instance for the BMC Helix common services installation. Important: Select the same region and zone as you specified for the GKE cluster. | Creating and starting a VM instance in the Google Cloud documentation |
| c | Create a Network Address Translation (NAT) gateway to enable traffic to your private GKE network. | Set up and manage network address translation with Cloud NAT in the Google Cloud documentation |
| d | Create a cloud storage bucket to store the installer files. | Create storage buckets in the Google Cloud documentation |
| e | Install and configure the Kubernetes NGINX Ingress Controller. | |
| 2 | Set up the NFS server for BMC Helix Operations Management and BMC Helix Continuous Optimization. | |
| 3 | Set up BMC Discovery for BMC Helix Operations Management and BMC Helix Continuous Optimization. | |
| 5 | Install the BMC Helix IT Operations Management platform and applications. | |
To install and configure Kubernetes Ingress Nginx Controller
The Ingress Controller is a load balancer for your cluster.
To install and configure Ingress Controller, perform the following tasks:
To install Kubernetes Ingress Nginx Controller
You need the deploy.yaml file to install Kubernetes NGINX Ingress Controller.
Based on your Kubernetes version, run one of the following commands to get the deploy.yaml file for the NGINX Ingress Controller:
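A hedged example of one such command, assuming you fetch the manifest from the upstream ingress-nginx repository; the controller version in the URL is illustrative, so pick a certified version from the table that follows:

```
# Example only: replace controller-v1.12.1 with the certified version for your Kubernetes version.
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.1/deploy/static/provider/cloud/deploy.yaml

# Apply the manifest to create the ingress-nginx namespace and the controller resources.
kubectl apply -f deploy.yaml
```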
The certified versions of the NGINX Ingress Controller with the Kubernetes and OpenShift orchestration platforms are as follows:

| NGINX Ingress Controller version | Supported Kubernetes version | OpenShift version |
|---|---|---|
| 1.7.0 and 1.11.5 | 1.25 | 4.12 |
| 1.8.1 and 1.11.5 | 1.25 and 1.26 | 4.13 |
| 1.9.6 and 1.11.5 | 1.26 and 1.27 | 4.13, 4.14, and 4.15 |
| 1.10.1 and 1.11.5 | 1.27 and 1.28 | 4.14 and 4.15 |
| 1.11.1 and 1.11.5 | 1.28, 1.29, and 1.30 | 4.15 |
| 1.11.2 and 1.11.5 | 1.30 | 4.15 and 4.16 |
| 1.11.4 and 1.11.5 | 1.31 | 4.17 |
| 1.12.1 | 1.32 | 4.18 |
| 1.12.4 | 1.33 | 4.19 |
| 1.14.0 and 1.13.x | 1.34 | 4.19 |

The Ingress Controller is deployed in the ingress-nginx namespace and an external load balancer is created. A network load balancer is provisioned in GCP by the Ingress Controller.
- To view the network load balancer, in the Google Cloud console, navigate to the Load balancing page.
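You can also confirm the controller service and the provisioned load balancer from the command line. This is a minimal check, assuming the controller service created by deploy.yaml is named ingress-nginx-controller:

```
# The EXTERNAL-IP column shows the address of the GCP network load balancer.
kubectl get svc -n ingress-nginx ingress-nginx-controller

# List the forwarding rules that GCP created for the load balancer.
gcloud compute forwarding-rules list
```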
To create a secret and configure Ingress Controller
For secure connections to a server, create a secret, and then add the certificate in Ingress Controller:
Create a secret from the trusted certificate and key by using the following command:
kubectl create secret tls my-tls-secret --cert=/path/to/cert.pem --key=/path/to/privkey.pem -n ingress-nginx
In the Ingress Controller deployment, in the args section, set the default certificate to my-tls-secret as shown in the following example:
--default-ssl-certificate=ingress-nginx/my-tls-secret
Set the ingress class value. For example:
--ingress-class=knginx
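The following excerpt is a hedged sketch of where the two arguments go, assuming the controller runs as the ingress-nginx-controller deployment created by deploy.yaml (open it with kubectl edit deployment -n ingress-nginx ingress-nginx-controller):

```
# Excerpt from the controller deployment; only the last two arguments are added.
# The namespace/name in --default-ssl-certificate must match the secret created earlier.
spec:
  template:
    spec:
      containers:
        - name: controller
          args:
            - /nginx-ingress-controller
            - --default-ssl-certificate=ingress-nginx/my-tls-secret
            - --ingress-class=knginx
```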
- Export the certificate and store the intermediate certificate R3 (r3-intermediate.cer) as a base64-encoded X.509 .cer file.
- Edit the certificate file, remove the newline characters, and save the file so that the intermediate certificate is on a single line (see the sketch after this list).
- During the BMC Helix Platform services deployment, in the infra.config file, add the intermediate certificate to the CLIENT_ROOT_CERT parameter.
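A minimal sketch of flattening the intermediate certificate to a single line; the output file name is an example:

```
# Remove all newline and carriage-return characters so that the certificate is on one line.
tr -d '\r\n' < r3-intermediate.cer > r3-intermediate-oneline.cer

# Use the contents of this file for the CLIENT_ROOT_CERT parameter in infra.config.
cat r3-intermediate-oneline.cer
```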
To update the Ingress ConfigMap
Customize the NGINX configuration by updating the Ingress ConfigMap.
Edit the Ingress ConfigMap by using the following command:
kubectl edit cm -n ingress-nginx ingress-nginx-controller
Specify the following parameter values as shown in the example:
data:
  enable-underscores-in-headers: "true"
  annotations-risk-level: Critical # only applicable for Ingress version 1.12.1
  proxy-body-size: 250m
  server-name-hash-bucket-size: "1024"
  ssl-redirect: "false"
  use-forwarded-headers: "true"
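The controller reloads its configuration automatically after the ConfigMap changes. A quick check, assuming the controller runs as the ingress-nginx-controller deployment in the ingress-nginx namespace:

```
# Look for a configuration reload entry after saving the ConfigMap.
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller | grep -i reload
```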
To set up the NFS server
- Provision a CentOS, Linux, RHEL, or Ubuntu virtual machine with the required disk space.
Run the following commands in the given order to set up the NFS:
sudo yum install -y nfs-utils
sudo systemctl start nfs-server rpcbind
sudo systemctl enable nfs-server rpcbind
sudo mkdir /data1
sudo chmod 777 /data1/
sudo vi /etc/exports
/data1 *(rw,sync,no_root_squash,insecure)
sudo exportfs -rav
Verify that the NFS export is accessible by using the following command:
showmount -e <NFS server IP address>
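Optionally, you can confirm that the export can be mounted from a client machine; the mount point used here is an example:

```
# Mount the exported directory on a test mount point, check it, and unmount.
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs <NFS server IP address>:/data1 /mnt/nfs-test
df -h /mnt/nfs-test
sudo umount /mnt/nfs-test
```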
Open the firewall access to the following ports:
- tcp:111
- tcp:2049
- tcp:20048
- tcp:36779
- tcp:39960
- tcp:46209
- tcp:48247
Use the following command as an example:
gcloud compute firewall-rules create nfs \
--allow=tcp:111,udp:111,tcp:2049,udp:2049 --target-tags=nfs
- Create a PSP and Kubernetes provisioner (see the sketch after this list).
- Configure the NFS folder and give permissions that are detailed in Example configuration files for BMC Helix Continuous Optimization.
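This documentation does not name a specific provisioner; one commonly used option is the NFS subdir external provisioner, shown here as a sketch only (the release name, namespace, and storage class name are examples):

```
# Install the NFS subdir external provisioner and point it at the export created earlier.
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace nfs-provisioner --create-namespace \
  --set nfs.server=<NFS server IP address> \
  --set nfs.path=/data1 \
  --set storageClass.name=nfs-client
```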
Where to go from here
Example of setting up a GKE cluster
The following example shows the procedure to set up a GKE cluster by using the Google Cloud console:
- In the Google Cloud console, navigate to the Google Kubernetes Engine page, and click Create.
  See Create a zonal cluster by using the Google Cloud console in the Google Cloud documentation.
- Complete the Cluster basics section as shown in the following image:

- Complete the Networking section as shown in the following image:

- Complete the Features section as shown in the following image:

After the GKE cluster is provisioned, scale the cluster by adding node pools.
See Add and manage node pools in the Google Cloud documentation.
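If you prefer the command line over the console, the following is a hedged equivalent; the cluster name, zone, machine type, and node counts are examples and not BMC-certified sizing:

```
# Create a private zonal GKE cluster (example values).
gcloud container clusters create bmc-helix-gke \
    --zone us-central1-a \
    --num-nodes 3 \
    --machine-type e2-standard-16 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28

# Scale the cluster by adding a node pool after it is provisioned.
gcloud container node-pools create helix-workers \
    --cluster bmc-helix-gke \
    --zone us-central1-a \
    --num-nodes 3 \
    --machine-type e2-standard-16
```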