Deploying BMC Helix IT Operations Management in a Google Kubernetes Engine cluster


You can install BMC Helix IT Operations Management in a Google Kubernetes Engine (GKE) cluster if you want to use GKE to manage your Kubernetes platform.

Important

BMC provides general guidelines to install BMC Helix IT Operations Management in a Google Kubernetes Engine cluster based on the following reference architecture used by BMC. You can choose any alternative architecture or installation options on this platform. However, BMC does not provide support for alternative options.


 

Installation architecture

The following image shows the reference logical architecture used by BMC to install BMC Helix IT Operations Management in a GKE cluster:


(Reference architecture diagram)

Before you begin

  • Make sure that you have a domain and have configured DNS for the BMC Helix IT Operations Management applications so that you can access the applications by using URLs.
    BMC has certified a domain and DNS configuration created by using the Let's Encrypt service.
  • Make sure that you create an SSL certificate so that the BMC Helix IT Operations Management application URLs can support the HTTPS protocol.
    BMC has certified wildcard SSL certificates with an FQDN created by using the AWS Certificate Manager (ACM) service.
  • System requirements
  • Downloading the deployment manager
  • Setting up a Harbor registry in a local network and synchronizing it with BMC DTR

    Important

    Google Cloud Artifact Registry is not supported for BMC Helix IT Operations Management installation.


 

Process to install BMC Helix IT Operations Management in a GKE cluster

The following image provides an overview of the BMC Helix IT Operations Management installation in a GKE cluster:

(Installation process overview diagram)

The following tasks install BMC Helix IT Operations Management in a GKE cluster:

Task 1: Create and set up a GKE cluster

  a. Create a Kubernetes cluster by using the GKE service.
     Important: BMC has certified the Google Cloud Platform (GCP) Persistent Disk based default storage class standard-rwo that is available in the GKE cluster. You can use the default storage class or create your own storage class.
  b. Create a Google Cloud Platform (GCP) virtual machine instance to function as the controller instance for the BMC Helix common services installation.
     Important: Select the same region and zone that you specified for the GKE cluster.
     Reference: Creating and starting a VM instance in the Google Cloud documentation
  c. Create a Network Address Translation (NAT) gateway to enable traffic to your private GKE network (see the gcloud example after this list).
  d. Create a cloud storage bucket to store the installer files (see the gcloud example after this list).
     Reference: Create storage buckets in the Google Cloud documentation
  e. Install and configure the Kubernetes Ingress Nginx Controller.
     Reference: For the certified versions of the NGINX Ingress Controller with the Kubernetes and OpenShift orchestration platforms, see Deploying and configuring the ingress controller for OpenShift or Kubernetes.

Task 2: Set up the NFS server

  Set up the NFS server for BMC Helix Operations Management and BMC Helix Continuous Optimization.

Task 3: Set up BMC Discovery

  Set up BMC Discovery for BMC Helix Operations Management and BMC Helix Continuous Optimization.

Task 5: Install BMC Helix IT Operations Management

  Install the BMC Helix IT Operations Management platform and applications.

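For tasks 1c and 1d in the preceding list, the NAT gateway and the storage bucket can also be created from the gcloud CLI. The following is a minimal sketch with placeholder names, assuming the default VPC network and the us-central1 region; adjust the network, region, and names to match your cluster:

    # Create a Cloud Router and a NAT gateway for the private GKE network
    gcloud compute routers create helix-router \
      --network default \
      --region us-central1

    gcloud compute routers nats create helix-nat \
      --router helix-router \
      --region us-central1 \
      --auto-allocate-nat-external-ips \
      --nat-all-subnet-ip-ranges

    # Create a Cloud Storage bucket to hold the installer files (bucket name is a placeholder)
    gcloud storage buckets create gs://helix-installer-files --location us-central1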

 


To install and configure Kubernetes Ingress Nginx Controller

The Ingress Controller is a load balancer for your cluster.

To install and configure Ingress Controller, perform the following tasks:

  1. Install Kubernetes Ingress Nginx Controller.
  2. Create a secret and configure Ingress Controller.
  3. Update the Ingress ConfigMap.


To install and configure Kubernetes Ingress Nginx Controller

  1. You need the deploy.yaml file to install the Kubernetes NGINX Ingress Controller.
    Based on your Kubernetes version, run the appropriate command to get the deploy.yaml file for the NGINX Ingress Controller (see the example command after these steps).
    The certified versions of the NGINX Ingress Controller with the Kubernetes and OpenShift orchestration platforms are as follows:

    NGINX Ingress Controller version | Supported Kubernetes version | OpenShift version
    1.7.0 and 1.11.5   | 1.25                 | 4.12
    1.8.1 and 1.11.5   | 1.25 and 1.26        | 4.13
    1.9.6 and 1.11.5   | 1.26 and 1.27        | 4.13, 4.14, and 4.15
    1.10.1 and 1.11.5  | 1.27 and 1.28        | 4.14 and 4.15
    1.11.1 and 1.11.5  | 1.28, 1.29, and 1.30 | 4.15
    1.11.2 and 1.11.5  | 1.30                 | 4.15 and 4.16
    1.11.4 and 1.11.5  | 1.31                 | 4.17
    1.12.1             | 1.32                 | 4.18
    1.12.4             | 1.33                 | 4.19
    1.14.0 and 1.13.x  | 1.34                 | 4.19

    The Ingress Controller is deployed in the ingress-nginx namespace, and a network load balancer is provisioned for it in GCP.

  2. To view the network load balancer, navigate to the Load balancing page in the Google Cloud console.
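The exact download command in step 1 depends on the controller version that you choose. The following is a minimal sketch, assuming the cloud provider manifest from the upstream kubernetes/ingress-nginx project; the controller version in the URL is only an example and should be replaced with a certified version from the table:

    # Download the deploy.yaml file for the chosen controller version (1.11.5 shown as an example)
    wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.5/deploy/static/provider/cloud/deploy.yaml

    # Apply the manifest to create the ingress-nginx namespace and the controller
    kubectl apply -f deploy.yaml

    # Verify that the controller pod is running and that the LoadBalancer service has an external IP
    kubectl get pods,svc -n ingress-nginx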


To create a secret and configure Ingress Controller

For secure connections to a server, create a secret, and then add the certificate to the Ingress Controller:

  1. Create a secret from the trusted certificate and key by using the following command:

    kubectl create secret tls my-tls-secret --cert=/path/to/cert.pem --key=/path/to/privkey.pem -n default

  2. In the Ingress Controller, in the args section, set the default certificate to my-tls-secret as shown in the following example:

    --default-ssl-certificate=ingress-nginx/my-tls-secret

  3. Set the ingress class value.
    For example:

    --ingress-class=knginx

    Important

    Make sure that you set the same ingress class value in the configs/infra.config file during the BMC Helix Platform services deployment.

    The ingress class value is used by the INGRESS_CLASS parameter in the HELIX_ONPREM_DEPLOYMENT pipeline during BMC Helix Service Management installation.

  4. Export the certificate and store the intermediate certificate R3 (r3-intermediate.cer) as a base64-encoded X.509 .cer file.
  5. Edit the certificate file, remove the newline characters, and save the intermediate certificate as a single line (see the example after these steps).
  6. During the BMC Helix Platform services deployment, in the infra.config file, in the CLIENT_ROOT_CERT parameter, add the intermediate certificate.
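The following is a minimal sketch of steps 2 through 5, assuming the controller Deployment and container names from the upstream deploy.yaml manifest; the single-line output file name is a placeholder. Note that the --default-ssl-certificate flag uses the namespace/secret-name format, so the secret must exist in the namespace that the flag references:

    # Steps 2 and 3: edit the controller Deployment and add the flags to the controller container args
    kubectl edit deployment ingress-nginx-controller -n ingress-nginx

    args:
      - /nginx-ingress-controller
      - --default-ssl-certificate=ingress-nginx/my-tls-secret
      - --ingress-class=knginx

    # Step 5: collapse the intermediate certificate to a single line (output file name is a placeholder)
    tr -d '\n' < r3-intermediate.cer > r3-intermediate-oneline.cer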


To update the Ingress ConfigMap

Customize the NGINX configuration by updating the Ingress ConfigMap.

  1. Edit the Ingress ConfigMap by using the following command:

    kubectl edit cm -n ingress-nginx ingress-nginx-controller

  2. Specify the following parameter values as shown in the example:

    data:
      enable-underscores-in-headers: "true"
      annotations-risk-level: Critical   # applicable only for Ingress version 1.12.1
      proxy-body-size: 250m
      server-name-hash-bucket-size: "1024"
      ssl-redirect: "false"
      use-forwarded-headers: "true"
 

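To confirm the ConfigMap update, you can view the saved values and check that the controller reloaded its configuration. These are generic kubectl checks, not BMC-specific steps:

    # View the saved ConfigMap values
    kubectl get cm ingress-nginx-controller -n ingress-nginx -o yaml

    # Check the controller log for a configuration reload message
    kubectl logs -n ingress-nginx deploy/ingress-nginx-controller | grep -i reload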

To set up the NFS server

  1. Provision a Linux (CentOS, RHEL, or Ubuntu) virtual machine with the required disk space.
  2. Run the following commands in the given order to set up the NFS server:

    # Install and enable the NFS server packages (yum shown for CentOS/RHEL)
    sudo yum install -y nfs-utils
    sudo systemctl start nfs-server rpcbind
    sudo systemctl enable nfs-server rpcbind

    # Create the shared directory
    sudo mkdir /data1
    sudo chmod 777 /data1/

    # Add the following export entry to /etc/exports
    sudo vi /etc/exports
    /data1 *(rw,sync,no_root_squash,insecure)

    # Re-export the NFS shares
    sudo exportfs -rav


  3. Verify that the mount is accessible by using the following command:

    showmount -e <mount IP address>


  4. Open the firewall access to the following ports:

    • tcp:111
    • tcp:2049
    • tcp:20048
    • tcp:36779
    • tcp:39960
    • tcp:46209
    • tcp:48247


    Use the following command as an example:

     gcloud compute firewall-rules create nfs \
    --allow=tcp:111,udp:111,tcp:2049,udp:2049 --target-tags=nfs


  5. Create a PSP and a Kubernetes provisioner (see the provisioner sketch after these steps).
  6. Configure the NFS folder and grant the permissions that are detailed in Example configuration files for BMC Helix Continuous Optimization.
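The Kubernetes provisioner in step 5 can be any dynamic NFS provisioner. The following is a minimal sketch using the community nfs-subdir-external-provisioner Helm chart, which is an assumption rather than a BMC-certified choice; replace the server address, export path, and storage class name with your own values:

    # Register the provisioner chart repository
    helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    helm repo update

    # Install the provisioner and point it at the NFS export created above
    helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
      --namespace nfs-provisioner --create-namespace \
      --set nfs.server=<NFS server IP address> \
      --set nfs.path=/data1 \
      --set storageClass.name=nfs-client

    # Confirm that the storage class was created
    kubectl get storageclass nfs-client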


Where to go from here

Performing the post deployment procedures


Example of setting up a GKE cluster

The following example shows the procedure to set up a GKE cluster by using the Google Cloud console:

  1. In the Google Cloud console, navigate to the Google Kubernetes Engine page, and click Create.
    See Create a zonal cluster by using the Google Cloud console in Google Cloud documentation.

  2. Complete the Cluster basics section as shown in the following image:
    (Screenshot: Create cluster section)
  3. Complete the Networking section as shown in the following image:
    (Screenshot: Networking section)
  4. Complete the Features section as shown in the following image:
    (Screenshot: Features section)
  5. After the GKE cluster is provisioned, scale the cluster by adding node pools.
    See Add and manage node pools in Google Cloud documentation.
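As an alternative to the Google Cloud console, the same cluster can be created from the gcloud CLI. The following is a minimal sketch; the cluster name, zone, machine type, node counts, and disk size are placeholders and do not represent BMC-certified sizing:

    # Create a zonal GKE cluster
    gcloud container clusters create helix-gke-cluster \
      --zone us-central1-a \
      --machine-type e2-standard-16 \
      --num-nodes 3 \
      --disk-size 200 \
      --enable-ip-alias

    # Add a node pool to scale the cluster after it is provisioned
    gcloud container node-pools create helix-pool-1 \
      --cluster helix-gke-cluster \
      --zone us-central1-a \
      --machine-type e2-standard-16 \
      --num-nodes 3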


 

