Deploying BMC Helix IT Operations Management in a Google Kubernetes Engine cluster


You can install BMC Helix IT Operations Management in a Google Kubernetes Engine (GKE) cluster if you want to use GKE to manage your Kubernetes platform.

Important

BMC provides general guidelines to install BMC Helix IT Operations Management in a Google Kubernetes Engine cluster based on the following reference architecture used by BMC. You can choose any alternative architecture or installation options on this platform. However, BMC does not provide support for alternative options.

 

Reference installation architecture

The following image shows the reference logical architecture used by BMC to install BMC Helix IT Operations Management in a GKE cluster:

[Image: Reference logical architecture for installing BMC Helix IT Operations Management in a GKE cluster]

Before you begin

  • Make sure that you have a domain and have configured DNS for the BMC Helix IT Operations Management applications so that you can access the applications by using URLs.
    BMC has certified a domain and DNS configuration created by using the Let's Encrypt service.
  • Make sure that you create an SSL certificate so that the BMC Helix IT Operations Management application URLs can support the HTTPS protocol (see the verification example after this list).
    BMC has certified wildcard SSL certificates with an FQDN created by using the AWS Certificate Manager (ACM) service.
  • Review System-requirements.
  • Complete the steps in Downloading-the-deployment-manager.
  • Complete the steps in Setting-up-a-Harbor-registry-in-a-local-network-and-synchronizing-it-with-BMC-DTR.

    Important

    Google Cloud Artifact Registry is not supported for BMC Helix IT Operations Management installation.
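You can verify the DNS and certificate prerequisites from any workstation. The following is a minimal sketch, assuming a hypothetical application FQDN of helix.example.com; substitute your own domain:

    # Confirm that the application FQDN resolves to the expected address
    nslookup helix.example.com

    # Confirm that HTTPS responds and inspect the certificate that is presented
    curl -vkI https://helix.example.com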

 

Process to install BMC Helix IT Operations Management in a GKE cluster

The following image provides an overview of the BMC Helix IT Operations Management installation in a GKE cluster:

[Image: Overview of the BMC Helix IT Operations Management installation process in a GKE cluster]

The following table lists the tasks to install BMC Helix IT Operations Management in a GKE cluster:

 

To install and configure Kubernetes Ingress Nginx Controller

The Ingress Controller is a load balancer for your cluster.

To install and configure Ingress Controller, perform the following tasks:

  1. Install Kubernetes Ingress Nginx Controller.
  2. Create a secret and configure Ingress Controller (a sample command for creating the secret follows this list).
  3. Update the Ingress ConfigMap.
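The details of creating the secret depend on your certificate files. The following is a minimal sketch, assuming that the wildcard certificate and private key are saved locally as tls.crt and tls.key, and that the secret name ingress-wildcard-tls and the ingress-nginx namespace are acceptable; all of these names are illustrative only:

    # Create a TLS secret from the wildcard certificate and private key
    kubectl create secret tls ingress-wildcard-tls \
      --cert=tls.crt \
      --key=tls.key \
      -n ingress-nginx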

 

To install Kubernetes Ingress Nginx Controller

  1. Get the deploy.yaml file for the Kubernetes NGINX Ingress Controller version that matches your Kubernetes version, and use it to install the controller (see the example command after this procedure).
    The certified versions of NGINX Ingress Controller for the Kubernetes and OpenShift orchestration platforms are as follows:

    Nginx Ingress Controller version    Supported Kubernetes version    OpenShift version
    1.7.0 and 1.11.5                    1.25                            4.12
    1.8.1 and 1.11.5                    1.25 and 1.26                   4.13
    1.9.6 and 1.11.5                    1.26 and 1.27                   4.13, 4.14, and 4.15
    1.10.1 and 1.11.5                   1.27 and 1.28                   4.14 and 4.15
    1.11.1 and 1.11.5                   1.28, 1.29, and 1.30            4.15
    1.11.2 and 1.11.5                   1.30                            4.15 and 4.16
    1.11.4 and 1.11.5                   1.31                            4.17

    The Ingress Controller is deployed in the ingress-nginx namespace, and it provisions an external network load balancer in GCP.

  2. To view the network load balancer, in the Google Cloud Console, navigate to the LOAD BALANCERS console.
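The exact command to obtain the deploy.yaml file depends on the controller version that you choose from the compatibility table. The following is a minimal sketch, assuming controller version 1.11.5 and the upstream kubernetes/ingress-nginx cloud provider manifest; adjust the version tag to match your Kubernetes version:

    # Download the manifest for the chosen controller version
    wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.5/deploy/static/provider/cloud/deploy.yaml

    # Install the controller into the ingress-nginx namespace
    kubectl apply -f deploy.yaml

    # Verify that the controller pods are running and that the controller service has an external IP address
    kubectl get pods,svc -n ingress-nginx

To list the provisioned load balancer from the command line, you can also run gcloud compute forwarding-rules list.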

 


 

To update the Ingress ConfigMap

Customize the NGINX configuration by updating the Ingress ConfigMap.

  1. Edit the Ingress ConfigMap by using the following command:

    kubectl edit cm -n ingress-nginx ingress-nginx-controller
  2. Specify the following parameter values as shown in the example (a non-interactive alternative follows this example):

    data:
      enable-underscores-in-headers: "true"
      annotations-risk-level: Critical   # applicable only for Ingress Controller version 1.12.1
      proxy-body-size: 250m
      server-name-hash-bucket-size: "1024"
      ssl-redirect: "false"
      use-forwarded-headers: "true"
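If you prefer a non-interactive update, you can apply the same values with kubectl patch. The following is a minimal sketch, assuming the default ConfigMap name ingress-nginx-controller in the ingress-nginx namespace:

    # Merge the required keys into the Ingress ConfigMap
    kubectl patch configmap ingress-nginx-controller -n ingress-nginx --type merge -p \
      '{"data":{"enable-underscores-in-headers":"true","proxy-body-size":"250m","server-name-hash-bucket-size":"1024","ssl-redirect":"false","use-forwarded-headers":"true"}}'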

To set up the NFS server

  1. Provision a CentOS virtual machine with the required disk space. 
  2. Run the following commands in the given order to set up the NFS:

    # Install and start the NFS server packages
    sudo yum install -y nfs-utils
    sudo systemctl start nfs-server rpcbind
    sudo systemctl enable nfs-server rpcbind

    # Create the shared directory and export it
    sudo mkdir /data1
    sudo chmod 777 /data1/

    # Add the following line to /etc/exports:
    #   /data1 *(rw,sync,no_root_squash,insecure)
    sudo vi /etc/exports
    sudo exportfs -rav
  3. Verify that the mount is accessible by using the following command:

    showmount -e <NFS server IP address>
  4. Open the firewall access to the following ports:

    • tcp:111
    • tcp:2049
    • tcp:20048
    • tcp:36779
    • tcp:39960
    • tcp:46209
    • tcp:48247

    Use the following command as an example:

    gcloud compute firewall-rules create nfs \
      --allow=tcp:111,udp:111,tcp:2049,udp:2049 --target-tags=nfs
  5. Create a pod security policy (PSP) and a Kubernetes NFS provisioner (a sample provisioner installation follows this list).
  6. Configure the NFS folder and set the permissions that are detailed in Example-configuration-files-for-BMC-Helix-Continuous-Optimization.
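The choice of NFS provisioner depends on your environment. As a minimal sketch, one common option is the nfs-subdir-external-provisioner Helm chart, assuming that Helm is installed and that <NFS server IP address> is the server that you set up above; the chart is an example, not a BMC-certified requirement:

    # Add the provisioner chart repository and deploy it, pointing at the NFS export created above
    helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    helm install nfs-subdir-external-provisioner \
      nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
      --set nfs.server=<NFS server IP address> \
      --set nfs.path=/data1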

 

 


 

 
