Moviri – K8s (Kubernetes) Prometheus Extractor


“Moviri Integrator for BMC Helix Continuous Optimization – k8s (Kubernetes) Prometheus” is an additional component of the BMC Helix Continuous Optimization product. It allows extracting data from the Kubernetes cluster management system, a leading solution for managing cloud-native containerized environments. Relevant capacity metrics are loaded into BMC Helix Continuous Optimization, which provides advanced analytics over the extracted data in the form of an interactive dashboard, the Kubernetes View.

The integration supports the extraction of both performance and configuration data across the different components of the Kubernetes system and can be configured via parameters that allow entity filtering and many other settings. Furthermore, the connector is able to replicate relationships and logical dependencies among entities such as clusters, nodes, namespaces, deployments, and pods.

The documentation is targeted at BMC Helix Continuous Optimization administrators, who are in charge of configuring and monitoring the integration between BMC Helix Continuous Optimization and Kubernetes.

Information

If you used the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” before, please expect the following changes from version 20.02.01:

  1. A new entity type "pod workload" is imported to replace pods. If you imported pods before, all previously imported Kubernetes pods are moved to All systems and business drivers → "Unassigned".
  2. A new naming convention is implemented for Kubernetes Controller ReplicaSets: the generated suffix of the name is removed (e.g., replicaset-abcd is named replicaset). All old controllers are replaced by new controllers with the new names, and the old ones are left in All systems and business drivers → "Unassigned".
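The renaming rule described in point 2 can be sketched as follows. This is a hypothetical illustration of the convention (dropping the last dash-separated token of the ReplicaSet name), not the connector's internal implementation:

```shell
# Hypothetical sketch of the ReplicaSet renaming rule: the generated
# suffix (the last dash-separated token) is dropped from the name.
strip_rs_suffix() {
  echo "${1%-*}"   # remove the shortest trailing "-<token>"
}
strip_rs_suffix "replicaset-abcd"       # prints: replicaset
strip_rs_suffix "frontend-6d4cf56db6"   # prints: frontend
```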

Step I. Complete the pre-configuration tasks

Steps

Details
Check the required API version is supported
  • Kubernetes versions 1.5 to 1.32
  • Rancher 1.6 to 2.3 (when managing a Kubernetes cluster version 1.5 to 1.21)
  • OpenShift 4 (latest tested version 4.21)
  • Google Kubernetes Engine (GKE)
  • Amazon Elastic Kubernetes Service (EKS)
  • Azure Kubernetes Service (AKS)
  • Pivotal Kubernetes Service (PKS)
  • Prometheus versions 2 & 3

Check the data source configuration

The integrator requires the Prometheus monitoring component to be correctly monitoring the various entities supported by the integration via the following services:

  • kube-state-metrics
  • node-exporter
  • kubelet
  • JMX exporter (only if importing JVM metrics)

The connector also has the option to access the Kubernetes API. Access to the Kubernetes API is not mandatory to configure the ETL, but it is strongly recommended. Without access to the Kubernetes API, the integrator will not be able to import the following information:

  • Persistent volumes capacity-relevant metrics
  • Kubernetes Labels for ReplicaSets and ReplicationControllers
  • Kubernetes Annotations

When using Rancher, please make sure Rancher Monitoring is not enabled alongside kube-state-metrics. The integrator does not support Rancher Monitoring.

Only one service monitoring Kubernetes components should be active at the same time.

Generate access to Prometheus API

Access to the Prometheus API depends on the Kubernetes distribution and the Prometheus configuration. The following sections describe the standard procedure for the supported platforms.
 

OpenShift

OpenShift Container Platform Monitoring ships with a Prometheus instance for cluster monitoring and a central Alertmanager cluster. You can get the addresses for accessing Prometheus, Alertmanager, and Grafana web UIs by running:

oc -n openshift-monitoring get routes

NAME                      HOST/PORT                                                                                  PATH       SERVICES           PORT   TERMINATION         WILDCARD
alertmanager-main         alertmanager-main-openshift-monitoring.apps.moviri-okd415.moviri-integrations.com          /api       alertmanager-main  web    reencrypt/Redirect  None
prometheus-k8s            prometheus-k8s-openshift-monitoring.apps.moviri-okd415.moviri-integrations.com             /api       prometheus-k8s     web    reencrypt/Redirect  None
prometheus-k8s-federate   prometheus-k8s-federate-openshift-monitoring.apps.moviri-okd415.moviri-integrations.com    /federate  prometheus-k8s     web    reencrypt/Redirect  None
thanos-querier            thanos-querier-openshift-monitoring.apps.moviri-okd415.moviri-integrations.com             /api       thanos-querier     web    reencrypt/Redirect  None

Alternatively, you can find them in the OpenShift console under the "Networking" → "Routes" tab or the "Networking" → "Services" tab.

Make sure to prefix these addresses with https://. You cannot access the web UIs using an unencrypted connection.

Authentication is performed against the OpenShift Container Platform identity and uses the same credentials or means of authentication as is used elsewhere in OpenShift Container Platform. You need to use a role that has read access to all namespaces, such as the cluster-monitoring-view cluster role.

Rancher

When installing Prometheus from the Catalog Apps, the default configuration sets up a Layer 7 ingress using xip.io. From the Load Balancing tab, you can see the endpoint to access Prometheus.

Google Kubernetes Engine

When installing Prometheus on Google Cloud using an ingress, the Prometheus endpoint is exposed as a load balancer or ingress. Open the "Services & Ingress" page to see the endpoint used to access Prometheus.

image-20250321-143202.png

Screenshot showing Ingresses in GKE

Kubectl

The Kubernetes command-line tool kubectl can also be used to find the Prometheus API endpoint. Run the following command, replacing <namespace> with the namespace where the Prometheus pod is running; the external IP column shows the URL of Prometheus:

kubectl get service -n <namespace>
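The relevant column can be picked out of the command's output as sketched below. The service name and addresses are hypothetical sample values, shown only to illustrate which field to read:

```shell
# Hypothetical "kubectl get service -n <namespace>" output; the EXTERNAL-IP
# column of the Prometheus service is the address to use in the API URL.
sample='NAME         TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)
prometheus   LoadBalancer   10.0.12.34   203.0.113.10   9090:30900/TCP'
prom_ip=$(echo "$sample" | awk '$1 == "prometheus" { print $4 }')
echo "$prom_ip"   # prints: 203.0.113.10
```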

Other Distributions

Prometheus does not directly support authentication for connections to the Prometheus expression browser and HTTP API. If you would like to enforce basic authentication for those connections, the Prometheus documentation recommends using Prometheus in conjunction with a reverse proxy and applying authentication at the proxy layer. Please refer to the official Prometheus documentation for configuring an NGINX reverse proxy with basic authentication.
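A minimal sketch of such a proxy configuration is shown below. The server name, certificate paths, and backend port are assumptions for illustration; the htpasswd file is created with the `htpasswd` utility:

```nginx
# Minimal sketch (assumed names and paths): NGINX terminating TLS and
# basic auth in front of a Prometheus listening on localhost:9090.
server {
    listen 443 ssl;
    server_name prometheus.example.com;              # assumption
    ssl_certificate     /etc/nginx/certs/prom.crt;   # assumption
    ssl_certificate_key /etc/nginx/certs/prom.key;   # assumption

    location / {
        auth_basic           "Prometheus";
        auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd
        proxy_pass           http://127.0.0.1:9090;
    }
}
```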

Verify Prometheus API access

To verify, execute the following command from one of the BMC Helix Continuous Optimization servers:

When authentication is not required

curl -k https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info

When basic authentication is required

curl -k https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info -u <username>

When bearer token authentication is required

curl -k 'https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info' -H 'Authorization: Bearer <token>'

When OAuth authentication is required, first obtain a token from the OAuth endpoint:

curl -vk 'https://<oauth_url>:<port>/oauth/authorize?client_id=openshift-challenging-client&response_type=token' -u <oauth_username> -H "X-CSRF-Token: xxx" -H "accept: text/plain"

*** OMITTED RESPONSE ***
< Location: https://<oauth_url>:<port>/oauth/token/implicit#access_token=BM0btgKP00TGPjLBBW-Y_Hb7A2wdSt99LAGtqh7QiJw&expires_in=86400&scope=user%3Afull&token_type=Bearer
*** OMITTED RESPONSE ***

Look at the "Location" response header for the generated access token. In the response example above, the token starts with BM0btg...

Then, with the new access token, follow the same steps as for bearer token authentication, replacing the token with the new access token:

curl -k 'https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info' -H 'Authorization: Bearer <OAuth Access Token>'
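The token extraction from the "Location" header can be scripted as below. The header value is a hypothetical example in the documented format (oauth.example.com is an assumed host):

```shell
# Pulling the access token out of the OAuth "Location" response header.
# The header value below is a hypothetical example in the documented format.
location='Location: https://oauth.example.com:8443/oauth/token/implicit#access_token=BM0btgKP00TGPjLBBW-Y_Hb7A2wdSt99LAGtqh7QiJw&expires_in=86400&scope=user%3Afull&token_type=Bearer'
token=$(echo "$location" | sed -n 's/.*access_token=\([^&]*\).*/\1/p')
echo "$token"   # the value to pass in the "Authorization: Bearer" header
```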

Generate Access to Kubernetes API (Optional)

Access to the Kubernetes API is optional, but highly recommended. Follow these steps for Kubernetes API access on OpenShift and GKE:

To access the Kubernetes API, the Kubernetes connector uses a Service Account. The authentication will be performed using the service account token. Additionally, in order to prevent accidental changes, the integrator service account will be granted read-only privileges and will be allowed to query a set of specific API endpoints. Here follows an example procedure to create the service account in a Kubernetes cluster using the kubectl CLI.

Create a Service Account

For Kubernetes versions 1.24 and later, use this YAML to create a service account using RBAC.

###############################################################################
# 1) Service Account
###############################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bhco-viewer
  namespace: default
---
###############################################################################
# 2) Secret for the Service Account
###############################################################################
apiVersion: v1
kind: Secret
metadata:
  name: bhco-viewer-secret
  namespace: default
  annotations:
    kubernetes.io/service-account.name: bhco-viewer
type: kubernetes.io/service-account-token
---
###############################################################################
# 3) Cluster role
###############################################################################
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bhco-viewer-role
rules:
  - apiGroups:
    - ""
    resources:
    - limitranges
    - namespaces
    - nodes
    - persistentvolumeclaims
    - persistentvolumes
    - pods
    - replicationcontrollers
    - resourcequotas
    verbs:
    - get
    - list
  - apiGroups:
    - apps
    resources:
    - deployments
    - replicasets
    - statefulsets
    verbs:
    - get
    - list
  - apiGroups:
    - autoscaling
    resources:
    - horizontalpodautoscalers
    verbs:
    - get
    - list
  - apiGroups:
    - monitoring.coreos.com
    resources:
    - prometheuses
    - prometheuses/api
    verbs:
    - get
    - list
---
###############################################################################
# 4) Cluster role binding
###############################################################################
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bhco-viewer-binding
subjects:
  - kind: ServiceAccount
    name: bhco-viewer
    namespace: default
roleRef:
  kind: ClusterRole
  name: bhco-viewer-role
  apiGroup: rbac.authorization.k8s.io

Apply the manifest, then retrieve the service account token for authentication:

kubectl apply -f ServiceAccount.yml

kubectl describe secret bhco-viewer-secret
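The token is stored base64-encoded in the Secret object, so an alternative to `kubectl describe` is reading and decoding the field directly. The `kubectl` command below requires a live cluster; the decode step it relies on is shown with a sample value:

```shell
# Alternative (requires a live cluster; standard kubectl jsonpath syntax):
#
#   kubectl get secret bhco-viewer-secret -n default \
#     -o jsonpath='{.data.token}' | base64 -d
#
# The base64 decode step it performs, shown with a sample value:
encoded=$(printf 'example-token' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # prints: example-token
```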

 

Verify Kubernetes API access

To verify, execute the curl Linux command from one of the BMC Helix Continuous Optimization servers:

curl -s https://<KUBERNETES_API_HOSTNAME>:<KUBERNETES_API_PORT>/api/v1/nodes -k -H "Authorization: Bearer <KUBERNETES_CONNECTOR_TOKEN>"

You should get a JSON document describing the nodes comprising the Kubernetes cluster.

{ "kind": "NodeList", "apiVersion": "v1", "metadata": { "selfLink": "/api/v1/nodes", "resourceVersion": "37317619" }, "items": [ {"metadata": {
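A quick way to confirm the response is a valid NodeList is to extract the node names with jq. The document below is a trimmed sample with hypothetical node names; a real response comes from the curl command above:

```shell
# Extracting node names from a NodeList response with jq (trimmed sample
# document with hypothetical node names).
nodelist='{"kind":"NodeList","apiVersion":"v1","items":[{"metadata":{"name":"node-a"}},{"metadata":{"name":"node-b"}}]}'
names=$(echo "$nodelist" | jq -r '.items[].metadata.name')
echo "$names"   # prints one node name per line
```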

OAuth Authentication & Authorization (OpenShift Only)

When OAuth authentication is required, first obtain a token from the OAuth endpoint:

curl -vk 'https://<oauth_url>:<port>/oauth/authorize?client_id=openshift-challenging-client&response_type=token' -u <oauth_username> -H "X-CSRF-Token: xxx" -H "accept: text/plain"

*** OMITTED RESPONSE ***
< Location: https://<oauth_url>:<port>/oauth/token/implicit#access_token=BM0btgKP00TGPjLBBW-Y_Hb7A2wdSt99LAGtqh7QiJw&expires_in=86400&scope=user%3Afull&token_type=Bearer
*** OMITTED RESPONSE ***

Look at the "Location" response header for the generated access token. In the response example above, the token starts with BM0btg...

Then, with the new access token, follow the same steps as above, replacing the token with the new access token:

curl -s https://<KUBERNETES_API_HOSTNAME>:<KUBERNETES_API_PORT>/api/v1/nodes -k -H "Authorization: Bearer <OAUTH_ACCESS_TOKEN>"

 

Step II. Configure the ETL

A. Configuring the basic properties

Some of the basic properties display default values. You can modify these values if required.

To configure the basic properties:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties. You must configure properties in the following tabs: Run configuration, Entity catalog, and Connection.
  3. On the Run Configuration tab, select Moviri - k8s Prometheus Extractor from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.

image-20250321-152027.png

  4. Click the Entity catalog tab, and select one of the following options:

    • Shared Entity Catalog: From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
    • Private Entity Catalog: Select if this is the only ETL that extracts data from the k8s Prometheus resources.
  5. Click the Connection tab, and configure the following properties:
 
Property Name | Value Type | Required? | Default | Description
Prometheus – API URL | String | Yes | Blank | Prometheus API URL (http/https://hostname:port). The port can be omitted.
Prometheus – API Version | String | Yes | v1 | Prometheus API version; this should be the same as the Kubernetes API version, if any.
Prometheus – API Authentication Method | String | Yes | No Authentication | Prometheus API authentication method. The following methods are supported: Authentication Token (Bearer); Basic Authentication (username/password); None (no authentication); OpenShift OAuth (username/password); Generic OAuth (username/password); Key Vault (Microsoft Azure client ID/secret); Client Certificate (PKCS12 keystore/truststore).
Prometheus – Username | String | No | Blank | Prometheus API username if the authentication method is set to Basic Authentication.
Prometheus – Password | Password | No | Blank | Prometheus API password if the authentication method is set to Basic Authentication.
Prometheus – API Authentication Token | Password | No | Blank | Prometheus API authentication token (bearer token) if the authentication method is set to Authentication Token.
OAuth - URL | String | No | Blank | URL for the OpenShift OAuth server (http/https://hostname:port). Only available if the authentication method is set to OpenShift OAuth. The port can be omitted.
OAuth - Username | String | No | Blank | OpenShift OAuth username if the authentication method is set to OpenShift OAuth.
OAuth - Password | String | No | Blank | OpenShift OAuth password if the authentication method is set to OpenShift OAuth.
Generic OAuth - URL | String | No | Blank | URL for the OAuth server (http/https://hostname:port). Only available if the authentication method is set to Generic OAuth. The port can be omitted.
Generic OAuth - Client ID | String | No | Blank | Generic OAuth client ID if the authentication method is set to Generic OAuth.
Generic OAuth - Client Password | Password | No | Blank | Generic OAuth client password if the authentication method is set to Generic OAuth.
(Optional) Generic OAuth - Resource Server | String | No | Blank | Generic OAuth resource server if the authentication method is set to Generic OAuth.
MS Azure Key Vault - Host | String | No | Blank | The Azure Key Vault host name when the Key Vault authentication method is selected.
MS Azure Key Vault - Secret Name | String | No | Blank | The Azure Key Vault secret name when the Key Vault authentication method is selected.
Client Certificate - Keystore path | String | No | Blank | The Java keystore path on the Scheduler running the ETL. Only available when the authentication method is set to Client Certificate.
Client Certificate - Keystore password | Password | No | changeit | The password for the Java keystore, needed when Client Certificate is selected as the authentication method.
Client Certificate - Truststore path | String | No | Blank | The Java truststore path on the Scheduler running the ETL. Only available when the authentication method is set to Client Certificate.
Client Certificate - Truststore password | Password | No | changeit | The password for the Java truststore, needed when Client Certificate is selected as the authentication method.
Prometheus – Use Proxy Server | Boolean | No | Blank | Whether a proxy server is used; available only when the authentication method is Basic Authentication or None. The proxy server supports HTTP, and only Basic Authentication or no authentication.
Prometheus - Proxy Server Host | String | No | Blank | Proxy server host name.
Prometheus - Proxy Server Port | Number | No | Blank | Proxy server port. Default 8080.
Prometheus - Proxy Username | String | No | Blank | Proxy server username.
Prometheus - Proxy Password | String | No | Blank | Proxy server password.
Use Kubernetes API | Boolean | Yes | Blank | Whether to use the Kubernetes API.
Kubernetes - API Host | String | Yes | Blank | Kubernetes API server host name. For OpenShift, use the OpenShift console FQDN (e.g., console.ose.bmc.com).
Kubernetes - API Port | Number | Yes | Blank | Kubernetes API server port. For OpenShift, use the same port as the console (typically 8443).
Kubernetes - API Protocol | String | Yes | HTTPS | Kubernetes API protocol; "HTTPS" in most cases.
Kubernetes - API Authentication Token | Password | No | Blank | Token of the integrator service account (see the data source configuration section). If the authentication method is set to OpenShift OAuth, you do not need to put a token in this field. Please make sure the user account has the permissions specified in the data source configuration section for Kubernetes API access.

  6. Click the Kubernetes Extraction tab, and configure the following properties:
 
Property Name | Value Type | Required? | Default | Description
Data Resolution | String | Yes | 1 Hour | Data resolution for data pulled from Prometheus into BMC Helix Continuous Optimization.
Cluster Name | String | No | Blank | If the Kubernetes API is not in use, the cluster name must be specified.
Default Last Counter | String | Yes |  | Default earliest time the connector should pull data from, in UTC. Format: YYYY-MM-DDTHH24:MI:SS.SSSZ, for example 2019-01-01T19:00:00.000Z.
Lag Hour to the Current Time | String | No |  | Lag hour to the current time.
Extract PODWorkload | Boolean | Yes | No | Whether to import pod workloads.
Filter PodWorkload using namespace | Boolean | No | No | If extracting pod workloads, choose to filter them using namespaces. This uses a semicolon-separated list, and only exact matching.
Filter PodWorkloads using namespace allowlist/denylist | String | No | denylist | If filtering pod workloads, choose to filter using a denylist or an allowlist.
Import only PODWorkloads not in given namespaces (semi-colon separated list) | String | No |  | If filtering using a denylist, data is not imported for pod workloads in the given namespaces. If filtering using an allowlist, data is imported only for pod workloads in the given namespaces. All other data for those namespaces is still imported. When empty, pod workloads are not filtered.
Extract Controller Metrics | Boolean | Yes | Yes | Whether to import controller (Deployment, DaemonSet, ReplicaSet, ...) metrics. If neither pod workloads nor controllers are selected, the ETL does not create any controller metrics. If controllers are not selected but pod workloads are, the ETL creates empty controller entities.
Maximum Hours to Extract | String | No | 120 | Maximum number of hours the connector can pull data for. If left empty, the default of 5 days from the default last counter is used. Please consider that if no data is found on Prometheus, the integration updates the last counter by the maximum extraction period, starting from the last counter.
Import by image metric on all entities? | String | No | No | Whether to import BYIMAGE metrics on clusters, namespaces, controllers, and nodes.
Import podworkload metric highmark counters? | String | No | No | Whether to import highmark metrics for CPU_USED_NUM and MEM_USED on pod workloads for by-container metrics.
Import annotations | Boolean | No | No | Whether to import selected Kubernetes annotations as labels. Requires the Kubernetes API.
Annotations to import (semi-colon separated list) | String | No |  | Allowlist of Kubernetes annotations to import as labels.
Import only the following tags (semi-colon separated list) | String | No |  | Import only the listed tag types (Kubernetes label keys). Specify the keys of the labels, either in the original format as they appear in the Kubernetes API, or using underscore (_) as the delimiter. For example, node_role_kubernetes_io_compute and node-role.kubernetes.io/compute are equivalent and are imported as node_role_kubernetes_io_compute. Multiple tag types can be separated by semicolons (;).
Enable Recovery Mode | Boolean | No | No | Only import data for entities between the starting and ending timestamps. Object relationship data for this time period is not collected.
Start timestamp of recovery mode | String | Yes |  | Starting timestamp of extraction when recovery mode is enabled.
End timestamp of recovery mode | String | Yes |  | Ending timestamp of extraction when recovery mode is enabled.
Partition Size (Default: 400, Max: 999) | String | No | 400 | Size of the partitions used for aggregating controller data.
Create UID configuration metrics | Boolean | No | No | Whether configuration metrics containing the UID of entities are created, if the Kubernetes API is also enabled.
Extract additional max statistics for CPU/MEMORY QUOTA metrics for LIMITS/REQUESTS | Boolean | No | No | Whether to import data for CPU_QUOTA_LIMIT, CPU_QUOTA_REQUEST, MEM_QUOTA_LIMIT, and MEM_QUOTA_REQUEST using max aggregation as a second statistic.
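The label-key equivalence described for the "Import only the following tags" property can be illustrated as follows. This is a sketch of the normalization rule (replacing non-alphanumeric characters with underscores), not the connector's internal implementation:

```shell
# Illustration of the label-key equivalence: replacing every non-alphanumeric
# character with "_" maps both documented spellings to the same tag key.
normalize_label_key() {
  echo "$1" | sed 's/[^A-Za-z0-9]/_/g'
}
normalize_label_key "node-role.kubernetes.io/compute"   # prints: node_role_kubernetes_io_compute
```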

The following image shows a Run Configuration example for the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus":

 

image-20250321-165337.png

  7. Click the Import Filters tab, and configure the following property, if needed:

Property Name | Value Type | Required? | Default | Description
Path to filtering configuration file | String | No |  | Specify the path on the Remote ETL Engine where the file containing the filtering configuration is located. An example of how the filtering file should be formatted can be found here.
  8. (Optional) Override the default values of the properties:

Run Configuration

Property

Description

Module selection

Select one of the following options:

  • Based on datasource: This is the default selection.
  • Based on Open ETL template: Select only if you want to collect data that is not supported by BMC Helix Continuous Optimization.

Module description

A short description of the ETL module.

Execute in simulation mode

By default, the ETL execution in simulation mode is selected to validate connectivity with the data source, and to ensure that the ETL does not have any configuration issues. In the simulation mode, the ETL does not load data into the database. This option is useful when you want to test a new ETL task. To run the ETL in the production mode, select No. BMC recommends that you run the ETL in the simulation mode after ETL configuration and then run it in the production mode.
 

Object Relationship

Property | Description
Associate new entities to

Specify the domain to which you want to add the entities created by the ETL.

Select one of the following options:

  • Existing domain: This option is selected by default. Select an existing domain from the Domain list. If the selected domain is already used by other hierarchy rules, select one of the following Domain conflict options:

    • Enrich domain tree: Select to create a new independent hierarchy rule for adding a new set of entities, relations, or both that are not defined by other ETLs.
    • ETL Migration: Select if the new ETL uses the same set of entities, relations, or both that are already defined by other ETLs.
  • New domain: Select a parent domain, and specify a name for your new domain.

By default, a new domain with the same ETL name is created for each ETL. When the ETL is created, a new hierarchy rule with the same name of the ETL task is automatically created in the active state. If you specify a different domain for the ETL, the hierarchy rule is updated automatically.

 

ETL Task Properties

Property | Description

Task group

Select a task group to classify the ETL.

Running on scheduler

Select one of the following schedulers for running the ETL:

  • Primary Scheduler: Runs on the Application Server.
  • Generic Scheduler: Runs on a separate computer.
  • Remote: Runs on remote computers.

Maximum execution time before warning

Indicates the number of hours, minutes, or days for which the ETL must run before generating warnings or alerts, if any.

Frequency

Select one of the following frequencies to run the ETL:

  • Predefined: This is the default selection. Select a daily, weekly, or monthly frequency, and then select a time to start the ETL run accordingly.
  • Custom: Specify a custom frequency, select an appropriate unit of time, and then specify a day and a time to start the ETL run.

(Optional) B. Configuring the advanced properties

You can configure the advanced properties to change the way the ETL works or to collect additional metrics.

To configure the advanced properties:

  1. On the Add ETL page, click Advanced.
  2. Configure the properties.
  3. Click Save.

The ETL tasks page shows the details of the newly configured Prometheus ETL:

image-20250321-210759.png

Step III. Run the ETL

After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:

A. Simulation mode: Only validates connection to the data source, does not collect data. Use this mode when you want to run the ETL for the first time or after you make any changes to the ETL configuration.

B. Production mode: Collects data from the data source.

A. Running the ETL in simulation mode

To run the ETL in the simulation mode:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click the ETL. The ETL details are displayed.

image-20250326-205019.png

  3. In the Run configurations table, click the pencil icon to modify the ETL configuration settings.
  4. On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
  5. Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
  6. On the ETL tasks page, check the ETL run status in the Last exit column.
    OK indicates that the ETL ran without any error. You are ready to run the ETL in the production mode.
  7. If the ETL run status is Warning, Error, or Failed:

    1. On the ETL tasks page, click the pencil icon in the last column of the ETL name row.
    2. Check the log and reconfigure the ETL if required.
    3. Run the ETL again.
    4. Repeat these steps until the ETL run status changes to OK.

B. Running the ETL in the production mode

You can run the ETL manually when required or schedule it to run at a specified time.

Running the ETL manually

  1. On the ETL tasks page, click the ETL. The ETL details are displayed.
  2. In the Run configurations table, click the pencil icon to modify the ETL configuration settings. The Edit run configuration page is displayed.
  3. On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
  4. To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
    When the ETL is run, it collects data from the source and transfers it to the database.

Scheduling the ETL run

By default, the ETL is scheduled to run daily. You can customize this schedule by changing the frequency and period of running the ETL.

To configure the ETL run schedule:

  1. On the ETL tasks page, click the ETL, and click Edit Task. The ETL details are displayed.

image-20250327-201501.png

  2. On the Edit task page, do the following, and click Save:

    1. Specify a unique name and description for the ETL task.
    2. In the Maximum execution time before warning field, specify the duration for which the ETL must run before generating warnings or alerts, if any.
    3. Select a predefined or custom frequency for starting the ETL run. The default selection is Predefined.
    4. Select the task group and the scheduler to which you want to assign the ETL task.
  3. Click Schedule. A message confirming the scheduling job submission is displayed.
    When the ETL runs as scheduled, it collects data from the source and transfers it to the database.

Step IV. Verify data collection

Verify that the ETL ran successfully and check whether the k8s Prometheus data is refreshed in the Workspace.

To verify whether the ETL ran successfully:

  1. In the console, click Administration > ETL and System Tasks > ETL tasks.
  2. In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.

To verify that the k8s Prometheus data is refreshed:

  1. In the console, click Workspace.
  2. Expand (Domain name) > Systems > k8s Prometheus > Instances.
  3. In the left pane, verify that the hierarchy displays the new and updated Prometheus instances.
  4. Click a k8s Prometheus entity, and click the Metrics tab in the right pane.
  5. Check if the Last Activity column in the Configuration metrics and Performance metrics tables displays the current date.

K8s Prometheus Workspace Entities

TSCO Entity | Prometheus Entity
Kubernetes Cluster | Cluster
Kubernetes Namespace | Namespace
Kubernetes Node | Node
Kubernetes Pod Workload | An aggregated group of pods running on the same controller; stand-alone static pods
Kubernetes Controller | DaemonSet, ReplicaSet, StatefulSet, ReplicationController
Kubernetes Persistent Volume | Persistent Volume, Persistent Volume Claim

Entity Relationship

TSCO Entity | Relationship Type | Description
Kubernetes Cluster | ROOTAPP | ROOTAPP
Kubernetes Namespace | KC_CONTAINS_KNS | Kubernetes Cluster contains Kubernetes Namespace
Kubernetes Node | KC_CONTAINS_KN | Kubernetes Cluster contains Kubernetes Node
Kubernetes Pod Workload | KNS_CONTAINS_PODWK | Kubernetes Namespace contains Pod Workload
Kubernetes Controller | KNS_CONTAINS_KD | Kubernetes Namespace contains Kubernetes Deployment
Kubernetes Persistent Volume | KC_CONTAINS_KPV | Kubernetes Cluster contains Kubernetes Persistent Volume

Hierarchy

The connector is able to replicate relationships and logical dependencies among these entities as they are found configured within the Kubernetes cluster.

In particular, the following structure is applied:

  • a Kubernetes Cluster is attached to the root of the hierarchy
  • each Kubernetes Cluster contains its own Nodes, Namespaces, and Persistent Volumes
  • each Kubernetes Namespace contains its own Controllers and stand-alone Pod Workloads
  • each Kubernetes Namespace contains Persistent Volumes via the Persistent Volume Claim relationship
  • each Kubernetes Controller contains its Pod Workloads; a Pod Workload is an aggregated entity that groups the pods running on the same controller

image-20250428-131709.png

Hierarchy showing Cluster > Namespace > Deployments > Pod Workload

image-20250428-131803.png

Hierarchy Showing Cluster > PVs & Nodes

Lookup Field Considerations

Entity Type | Strong Lookup Field | Others
Kubernetes - Cluster | KUBE_CLUSTER&&KUBE_TYPE | 
Kubernetes - Namespace | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_NS_NAME | 
Kubernetes - Node | KUBE_CLUSTER&&KUBE_TYPE&&HOSTNAME | NAME_COMPATIBILITY_
Kubernetes - Controller | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_NS_NAME&&KUBE_DP_NAME | 
Kubernetes - Pod Workload | KUBE_CLUSTER&&KUBE_NS_NAME&&KUBE_DP_NAME&&KUBE_DP_TYPE&&KUBE_WL_NAME&&KUBE_TYPE | 
Kubernetes - Persistent Volume | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_PV_NAME | 

Tag Mapping (Optional)

Here is an example of what a tag looks like:

image-20250428-133451.png

k8s Heapster to k8s Prometheus Migration

The “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” supports a seamless transition from entities and metrics imported by the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Heapster”. Please follow these steps to migrate between the two integrators:

  1. Stop the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Heapster” ETL task.
  2. Install and configure the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus”, ensuring that the lookup is shared with the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Heapster” ETL task.
  3. Start the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” ETL task.

Pod Optimization - Pod Workloads replace Pods

The “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” introduces a new entity, "pod workload", as of v20.02.01. A pod workload is an aggregated entity that groups the pods running on the same controller. The pod workload is a direct child of the controller that the pods run on, and uses the same name as the parent controller. Pods themselves are no longer imported.

Common Issues

Error Messages / Behaviors | Cause | Solution

Query errors: HTTP 422 Unprocessable Entity. These errors appear occasionally, and their number can vary a lot between runs. | This is usually caused by Prometheus rebuilding or restarting. For a couple of days after a Prometheus rebuild or reload, you will see this error. | The errors usually go away on their own as Prometheus becomes more stable.

Prometheus is running fine but no data is pulled | This is usually caused by the last counter being set too far from today's date. Prometheus has a data retention period, with a default value of 15 days, that can be configured. If the ETL is set to extract data past the data retention period, there will not be any data. Prometheus's status page shows the data retention value in the "storage retention" field. | Modify the default last counter to a more recent date.

504 Gateway Timeouts | These 504 timeout query errors (the server did not respond in time) are related to the route timeout used on OpenShift. It can be configured on a route-by-route basis; for example, the Prometheus route timeout can be increased to the 2-minute timeout that is also configured on the Prometheus backend. See https://docs.openshift.com/container-platform/4.6/networking/routes/route-configuration.html to understand the configured timeout and how it can be increased. | Increase the timeout period on the OpenShift side.
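For the retention issue above, the configured retention can also be read from the Prometheus HTTP API. The response below is a trimmed, hypothetical sample; against a live server, run `curl -k 'https://<prometheus_url>:<port>/api/v1/status/flags'`:

```shell
# Reading the configured retention from a /api/v1/status/flags response
# (trimmed sample; a real response comes from the curl command above).
flags='{"status":"success","data":{"storage.tsdb.retention.time":"15d"}}'
retention=$(echo "$flags" | jq -r '.data["storage.tsdb.retention.time"]')
echo "$retention"   # prints: 15d
```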

Data Verification

 

The following sections provide some indications on how to verify on Prometheus whether all the prerequisites are in place before starting to collect data.

Verify Prometheus Build information

Verify the Prometheus "Last successful configuration reload" (from the Prometheus UI, check "Status > Runtime & Build Information").
If the "Last successful configuration reload" is less than 3 days ago, ask the customer to evaluate the status of the integration over the next 2-3 days.

Verify the Status of Prometheus Target Services

Verify the status of the Prometheus targets (from the Prometheus UI, check "Status > Targets"):

  • Check the status of "node-exporter" (there should be 1 instance running for each node in the cluster)
  • Check the status of "kube-state-metrics" (there should be at least 1 instance running)
  • Check the status of "kubelet" (there should be at least 1 instance running for each node in the cluster)

Verify data availability in Prometheus Tables

Verify whether the following Prometheus tables contain data (from the Prometheus UI):

  • "kube_pod_container_info" when missing Pod Workload, Controller, or Namespace metrics (but also Cluster and Node metrics for Requests and Limits)
  • "kube_node_info" when missing Node and Cluster metrics.
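An emptiness check for such a table can be scripted against the query API. The response below is a trimmed, hypothetical sample; a live check would use `curl -k 'https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info'`:

```shell
# Counting the result vector of an instant query (trimmed sample response
# with a hypothetical node label; a real response comes from the curl above).
resp='{"status":"success","data":{"resultType":"vector","result":[{"metric":{"node":"node-a"},"value":[1700000000,"1"]}]}}'
count=$(echo "$resp" | jq '.data.result | length')
echo "$count"   # 0 means the table is empty
```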

 

 


BMC Helix Continuous Optimization 26.1