Moviri Integrator for TrueSight Capacity Optimization – K8s (Kubernetes) Prometheus

“Moviri Integrator for TrueSight Capacity Optimization – k8s (Kubernetes) Prometheus” is an additional component of the BMC TrueSight Capacity Optimization product. It allows extracting data from the Kubernetes cluster management system, a leading solution for managing cloud-native containerized environments. Relevant capacity metrics are loaded into BMC TrueSight Capacity Optimization, which provides advanced analytics over the extracted data in the form of an interactive dashboard, the Kubernetes View.

The integration supports the extraction of both performance and configuration data across different components of the Kubernetes system and can be configured via parameters that allow entity filtering and many other settings. Furthermore, the connector is able to replicate relationships and logical dependencies among entities such as clusters, nodes, namespaces, deployments, and pods.

The documentation is targeted at BMC TrueSight Capacity Optimization administrators, in charge of configuring and monitoring the integration between BMC TrueSight Capacity Optimization and Kubernetes.


The latest version of the integrator k8s (Kubernetes) Prometheus is available on EPD. Click the Moviri Integrator for TrueSight Capacity Optimization link. In the Patches tab, select the latest version of TrueSight Capacity Optimization. This version of the connector is compatible with BMC TrueSight Capacity Optimization 11.5 and onward.


If you used the “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus” before, please expect the following changes starting from version 20.02.01:

  1. A new entity type "pod workload" is imported to replace pods. If you imported pods before, all previously imported Kubernetes pods will no longer be updated and will be left under All systems and business drivers → "Unassigned".
  2. A new naming convention is implemented for Kubernetes Controller ReplicaSets (the suffix of the name is removed; for example, replicaset-abcd is now named replicaset). All the old controllers will be replaced by new controllers with the new names, and the old ones will be left under All systems and business drivers → "Unassigned".



Step I. Complete the preconfiguration tasks

Step II. Configure the ETL

Step III. Run the ETL

Step IV. Verify the data collection

k8s Heapster to k8s Prometheus Migration

Pod Optimization - Pod workloads replace pods

Common issues


Step I. Complete the preconfiguration tasks

Steps | Details
Check the required API version is supported
  • Kubernetes versions 1.5 to 1.21
    • Kubernetes versions 1.16+ use TLS 1.3 by default and there is a known issue with the Java implementation. Please contact BMC support for help setting up connections to these versions.
  • Rancher 1.6 to 2.3 (when managing a Kubernetes cluster version 1.5 to 1.21)
  • Openshift 3.11 to 4.8, 4.10, 4.12
  • Google Kubernetes Engine (GKE)
  • Amazon Kubernetes Service (EKS)
  • Azure Kubernetes Service (AKS)
  • Pivotal Kubernetes Service (PKS)
  • Prometheus 2.7 and onward
Check the data source configuration

The integrator requires the Prometheus monitoring component to be correctly monitoring the various entities supported by the integration via the following services:

  • kube-state-metrics
  • node-exporter
  • kubelet
  • JMX exporter (if importing JVM metrics)

The connector also has the option to access the Kubernetes API. Access to the Kubernetes API is not mandatory to configure the ETL, but it is strongly suggested. Without access to the Kubernetes API, the integrator will not be able to import the following information:

  • Persistent volumes capacity-relevant metrics

  • Kubernetes Labels for ReplicaSet and ReplicationController entities


When using Rancher, please make sure Rancher Monitoring is not enabled alongside kube-state-metrics. The Integrator does not support Rancher Monitoring.

Only one service monitoring Kubernetes components should be active at the same time.
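
Once you have access to the Prometheus API (see the next step), a quick way to confirm that kube-state-metrics, node-exporter, and the kubelet are actually being scraped is to list the active scrape targets and query a kube-state-metrics series (a minimal sketch; add the authentication header required by your setup):

curl -ks 'https://<prometheus_url>:<port>/api/v1/targets'
curl -ks 'https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info'

The first call should list the exporters among the active targets; the second returns data only if kube-state-metrics is being scraped.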

Generate access to Prometheus API

The access to Prometheus API depends on the Kubernetes distribution and the Prometheus configuration. The following sections describe the standard procedure for the supported platforms.

OpenShift

OpenShift Container Platform Monitoring ships with a Prometheus instance for cluster monitoring and a central Alertmanager cluster. You can get the addresses for accessing Prometheus, Alertmanager, and Grafana web UIs by running:

$ oc -n openshift-monitoring get routes


NAME                HOST/PORT                                          ...
alertmanager-main   alertmanager-main-openshift-monitoring.apps.url.openshift.com ...

grafana             grafana-openshift-monitoring.apps.url.openshift.com           ...

prometheus-k8s      prometheus-k8s-openshift-monitoring.apps.url.openshift.com    ...

Alternatively, you can find the addresses in the OpenShift cluster console under the "Networking" → "Routes" tab or the "Networking" → "Services" tab.

Make sure to prepend https:// to these addresses. You cannot access the web UIs using an unencrypted connection.

Authentication is performed against the OpenShift Container Platform identity and uses the same credentials or means of authentication as is used elsewhere in OpenShift Container Platform. You need to use a role that has read access to all namespaces, such as the cluster-monitoring-view cluster role. 
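
For a quick authenticated test you can reuse the token of your current oc session, provided the logged-in user has a role such as cluster-monitoring-view (an example sketch using the route name from the output above; oc whoami -t prints the current session token):

$ curl -k 'https://prometheus-k8s-openshift-monitoring.apps.url.openshift.com/api/v1/query?query=kube_node_info' -H "Authorization: Bearer $(oc whoami -t)"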

Rancher

When installing Prometheus from the Catalog Apps, the default configuration sets up a Layer 7 ingress using xip.io. From the Load Balancing tab, you can see the endpoint to access Prometheus.

Google Kubernetes Engine

When installing Prometheus on Google Kubernetes Engine and exposing it through a load balancer or ingress, the Prometheus endpoint is the address of that load balancer or ingress. In the Google Cloud console, open "Services & Ingress" to see the endpoint used to access Prometheus.


Kubectl

The Kubernetes command-line tool kubectl can also be used to find the Prometheus endpoint. Run the following command, replacing <namespace> with the namespace where the Prometheus pod is running; the EXTERNAL-IP column shows the URL of the Prometheus service:

kubectl get service -n <namespace>
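
If the Prometheus service is not exposed outside the cluster, a temporary port-forward can also be used for testing (a sketch; the service name prometheus-server is an assumption, use the name returned by the command above):

kubectl port-forward -n <namespace> svc/prometheus-server 9090:9090
curl -ks 'http://localhost:9090/api/v1/query?query=kube_node_info'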


Other Distributions

Prometheus does not directly support authentication for connections to the Prometheus expression browser and HTTP API. If you would like to enforce basic authentication for those connections, Prometheus documentation recommends using Prometheus in conjunction with a reverse proxy and applying authentication at the proxy layer. Please refer to the official Prometheus documentation for configuring a NGINX reverse proxy with basic authentication.
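
As an illustration only (the file path and user name below are assumptions, not part of this integration), the password file used by such a proxy can be generated with the htpasswd utility, and the NGINX server block then protects the Prometheus location with the auth_basic and auth_basic_user_file directives:

htpasswd -c /etc/nginx/.htpasswd prometheus_user

Refer to the official Prometheus and NGINX documentation for the complete reverse proxy configuration.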

Verify Prometheus API access

To verify, execute the following command from one of the TrueSight Capacity Optimization servers:

When authentication is not required

curl -k https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info

When basic authentication is required

curl -k https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info -u <username>

When bearer token authentication is required

curl -k 'https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info' -H 'Authorization: Bearer <token>'

When OAuth authentication is required, first obtain a token from the OAuth server

curl -vk 'https://<oauth_url>:<port>/oauth/authorize?client_id=openshift-challenging-client&response_type=token' -u <oauth_username> -H "X-CSRF-Token:xxx" -H "accept:text/plain"
*** OMITTED RESPONSE ***
< Location: https://<oauth_url>:<port>/oauth/token/implicit#access_token=BM0btgKP00TGPjLBBW-Y_Hb7A2wdSt99LAGtqh7QiJw&expires_in=86400&scope=user%3Afull&token_type=Bearer
*** OMITTED RESPONSE ***

We look at the Response Header "Location" for the access token generated. In the response example above, the token starts with BM0btg...

Then, with the new access token, you can follow the same steps as for the bearer token, replacing the token with the new access token.

curl -k 'https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info' -H 'Authorization: Bearer <OAuth Access Token>'
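
To extract the access token from the Location header without copying it by hand, a small shell sketch (using the same authorize call as above) is:

curl -sk -D - -o /dev/null 'https://<oauth_url>:<port>/oauth/authorize?client_id=openshift-challenging-client&response_type=token' -u <oauth_username> -H "X-CSRF-Token:xxx" | grep -i '^location' | sed -n 's/.*access_token=\([^&]*\).*/\1/p'
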
Generate Access to Kubernetes API (Optional)

Access to the Kubernetes API is optional, but highly recommended. Follow the steps below to set up Kubernetes API access (the examples cover OpenShift and GKE):

To access the Kubernetes API, the Kubernetes connector uses a Service Account. The authentication will be performed using the service account token. Additionally, in order to prevent accidental changes, the integrator service account will be granted read-only privileges and will be allowed to query a set of specific API endpoints. Here follows an example procedure to create the service account in a Kubernetes cluster using the kubectl CLI.

Create a Service Account

First of all, create the service account to be used by the Kubernetes connector:

$ kubectl create serviceaccount tsco

Then, describe the service account to discover which secret is associated with it:

$ kubectl describe serviceaccount tsco


Name:                tsco
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   tsco-token-6x9vs
Tokens:              tsco-token-6x9vs


Now, describe the secret to get the corresponding token:

kubectl describe secret tsco-token-6x9vs


Name:         tsco-token-6x9vs
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=tsco
              kubernetes.io/service-account.uid=07bca5e7-7c3e-11e7-87bc-42010a8e0002

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 7 bytes

token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InRzY28tdG9rZW4tNng5dnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoidHNjbyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA3YmNhNWU3LTdjM2UtMTFlNy04N2JjLTQyMDEwYThlMDAwMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnRzY28ifQ.tA6c9AsVJ0QKD0s-g9JoBdWDfhBvClJDiZGqDW6doS0rNZ5-dwXCJTss97PXfCxd5G8Q_nxg-elB8rV805K-j8ogf5Ykr-JLsAbl9OORRzcUCShYVF1r7O_-ikGg7abtIPh_mE5eAgHkJ1P6ODvaZG_0l1fak4BxZMTVfzzelvHpVlLpJZObd7eZOEtEEEkcAhZ2ajLQoxLucReG2A25_SrVJ-6c82BWBKQHcTBL9J2d0iHBHv-zjJzXHQ07F62vpc3Q6QI_rOvaJgaK2pMJYdQymFff8OfVMDQhp9LkOkxBPuJPmNHmHJxSvCcvpNtVMz-Hd495vruZFjtwYYygBQ


The token data ("eyJhb ... YygBQ") will be used by the Kubernetes integrator to authenticate against the API. Save the token as it will be required at the connector creation time.
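
The same token can also be extracted directly from the secret (a sketch using the secret name from the example above; the value is stored base64-encoded in the secret, while kubectl describe prints it already decoded):

$ kubectl get secret tsco-token-6x9vs -o jsonpath='{.data.token}' | base64 --decode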

Grant the Service Account Read-only Privileges

The following section outlines an example configuration on the Kubernetes cluster that is suggested in order to allow API access to the service account used by the integrator. We provide example configurations for the two most common authorization schemes used in Kubernetes clusters, namely RBAC (Role-Based Access Control) and ABAC (Attribute-Based Access Control). To identify which mode is configured in your Kubernetes cluster, please refer to the official project documentation: https://kubernetes.io/docs/admin/authorization/

RBAC Authorization

RBAC is the authorization mode enabled by default from Kubernetes 1.6 onward. To grant read-only privileges to the connector service account, a new cluster role is created. The new cluster role grants the connector read-only access to a specific set of API operations and resources.

kubectl create -f tsco-cluster-role.yml
kubectl create clusterrolebinding tsco-view-binding --clusterrole=tsco-cluster-role --serviceaccount=default:tsco

Here is an example policy file that can be used for this purpose:

$ cat tsco-cluster-role.yml


kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: tsco-cluster-role
rules:
- apiGroups: [""]
  resources: ["pods", "nodes", "namespaces", "replicationcontrollers", "persistentvolumes", "resourcequotas", "limitranges", "persistentvolumeclaims"]
  verbs: ["get", "list"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list"]
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["get", "list"]


ABAC Authorization

ABAC authorization grants access rights to users and service accounts via policies that are configured in a policy file. This file is then used by the Kubernetes API server via the startup parameter --authorization-policy-file.

In order to allow read-only access to the integrator service account, the following policy line needs to be appended to the aforementioned policy file:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:serviceaccount:default:tsco", "namespace": "*", "resource": "*", "apiGroup": "*", "readonly": true, "nonResourcePath": "*"}}

The apiserver will need to be restarted to pick up the new policy lines.


OAuth Authentication & Authorization (OpenShift Only)

When you implement OAuth Authentication, you cannot use a ServiceAccount type for authentication. You will have to set up a separate User Type and bind the ClusterRole to that User account similar to how we set up the ClusterRoleBinding for the TSCO ServiceAccount above in the RBAC Authorization section. See the tsco-cluster-role.yml file template above for an example.

kubectl create -f tsco-cluster-role.yml
kubectl create clusterrolebinding tsco-view-binding --clusterrole=tsco-cluster-role --user=tsco

Or if the tsco-view-binding exists already, then you will have to edit the tsco-view-binding ClusterRoleBinding:

kubectl edit clusterrolebinding tsco-view-binding

and add to the subjects section under the other kubernetes accounts:

- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: tsco
  namespace: default

This user comes from the identity provider (LDAP, basic auth, and so on). If the user was newly created, you will need to log in to the OpenShift console once beforehand with the new account for it to be completely registered with the system. In this example we created a user account called 'tsco'.

You can check what users are in the Kubernetes system with these commands:

kubectl get users
kubectl describe user <username>
Verify Kubernetes API access

To verify, execute the curl Linux command from one of the TrueSight Capacity Optimization servers:

$ curl -s https://<KUBERNETES_API_HOSTNAME>:<KUBERNETES_API_PORT>/api/v1/nodes -k -H "Authorization: Bearer <KUBERNETES_CONNECTOR_TOKEN>"

You should get a JSON document describing the nodes comprising the Kubernetes cluster.

{
 "kind": "NodeList",
 "apiVersion": "v1",
 "metadata": {
 "selfLink": "/api/v1/nodes",
 "resourceVersion": "37317619"
 },
 "items": [
 {
 "metadata": {

OAuth Authentication & Authorization (OpenShift Only)


When OAuth authentication is required, first obtain a token from the OAuth server


curl -vk 'https://<oauth_url>:<port>/oauth/authorize?client_id=openshift-challenging-client&response_type=token' -u <oauth_username> -H "X-CSRF-Token:xxx" -H "accept:text/plain"
*** OMITTED RESPONSE ***
> Location: https://<oauth_url>:<port>/oauth/token/implicit#access_token=BM0btgKP00TGPjLBBW-Y_Hb7A2wdSt99LAGtqh7QiJw&expires_in=86400&scope=user%3Afull&token_type=Bearer
*** OMITTED RESPONSE ***


We look at the Response Header "Location" for the access token generated. In the response example above, the token starts with BM0btg...


Then, with the new access token, you can follow the same steps as above, replacing the token with the new access token.

$ curl -s https://<KUBERNETES_API_HOSTNAME>:<KUBERNETES_API_PORT>/api/v1/nodes -k -H "Authorization: Bearer <OAUTH_ACCESS_TOKEN>"
Kubernetes 1.16 and onward Workaround

For Kubernetes version 1.16 and onward, there is a known incompatibility between TLS v1.3 and the Java version shipped with TrueSight Capacity Optimization. We provide a workaround that uses TLS v1.2 instead of the default TLS v1.3 for the connection.

We implemented a hidden property for using TLS v1.2 in order to work with Kubernetes 1.16 and onward. Add the hidden property "prometheus.use.tlsv12" in the ETL configuration, and set its value to "true".
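
To confirm that the endpoint accepts TLS v1.2 connections before changing the ETL property, you can test it with openssl (a sketch; substitute the Prometheus or Kubernetes API host and port):

echo -n | openssl s_client -connect <prometheus_url>:<port> -tls1_2 2>/dev/null | grep -E 'Protocol|Cipher'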

If you still encounter connection errors, there may also be a PKIX certificate issue to fix. To fix the "PKIX path validation failed: java.security.cert.CertPathValidatorException: signature check failed" error, follow these steps.

The TrustStore file is located in the <CPIT BASE>/jre/lib/security directory and is called cacerts. These steps need to be completed by the CPIT user, or whichever user installed TSCO.

Step 1: Grab the certificate files needed.
    To download the OAuth or Prometheus public certificate, run this command:
    echo -n | openssl s_client -connect <Openshift OAuth / Prometheus URL>:<Port> | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/oauth.crt

    To download the Kubernetes API public certificate, run this command:
    echo -n | openssl s_client -connect <Kubernetes API URL>:<KubeAPI Port> | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/kubeapi.crt

Step 2: Import the certificates into the TSCO EE JRE cacerts TrustStore. Keytool can be fussy with relative paths, so we suggest changing directories to <CPIT BASE>/jre/lib/security first.
    <CPIT BASE>/jre/bin/keytool -keystore cacerts -importcert -noprompt -trustcacerts -alias oauth -file /tmp/oauth.crt
    <CPIT BASE>/jre/bin/keytool -keystore cacerts -importcert -noprompt -trustcacerts -alias kubeapi -file /tmp/kubeapi.crt

    The default keytool TrustStore password is changeit.

Optional: Remove the old certificates:
    keytool -keystore cacerts -delete -noprompt -alias oauth
    keytool -keystore cacerts -delete -noprompt -alias kubeapi

Optional: If the above aliases do not work, you can list the certificates in the TrustStore using this command:
    keytool -list -v -keystore cacerts

Step II. Configure the ETL

A. Configuring the basic properties

Some of the basic properties display default values. You can modify these values if required.

To configure the basic properties:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties. You must configure properties in the following tabs: Run configuration, Entity catalog, and Connection.

  3. On the Run Configuration tab, select Moviri - k8s Prometheus Extractor from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.


  4. Click the Entity catalog tab, and select one of the following options:
    • Shared Entity Catalog:

      • From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
    • Private Entity Catalog: Select if this is the only ETL that extracts data from the k8s Prometheus resources.
  5. Click the Connection tab, and configure the following properties:


Property Name

Value Type

Required?

Default

Description

Prometheus – API URL

String

Yes


Prometheus API URL (http/https://hostname:port).

Port can be omitted.

Prometheus – API Version

String

Yes

v1

Prometheus API version; this should be the same as the Kubernetes API version, if the Kubernetes API is used.

Prometheus – API Authentication Method

String

Yes


Prometheus API authentication method. There are four methods that are supported: Authentication Token (Bearer), Basic Authentication (username/password), None (no authentication), OpenShift OAuth (Username/Password).

Prometheus – Username

String

No


Prometheus API username if the Authentication method is set to Basic Authentication.

Prometheus – Password

String

No


Prometheus API password if the Authentication method is set to Basic Authentication.

Prometheus – API Authentication Token

String

No


Prometheus API Authentication Token (Bearer Token) if the Authentication method is set to Authentication Token.

OAuth - URL

String

No


URL of the OpenShift OAuth server (http/https://hostname:port). Only applicable if the Authentication Method is set to "OpenShift OAuth".

Port can be omitted.

OAuth - Username

String

No


OpenShift OAuth username if the Authentication Method is set to OpenShift OAuth.

OAuth - Password

String

No


OpenShift OAuth password if the Authentication Method is set to OpenShift OAuth.

Prometheus – Use Proxy Server

Boolean

No


Whether a proxy server is used; applicable when the authentication method is Basic Authentication or None.

The proxy server supports HTTP.

The proxy server only supports Basic Authentication and None authentication.

Prometheus - Proxy Server Host

String

No


Proxy server host name.

Prometheus - Proxy Server Port

Number

No


Proxy server port. Default 8080.

Prometheus - Proxy Username

String

No


Proxy server username

Prometheus - Proxy Password

String

No


Proxy server password

Use Kubernetes API

Boolean

Yes


Whether to use the Kubernetes API.

Kubernetes Host         

String

Yes

 

Kubernetes API server host name

For Openshift, use the Openshift console FQDN (e.g., console.ose.bmc.com).

Kubernetes API Port

Number

Yes

 

Kubernetes API server port

For Openshift, use the same port as the console (typically 8443).

Kubernetes API Protocol

String

Yes

 HTTPS

Kubernetes API protocol, "HTTPS" in most cases

Kubernetes Authentication Token

String

No

 

Token of the integrator service account (see data source configuration section).

If the Authentication Method is set to OpenShift OAuth, then you do not need to put a token in this field. Please make sure the User account has the right permissions specified in the data source configuration section for Kubernetes API access.

6. Click the Prometheus Extraction tab, and configure the following properties:

Property Name

Value Type

Required

Default

Description

Data Resolution

String

Yes


Data resolution for data to be pulled from Prometheus into TSCO. The default is 5 minutes; any value less than 5 minutes will be reset to the default of 5 minutes.

Cluster Name

String

No


If the Kubernetes API is not in use, the cluster name must be specified.

Default Last Counter

String

Yes


Default earliest time the connector should be pulling data from in UTC. Format as YYYY-MM-DDTHH24:MI:SS.SSSZ, for example, 2019-01-01T19:00:00.000Z.

Lag Hour to the Current Time

String

No


Lag, in hours, relative to the current time.

Extract PODWorkload

Boolean

Yes

No

Whether to import pod workloads.

Filter PodWorkload using namespace

Boolean

No

No

If extracting pod workloads, choose whether to filter pod workloads by namespace. The filter takes a semicolon-separated list and only does exact matching.

Filter PodWorkloads using namespace allowlist/denylist

String

No

denylist

If filtering pod workloads, choose whether the namespace list is treated as a denylist or an allowlist.

Import only PODWorkloads not in given namespaces (semi-colon separated list)

String

No


If filtering using a denylist, data is not imported for pod workloads in the listed namespaces.

If filtering using an allowlist, data is imported only for pod workloads in the listed namespaces.

All other data for the given namespaces is still imported. When empty, pod workloads are not filtered. For example, a denylist value of kube-system;openshift-monitoring excludes pod workloads in those two namespaces.

Extract Controller Metrics

Boolean

Yes

Yes

Whether to import Controller (Deployment, DaemonSet, ReplicaSet, and so on) metrics. If neither Pod Workloads nor Controllers are selected, the ETL will not create any Controller metrics. If Controllers are not selected but Pod Workloads are selected for import, the ETL will create empty Controller entities.

Maximum Hours to Extract

String

No

120

Maximum number of hours of data the connector can pull. If left empty, the default of 5 days (120 hours) from the default last counter is used. Please consider that if no data is found on Prometheus, the integration will update the last counter to the end of the maximum extraction period, starting from the last counter.

Import by image metric on all entities?

String

No

No

Whether to import BYIMAGE metrics on cluster, namespace, controller, and node entities.

Import podworkload metric highmark counters?

String

No

No

Whether to import high-mark metrics for CPU_USED_NUM and MEM_USED on pod workloads for by-container metrics.

Import annotations

Boolean

No

No

Whether to import selected Kubernetes Annotations as labels. Requires the Kubernetes API to be used.

Annotations to import (semi-colon separated list)

String

No


Allowlist of Kubernetes Annotations to import as labels.

Import only the following tags (semi-colon separated list)

String

No


Import only the specified tag types (Kubernetes Label keys).

Specify the keys of the labels. They can be in the original format that appears in the Kubernetes API, or they can use underscore (_) as a delimiter. For example, node_role_kubernetes_io_compute and node-role.kubernetes.io/compute are equivalent and will both be imported as node_role_kubernetes_io_compute.
Multiple tag types can be separated by semicolons (;).

Enable Recovery Mode

Boolean

No

No

Only import data for entities between starting and ending timestamps. Object relationship data for this time period will not be collected.

Start timestamp of recovery mode

String

Yes


Starting timestamp of extraction when recovery mode is enabled.

End timestamp of recovery mode

String

Yes


Ending timestamp of extraction when recovery mode is enabled.

Partition Size (Default: 400, Max: 999)

String

No

400

Size of partition used for aggregation controller data.

Create UID configuration metrics

Boolean

No

No

Choose whether configuration metrics containing the UID value of entities are created. Requires the Kubernetes API to be enabled.

Extract additional max statistics for CPU/MEMORY QUOTA metrics for LIMITS/REQUESTS

Boolean

No

No

Choose to import data for CPU_QUOTA_LIMIT, CPU_QUOTA_REQUEST, MEM_QUOTA_LIMIT, MEM_QUOTA_REQUEST using max aggregation as a second statistic.

The following image shows a Run Configuration example for the “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus":

7. (Optional) Enable TLS v1.2 for Kubernetes 1.16 and above

If you are using Kubernetes 1.16 or OpenShift 4 and above, there is an incompatibility between Java and TLS v1.3. We provide a workaround that uses TLS v1.2 for the connection. To add the hidden property:

a. At the very bottom of the ETL configuration page, there is a link for manually modifying the ETL properties:

b. On the manual property editing page, add the property "prometheus.use.tlsv12"

c. Set the value to "true", and save the change

8. (Optional) Override the default values of the properties:


Property | Description

Module selection: Select one of the following options:

  • Based on datasource: This is the default selection.
  • Based on Open ETL template: Select only if you want to collect data that is not supported by TrueSight Capacity Optimization.

Module description: A short description of the ETL module.

Execute in simulation mode: By default, the ETL execution in simulation mode is selected to validate connectivity with the data source, and to ensure that the ETL does not have any configuration issues. In the simulation mode, the ETL does not load data into the database. This option is useful when you want to test a new ETL task. To run the ETL in the production mode, select No.

BMC recommends that you run the ETL in the simulation mode after ETL configuration and then run it in the production mode.
Property | Description
Associate new entities to

Specify the domain to which you want to add the entities created by the ETL.

Select one of the following options:

  • Existing domain: This option is selected by default. Select an existing domain from the Domain list. If the selected domain is already used by other hierarchy rules, select one of the following Domain conflict options:
    • Enrich domain tree: Select to create a new independent hierarchy rule for adding a new set of entities, relations, or both that are not defined by other ETLs.
    • ETL Migration: Select if the new ETL uses the same set of entities, relations, or both that are already defined by other ETLs.
  • New domain: Select a parent domain, and specify a name for your new domain.

By default, a new domain with the same ETL name is created for each ETL. When the ETL is created, a new hierarchy rule with the same name of the ETL task is automatically created in the active state. If you specify a different domain for the ETL, the hierarchy rule is updated automatically.

Property | Description

Task group: Select a task group to classify the ETL.

Running on scheduler: Select one of the following schedulers for running the ETL:
  • Primary Scheduler: Runs on the Application Server.
  • Generic Scheduler: Runs on a separate computer.
  • Remote: Runs on remote computers.

Maximum execution time before warning: Indicates the number of hours, minutes, or days for which the ETL must run before generating warnings or alerts, if any.
Frequency

Select one of the following frequencies to run the ETL:

  • Predefined: This is the default selection. Select a daily, weekly, or monthly frequency, and then select a time to start the ETL run accordingly.
  • Custom: Specify a custom frequency, select an appropriate unit of time, and then specify a day and a time to start the ETL run.

(Optional) B. Configuring the advanced properties


You can configure the advanced properties to change the way the ETL works or to collect additional metrics.


To configure the advanced properties:


  1. On the Add ETL page, click Advanced.
  2. Configure the following properties:

    Property | Description

    Run configuration name: Specify the name that you want to assign to this ETL task configuration. The default configuration name is displayed. You can use this name to differentiate between the run configuration settings of ETL tasks.

    Deploy status: Select the deploy status for the ETL task. For example, you can initially select Test and change it to Production after verifying that the ETL run results are as expected.

    Log level: Specify the level of details that you want to include in the ETL log file. Select one of the following options:
    • 1 - Light: Select to add the bare minimum activity logs to the log file.
    • 5 - Medium: Select to add the medium-detailed activity logs to the log file.
    • 10 - Verbose: Select to add detailed activity logs to the log file.

    Use log level 5 as a general practice. You can select log level 10 for debugging and troubleshooting purposes.

    Datasets

    Specify the datasets that you want to add to the ETL run configuration. The ETL collects data of metrics that are associated with these datasets.

    1. Click Edit.
    2. Select one (click) or more (shift+click) datasets from the Available datasets list and click >> to move them to the Selected datasets list.
    3. Click Apply.

    The ETL collects data of metrics associated with the datasets that are available in the Selected datasets list.

    Property | Description
    Metric profile selection

    Select the metric profile that the ETL must use. The ETL collects data for the group of metrics that is defined by the selected metric profile.

    • Use Global metric profile: This is selected by default. All the out-of-the-box ETLs use this profile.
    • Select a custom metric profile: Select the custom profile that you want to use from the Custom metric profile list. This list displays all the custom profiles that you have created.

    For more information about metric profiles, see Adding and managing metric profiles.
    Levels up to

    Specify the metric level that defines the number of metrics that can be imported into the database. The load on the database increases or decreases depending on the selected metric level.

    To learn more about metric levels, see Aging Class mapping.


    Property | Description

    Empty dataset behavior: Specify the action for the loader if it encounters an empty dataset:
    • Warn: Generate a warning about loading an empty dataset.
    • Ignore: Ignore the empty dataset and continue parsing.

    ETL log file name: The name of the file that contains the ETL run log. The default value is: %BASE/log/%AYEAR%AMONTH%ADAY%AHOUR%MINUTE%TASKID

    Maximum number of rows for CSV output: A numeric value to limit the size of the output files.

    CSV loader output file name: The name of the file that is generated by the CSV loader. The default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID
    Capacity Optimization loader output file name

    The name of the file that is generated by the TrueSight Capacity Optimization loader. The default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID

    Detail mode
    Specify whether you want to collect raw data in addition to the standard data. Select one of the following options:
    • Standard: Data will be stored in the database in different tables at the following time granularities: Detail (configurable, by default: 5 minutes), Hourly, Daily, and Monthly.
    • Raw also: Data will be stored in the database in different tables at the following time granularities: Raw (as available from the original data source), Detail (configurable, by default: 5 minutes), Hourly, Daily, and Monthly.
    • Raw only: Data will be stored in the database in a table only at Raw granularity (as available from the original data source).
    For more information, see Accessing data using public views and Sizing and scalability considerations.


    Remove domain suffix from datasource name (Only for systems): Select True to remove the domain from the data source name. For example, server.domain.com will be saved as server. The default selection is False.
    Leave domain suffix to system name (Only for systems): Select True to keep the domain in the system name. For example, server.domain.com will be saved as is. The default selection is False.
    Update grouping object definition (Only for systems): Select True if you want the ETL to update the grouping object definition for a metric that is loaded by the ETL. The default selection is False.
    Skip entity creation (Only for ETL tasks sharing lookup with other tasks): Select True if you do not want this ETL to create an entity and discard data from its data source for entities not found in Capacity Optimization. It uses one of the other ETLs that share a lookup to create a new entity. The default selection is False.

    Property | Description

    Hour mask: Specify a value to run the task only during particular hours within a day. For example, 0 – 23 or 1, 3, 5 – 12.
    Day of week mask: Select the days so that the task can be run only on the selected days of the week. To avoid setting this filter, do not select any option for this field.
    Day of month mask: Specify a value to run the task only on the selected days of a month. For example, 5, 9, 18, 27 – 31.
    Apply mask validation: Select False to temporarily turn off the mask validation without removing any values. The default selection is True.
    Execute after time: Specify a value in the hours:minutes format (for example, 05:00 or 16:00) to wait before the task is run. The task run begins only after the specified time is elapsed.
    Enqueueable: Specify whether you want to ignore the next run command or run it after the current task. Select one of the following options:
    • False: Ignores the next run command when a particular task is already running. This is the default selection.
    • True: Starts the next run command immediately after the current running task is completed.
    3. Click Save.

    The ETL tasks page shows the details of the newly configured Prometheus ETL:


Step III. Run the ETL

After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:

A. Simulation mode: Only validates connection to the data source, does not collect data. Use this mode when you want to run the ETL for the first time or after you make any changes to the ETL configuration.

B. Production mode: Collects data from the data source.

A. Running the ETL in the simulation mode

To run the ETL in the simulation mode:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click the ETL. The ETL details are displayed.



  3. In the Run configurations table, click Edit  to modify the ETL configuration settings.
  4. On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
  5. Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
  6. On the ETL tasks page, check the ETL run status in the Last exit column.
    OK: Indicates that the ETL ran without any error. You are ready to run the ETL in the production mode.
  7.  If the ETL run status is Warning, Error, or Failed:
    1. On the ETL tasks page, click  in the last column of the ETL name row.
    2. Check the log and reconfigure the ETL if required.
    3. Run the ETL again.
    4. Repeat these steps until the ETL run status changes to OK.

B. Running the ETL in the production mode

You can run the ETL manually when required or schedule it to run at a specified time.

Running the ETL manually

  1. On the ETL tasks page, click the ETL. The ETL details are displayed.
  2. In the Run configurations table, click Edit  to modify the ETL configuration settings. The Edit run configuration page is displayed.
  3. On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
  4. To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
    When the ETL is run, it collects data from the source and transfers it to the database.

Scheduling the ETL run

By default, the ETL is scheduled to run daily. You can customize this schedule by changing the frequency and period of running the ETL.

To configure the ETL run schedule:

  1. On the ETL tasks page, click the ETL, and click Edit Task . The ETL details are displayed.

  2. On the Edit task page, do the following, and click Save:

    • Specify a unique name and description for the ETL task.
    • In the Maximum execution time before warning field, specify the duration for which the ETL must run before generating warnings or alerts, if any.
    • Select a predefined or custom frequency for starting the ETL run. The default selection is Predefined.
    • Select the task group and the scheduler to which you want to assign the ETL task.
  3. Click Schedule. A message confirming the scheduling job submission is displayed.
    When the ETL runs as scheduled, it collects data from the source and transfers it to the database.

Step IV. Verify data collection

Verify that the ETL ran successfully and check whether the k8s Prometheus data is refreshed in the Workspace.

To verify whether the ETL ran successfully:

  1. In the console, click Administration > ETL and System Tasks > ETL tasks.
  2. In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.
To verify that the k8s Prometheus data is refreshed:

  1. In the console, click Workspace.
  2. Expand (Domain name) > Systems > k8s Prometheus > Instances.
  3. In the left pane, verify that the hierarchy displays the new and updated Prometheus instances.
  4. Click a k8s Prometheus entity, and click the Metrics tab in the right pane.
  5. Check if the Last Activity column in the Configuration metrics and Performance metrics tables displays the current date.


k8s Prometheus Workspace Entity Details

Entities

TSCO Entities

Prometheus Entity

Kubernetes Cluster

Cluster

Kubernetes Namespace

Namespace

Kubernetes Node

Node

Kubernetes Pod Workload

An aggregated group of pods running on the same controller; standalone static pods

Kubernetes Controller

DaemonSet, ReplicaSet, StatefulSet, ReplicationController

Kubernetes Persistent Volume

Persistent Volume, Persistent Volume Claim

Entity Relationship

TSCO Entities

Relationship Type

Description

Kubernetes Cluster



Kubernetes Namespace

KC_CONTAINS_KNS

Kubernetes Cluster contains Kubernetes Namespace

Kubernetes Node

KC_CONTAINS_KN

Kubernetes Cluster contains Kubernetes Node

Kubernetes Pod Workload

KNS_CONTAINS_PODWK

Kubernetes Namespace contains Pod Workload

Kubernetes Controller

KNS_CONTAINS_KD

Kubernetes Namespace contains Kubernetes Deployment

Kubernetes Persistent Volume

KC_CONTAINS_KPV

Kubernetes Cluster contains Kubernetes Persistent Volume

Hierarchy

The connector is able to replicate relationships and logical dependencies among these entities as they are found configured within the Kubernetes cluster.

In particular, the following structure is applied:

  • a Kubernetes Cluster is attached to the root of the hierarchy

  • each Kubernetes Cluster contains its own Nodes, Namespaces and Persistent Volumes

  • each Kubernetes Namespace contains its own Controllers and standalone Pod Workloads

  • each Kubernetes Namespace contains Persistent Volumes via the Persistent Volume Claim relationship
  • each Kubernetes Controller contains its Pod Workloads; a Pod Workload is an aggregated entity that represents a group of pods running on the same controller


Configuration and Performance metrics mapping

The following table lists all the metrics that are imported by the "Moviri Integrations - Kubernetes (Prometheus)" integration. If the user chooses not to import pod metrics, only the metrics imported at the pod level are skipped.

Data Source

Data Source Entity Label

BMC TrueSight Capacity Optimization Entity

BMC TrueSight Capacity Optimization Metric

Metric Type -  Performance (Perf) or Configuration (Conf) metric
PrometheusClusterKubernetes - ClusterCPU_LIMIT_MAXConf
PrometheusClusterKubernetes - ClusterCPU_REQUETS_MAXConf
PrometheusClusterKubernetes - ClusterMEM_LIMIT_MAXConf
PrometheusClusterKubernetes - ClusterMEM_REQUEST_MAXConf
PrometheusClusterKubernetes - ClusterBYSTATUS_KPOD_NUMPerf
PrometheusClusterKubernetes - ClusterCONTAINER_NUMPerf
PrometheusClusterKubernetes - ClusterCONTROLLER_NUMPerf
PrometheusClusterKubernetes - ClusterCPU_ALLOCATABLEPerf
PrometheusClusterKubernetes - ClusterCPU_LIMITPerf
PrometheusClusterKubernetes - ClusterCPU_NUMConf
PrometheusClusterKubernetes - ClusterCPU_REQUESTPerf
PrometheusClusterKubernetes - ClusterCPU_USED_NUMPerf
PrometheusClusterKubernetes - ClusterCPU_UTILPerf
Prometheus ClusterKubernetes - ClusterJOB_NUMPerf
PrometheusClusterKubernetes - ClusterKPOD_NUMPerf
PrometheusClusterKubernetes - ClusterKPOD_NUM_MAXConf
PrometheusClusterKubernetes - ClusterKUBERNETES_VERSIONConf
PrometheusClusterKubernetes - ClusterMEM_ACTIVEPerf
PrometheusClusterKubernetes - ClusterMEM_KLIMITPerf
PrometheusClusterKubernetes - ClusterMEM_PAGE_MAJOR_FAULT_RATEPerf
PrometheusClusterKubernetes - ClusterMEM_REQUESTPerf
PrometheusClusterKubernetes - ClusterMEM_USEDPerf
PrometheusClusterKubernetes - ClusterMEM_UTILPerf
PrometheusClusterKubernetes - ClusterMEM_REAL_USEDPerf
PrometheusClusterKubernetes - ClusterMEMORY_ALLOCATABLEPerf
PrometheusClusterKubernetes - ClusterSECRET_NUMPerf
PrometheusClusterKubernetes - ClusterSERVICE_NUMPerf
PrometheusClusterKubernetes - ClusterST_ALLOCATEDPerf
PrometheusClusterKubernetes - ClusterTOTAL_FS_FREEPerf
PrometheusClusterKubernetes - ClusterTOTAL_FS_SIZEConf
PrometheusClusterKubernetes - ClusterTOTAL_FS_USEDPerf
PrometheusClusterKubernetes - ClusterTOTAL_FS_UTILPerf
PrometheusClusterKubernetes - ClusterTOTAL_REAL_MEMConf
PrometheusClusterKubernetes - ClusterCPU_REQUEST_ALLOCATABLEPerf
PrometheusClusterKubernetes - ClusterMEM_REQUEST_ALLOCATABLEPerf
PrometheusClusterKubernetes- ClusterST_REQUEST_MAXConf
PrometheusClusterKubernetes - ClusterST_LIMIT_MAXConf
Kubernetes APIClusterKubernetes - ClusterCLUSTER_UUIDConf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerBYIMAGE_CPU_REQUESTPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerBYIMAGE_MEM_REQUESTPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerBYIMAGE_NUMPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerBYSTATUS_KPOD_NUMPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerCONTAINER_NUMPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerCONTROLLER_TYPEConf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerCPU_LIMITPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerCPU_REQUESTPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerCPU_USED_NUMPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerCPU_UTILPerf
Prometheus, KUBERNETES APIDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerCREATION_TIMEConf
KUBERNETES API (if HPA configured)DaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerHPA_STATUSConf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerKPOD_NUMPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerKPOD_REPLICA_UPTODATE_NUMPerf
KUBERNETES API (if HPA configured)DaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerMAX_REPLICASConf
KUBERNETES API (if HPA configured)DaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerHPA_CPU_THRESHOLDConf
KUBERNETES API (if HPA configured)DaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerHPA_CPU_THRESHOLD_TYPEConf
KUBERNETES API (if HPA configured)DaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerHPA_MEM_THRESHOLDConf
KUBERNETES API (if HPA configured)DaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerHPA_MEM_THRESHOLD_TYPEConf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerMEM_ACTIVEPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerMEM_KLIMITPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerMEM_REQUESTPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerMEM_USEDPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerMEM_REAL_UTILPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerTOTAL_FS_USEDPerf
KUBERNETES API (if HPA configured)DaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerMIN_REPLICASConf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerNET_BIT_RATEPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerNET_IN_BIT_RATEPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerNET_IN_BYTE_RATEPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerNET_IN_ERROR_RATEPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerNET_OUT_BIT_RATEPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerNET_OUT_BYTE_RATEPerf
PrometheusDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerNET_OUT_ERROR_RATEPerf
Kubernetes APIDaemonSet, ReplicaSet, StatefulSet, ReplicationControllerKubernetes - ControllerCONTROLLER_UUIDConf
PrometheusNamespaceKubernetes - NamespaceBYIMAGE_CPU_REQUESTPerf
PrometheusNamespaceKubernetes - NamespaceBYIMAGE_MEM_REQUESTPerf
PrometheusNamespaceKubernetes - NamespaceBYIMAGE_NUMPerf
PrometheusNamespaceKubernetes - NamespaceBYSTATUS_KPOD_NUMPerf
PrometheusNamespaceKubernetes - NamespaceCONTAINER_NUMPerf
PrometheusNamespaceKubernetes - NamespaceCPU_LIMITPerf
PrometheusNamespaceKubernetes - NamespaceCPU_LIMIT_MAXConf
PrometheusNamespaceKubernetes - NamespaceCPU_REQUESTPerf
PrometheusNamespaceKubernetes - NamespaceCPU_REQUEST_MAXConf
PrometheusNamespaceKubernetes - NamespaceCPU_USED_NUMPerf
PrometheusNamespaceKubernetes - NamespaceCPU_UTILPerf
Prometheus, KUBERNETES APINamespaceKubernetes - NamespaceCREATION_TIMEConf
PrometheusNamespaceKubernetes - NamespaceKPOD_NUMPerf
PrometheusNamespaceKubernetes - NamespaceKPOD_NUM_MAXConf
PrometheusNamespaceKubernetes - NamespaceMEM_ACTIVEPerf
PrometheusNamespaceKubernetes - NamespaceMEM_KLIMITPerf
PrometheusNamespaceKubernetes - NamespaceMEM_LIMIT_MAXConf
PrometheusNamespaceKubernetes - NamespaceMEM_REQUESTPerf
PrometheusNamespaceKubernetes - NamespaceMEM_REQUEST_MAXConf
PrometheusNamespaceKubernetes - NamespaceMEM_USEDPerf
PrometheusNamespaceKubernetes - NamespaceMEM_REAL_UTILPerf
PrometheusNamespaceKubernetes - NamespaceTOTAL_FS_USEDPerf
PrometheusNamespaceKubernetes - NamespaceNET_BIT_RATEPerf
PrometheusNamespaceKubernetes - NamespaceNET_IN_BIT_RATEPerf
PrometheusNamespaceKubernetes - NamespaceNET_IN_BYTE_RATEPerf
PrometheusNamespaceKubernetes - NamespaceNET_IN_ERROR_RATEPerf
PrometheusNamespaceKubernetes - NamespaceNET_OUT_BIT_RATEPerf
PrometheusNamespaceKubernetes - NamespaceNET_OUT_BYTE_RATEPerf
PrometheusNamespaceKubernetes - NamespaceNET_OUT_ERROR_RATEPerf
PrometheusNamespaceKubernetes - NamespaceST_REQUEST_MAXConf
PrometheusNamespaceKubernetes - NamespaceST_LIMIT_MAXConf
PrometheusNamespaceKubernetes - NamespaceST_ALLOCATEDPerf
PrometheusNamespaceKubernetes - NamespaceCPU_LIMITRANGESConf
PrometheusNamespaceKubernetes - NamespaceMEM_LIMITRANGESConf
Kubernetes APINamespaceKubernetes - NamespaceNAMESPACE_UUIDConf
PrometheusNodeKubernetes - NodeBYSTATUS_KPOD_NUMPerf
PrometheusNodeKubernetes - NodeCONTAINER_NUMPerf
PrometheusNodeKubernetes - NodeCPU_ALLOCATABLEPerf
PrometheusNodeKubernetes - NodeCPU_LIMITPerf
PrometheusNodeKubernetes - NodeCPU_NUMConf
PrometheusNodeKubernetes - NodeCPU_REQUESTPerf
PrometheusNodeKubernetes - NodeCPU_USED_NUMPerf
PrometheusNodeKubernetes - NodeCPU_UTILPerf
Prometheus, KUBERNETES APINodeKubernetes - NodeCREATION_TIMEConf
PrometheusNodeKubernetes - NodeKPOD_NUMPerf
PrometheusNodeKubernetes - NodeKPOD_NUM_MAXConf
PrometheusNodeKubernetes - NodeKUBERNETES_VERSIONConf
PrometheusNodeKubernetes - NodeMAINTENANCE_MODEConf
PrometheusNodeKubernetes - NodeMEM_ACTIVEPerf
PrometheusNodeKubernetes - NodeMEM_KLIMITPerf
PrometheusNodeKubernetes - NodeMEM_PAGE_MAJOR_FAULT_RATEPerf
PrometheusNodeKubernetes - NodeMEM_REAL_UTILPerf
PrometheusNodeKubernetes - NodeMEM_REQUESTPerf
PrometheusNodeKubernetes - NodeMEM_USEDPerf
PrometheusNodeKubernetes - NodeMEM_UTILPerf
PrometheusNodeKubernetes - NodeMEMORY_ALLOCATABLEPerf
PrometheusNodeKubernetes - NodeNET_BIT_RATEPerf
PrometheusNodeKubernetes - NodeNET_IN_BIT_RATEPerf
PrometheusNodeKubernetes - NodeNET_IN_BYTE_RATEPerf
PrometheusNodeKubernetes - NodeNET_IN_ERROR_RATEPerf
PrometheusNodeKubernetes - NodeNET_OUT_BIT_RATEPerf
PrometheusNodeKubernetes - NodeNET_OUT_BYTE_RATEPerf
PrometheusNodeKubernetes - NodeNET_OUT_ERROR_RATEPerf
PrometheusNodeKubernetes - NodeOS_TYPEConf
PrometheusNodeKubernetes - NodeTOTAL_FS_FREEPerf
PrometheusNodeKubernetes - NodeTOTAL_FS_SIZEConf
PrometheusNodeKubernetes - NodeTOTAL_FS_USEDPerf
PrometheusNodeKubernetes - NodeTOTAL_FS_UTILPerf
PrometheusNodeKubernetes - NodeTOTAL_REAL_MEMConf
PrometheusNodeKubernetes - NodeLOAD_AVGPerf
PrometheusNodeKubernetes - NodeUPTIMEConf
PrometheusNodeKubernetes - NodeCPU_REQUEST_ALLOCATABLEPerf
PrometheusNodeKubernetes - NodeMEM_REQUEST_ALLOCATABLEPerf
Kubernetes APINodeKubernetes - NodeNODE_UUIDConf
Kubernetes APIPersistent VolumeKubernetes - Persistent VolumeCREATION_TIMEConf
PrometheusPersistent VolumeKubernetes - Persistent VolumeST_ALLOCATEDPerf
Kubernetes APIPersistent VolumeKubernetes - Persistent VolumeST_PATHConf
Kubernetes APIPersistent VolumeKubernetes - Persistent VolumeST_SIZEPerf
Kubernetes APIPersistent VolumeKubernetes - Persistent VolumeST_TYPEConf
Kubernetes APIPersistent VolumeKubernetes - Persistent VolumePV_UUIDConf
PrometheusPersistent VolumeKubernetes - Persistent VolumeST_CONSUMED_CAPACITYPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_CPU_LIMITPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_CPU_REQUESTPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_CPU_USED_NUMPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_CPU_USED_NUM_HMPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_IMAGE_CPU_LIMITPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_IMAGE_CPU_REQUESTPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_IMAGE_CPU_USED_NUMPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_IMAGE_KPOD_NUMPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_IMAGE_MEM_ACTIVEPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_IMAGE_MEM_KLIMITPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_IMAGE_MEM_REQUESTPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_IMAGE_MEM_USEDPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_IMAGE_MEM_RSSPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_IMAGE_RESTART_COUNTPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_KPOD_NUMPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_MEM_ACTIVEPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_MEM_KLIMITPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_MEM_REQUESTPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_MEM_USEDPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_MEM_RSSPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_MEM_USED_HMPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadBYCONT_RESTART_COUNTPerf
PrometheusPod, Container, ImageKubernetes - Pod WorkloadCONTAINER_NUMPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadCPU_LIMITPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadCPU_REQUESTPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadCPU_USED_NUMPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadKPOD_NUMPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadMEM_ACTIVEPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadMEM_KLIMITPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadMEM_REQUESTPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadMEM_USEDPerf
PrometheusPod, Container, ImageKubernetes - Pod WorkloadMEM_REAL_UTILPerf
PrometheusPod, Container, ImageKubernetes - Pod WorkloadTOTAL_FS_USEDPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadMEM_RSSPerf
Prometheus Pod (if JMX-Exporter enabled)Kubernetes - Pod WorkloadNONHEAPMEM_MAXPerf
Prometheus Pod (if JMX-Exporter enabled)Kubernetes - Pod WorkloadHEAPMEM_MAXPerf
Prometheus Pod (if JMX-Exporter enabled)Kubernetes - Pod WorkloadNONHEAPMEM_USEDPerf
Prometheus Pod (if JMX-Exporter enabled)Kubernetes - Pod WorkloadHEAPMEM_USEDPerf
Prometheus Pod (if JMX-Exporter enabled)Kubernetes - Pod WorkloadNONHEAPMEM_COMMITTEEDPerf
Prometheus Pod (if JMX-Exporter enabled)Kubernetes - Pod WorkloadHEAPMEM_COMMITTEEDPerf
Prometheus Pod (if JMX-Exporter enabled)Kubernetes - Pod WorkloadNONHEAPMEM_FREEPerf
Prometheus Pod (if JMX-Exporter enabled)Kubernetes - Pod WorkloadHEAPMEM_FREEPerf
Prometheus Pod (if JMX-Exporter enabled)Kubernetes - Pod WorkloadHEAPMEM_UTILPerf
Prometheus Pod (if JMX-Exporter enabled)Kubernetes - Pod WorkloadGC_EVENTSPerf
Prometheus Pod (if JMX-Exporter enabled)Kubernetes - Pod WorkloadGC_TIMEPerf
Prometheus Pod, Container, ImageKubernetes - Pod WorkloadRESTART_COUNTPerf
Kubernetes APIPodKubernetes - Pod WorkloadWORKLOAD_UUIDConf
PrometheusPod, Container, ImageKubernetes - Pod WorkloadNET_HAPROXY_CONNECTIONSPerf
PrometheusPod, Container, ImageKubernetes - Pod WorkloadNET_HAPROXY_BYTES_INPerf
PrometheusPod, Container, ImageKubernetes - Pod WorkloadNET_HAPROXY_BYTES_OUTPerf

For a detailed mapping between Prometheus API queries and each metric, check out here (this excludes Derived metrics and metrics coming from the Kubernetes API).


Derived Metrics

Data Source Entity | Data Source Metric | BMC TrueSight Capacity Optimization Metric | BMC TrueSight Capacity Optimization Entity | Metric Type - Performance (Perf) or Configuration (Conf) metric
Kubernetes - Cluster | CPU_USED_NUM / CPU_LIMIT | CPU_UTIL_LIMIT | Kubernetes - Cluster | Perf
Kubernetes - Cluster | CPU_USED_NUM / CPU_REQUEST | CPU_UTIL_REQUEST | Kubernetes - Cluster | Perf
Kubernetes - Cluster | MEM_USED / MEM_KLIMIT | MEM_UTIL_LIMIT | Kubernetes - Cluster | Perf
Kubernetes - Cluster | MEM_USED / MEM_REQUEST | MEM_UTIL_REQUEST | Kubernetes - Cluster | Perf
Kubernetes - Cluster | MEM_KLIMIT / MEM_LIMIT_MAX | MEM_LIMIT_QUOTA | Kubernetes - Cluster | Perf
Kubernetes - Cluster | MEM_REQUEST / MEM_REQUEST_MAX | MEM_REQUEST_QUOTA | Kubernetes - Cluster | Perf
Kubernetes - Cluster | CPU_LIMIT / CPU_LIMIT_MAX | CPU_LIMIT_QUOTA | Kubernetes - Cluster | Perf
Kubernetes - Cluster | CPU_REQUEST / CPU_REQUEST_MAX | CPU_REQUEST_QUOTA | Kubernetes - Cluster | Perf
Kubernetes - Cluster | ST_SIZE | ST_SIZE | Kubernetes - Cluster | Perf
Kubernetes - Controller | CPU_USED_NUM / CPU_LIMIT | CPU_UTIL_LIMIT | Kubernetes - Controller | Perf
Kubernetes - Controller | CPU_USED_NUM / CPU_REQUEST | CPU_UTIL_REQUEST | Kubernetes - Controller | Perf
Kubernetes - Controller | MEM_USED / MEM_KLIMIT | MEM_UTIL_LIMIT | Kubernetes - Controller | Perf
Kubernetes - Controller | MEM_USED / MEM_REQUEST | MEM_UTIL_REQUEST | Kubernetes - Controller | Perf
Kubernetes - Namespace | CPU_USED_NUM / CPU_LIMIT | CPU_UTIL_LIMIT | Kubernetes - Namespace | Perf
Kubernetes - Namespace | CPU_USED_NUM / CPU_REQUEST | CPU_UTIL_REQUEST | Kubernetes - Namespace | Perf
Kubernetes - Namespace | MEM_USED / MEM_KLIMIT | MEM_UTIL_LIMIT | Kubernetes - Namespace | Perf
Kubernetes - Namespace | MEM_USED / MEM_REQUEST | MEM_UTIL_REQUEST | Kubernetes - Namespace | Perf
Kubernetes - Namespace | MEM_KLIMIT / MEM_LIMIT_MAX | MEM_LIMIT_QUOTA | Kubernetes - Namespace | Perf
Kubernetes - Namespace | MEM_REQUEST / MEM_REQUEST_MAX | MEM_REQUEST_QUOTA | Kubernetes - Namespace | Perf
Kubernetes - Namespace | CPU_LIMIT / CPU_LIMIT_MAX | CPU_LIMIT_QUOTA | Kubernetes - Namespace | Perf
Kubernetes - Namespace | CPU_REQUEST / CPU_REQUEST_MAX | CPU_REQUEST_QUOTA | Kubernetes - Namespace | Perf
Kubernetes - Node | CPU_USED_NUM / CPU_LIMIT | CPU_UTIL_LIMIT | Kubernetes - Node | Perf
Kubernetes - Node | CPU_USED_NUM / CPU_REQUEST | CPU_UTIL_REQUEST | Kubernetes - Node | Perf
Kubernetes - Node | MEM_USED / MEM_KLIMIT | MEM_UTIL_LIMIT | Kubernetes - Node | Perf
Kubernetes - Node | MEM_USED / MEM_REQUEST | MEM_UTIL_REQUEST | Kubernetes - Node | Perf
Kubernetes - Node | TAG "node_role_kubernetes_io" | SERVER_TYPE | Kubernetes - Node | Conf
Kubernetes - Node | MEM_ACTIVE / TOTAL_REAL_MEM | MEM_REAL_UTIL | Kubernetes - Node | Perf
Kubernetes - Pod Workload | BYCONT_IMAGE_CPU_USED_NUM / BYCONT_IMAGE_CPU_LIMIT | BYCONT_IMAGE_CPU_UTIL_LIMIT | Kubernetes - Pod Workload | Perf
Kubernetes - Pod Workload | BYCONT_IMAGE_CPU_USED_NUM / BYCONT_IMAGE_CPU_REQUEST | BYCONT_IMAGE_CPU_UTIL_REQUEST | Kubernetes - Pod Workload | Perf
Kubernetes - Pod Workload | BYCONT_IMAGE_MEM_USED / BYCONT_IMAGE_MEM_KLIMIT | BYCONT_IMAGE_MEM_UTIL_LIMIT | Kubernetes - Pod Workload | Perf
Kubernetes - Pod Workload | BYCONT_IMAGE_MEM_USED / BYCONT_IMAGE_MEM_REQUEST | BYCONT_IMAGE_MEM_UTIL_REQUEST | Kubernetes - Pod Workload | Perf
Kubernetes - Pod Workload | BYCONT_CPU_USED_NUM / BYCONT_CPU_LIMIT | BYCONT_CPU_UTIL_LIMIT | Kubernetes - Pod Workload | Perf
Kubernetes - Pod Workload | BYCONT_CPU_USED_NUM / BYCONT_CPU_REQUEST | BYCONT_CPU_UTIL_REQUEST | Kubernetes - Pod Workload | Perf
Kubernetes - Pod Workload | BYCONT_MEM_USED / BYCONT_MEM_KLIMIT | BYCONT_MEM_UTIL_LIMIT | Kubernetes - Pod Workload | Perf
Kubernetes - Pod Workload | BYCONT_MEM_USED / BYCONT_MEM_REQUEST | BYCONT_MEM_UTIL_REQUEST | Kubernetes - Pod Workload | Perf
Kubernetes - Pod Workload | CPU_USED_NUM / CPU_LIMIT | CPU_UTIL_LIMIT | Kubernetes - Pod Workload | Perf
Kubernetes - Pod Workload | CPU_USED_NUM / CPU_REQUEST | CPU_UTIL_REQUEST | Kubernetes - Pod Workload | Perf
Kubernetes - Pod Workload | MEM_USED / MEM_KLIMIT | MEM_UTIL_LIMIT | Kubernetes - Pod Workload | Perf
Kubernetes - Pod Workload | MEM_USED / MEM_REQUEST | MEM_UTIL_REQUEST | Kubernetes - Pod Workload | Perf
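All of the derived utilization metrics above follow the same pattern: a usage value divided by the matching limit, request, or quota maximum. A minimal sketch of that computation, including the guard needed when no limit or request is configured (illustrative only, not the connector's code):

from typing import Optional

def derive_util(used: float, bound: Optional[float]) -> Optional[float]:
    """Return used/bound as a percentage, or None when no limit/request is set."""
    if not bound:          # limit or request missing (None) or zero
        return None
    return 100.0 * used / bound

# Hypothetical values: CPU_USED_NUM = 0.35 cores against CPU_LIMIT = 0.5 cores
print(derive_util(0.35, 0.5))   # 70.0  -> CPU_UTIL_LIMIT
print(derive_util(0.35, None))  # None  -> metric not produced without a limit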

Lookup Field Considerations

Entity Type | Strong Lookup Field | Others
Kubernetes - Cluster | KUBE_CLUSTER&&KUBE_TYPE |
Kubernetes - Namespace | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_NS_NAME |
Kubernetes - Node | KUBE_CLUSTER&&KUBE_TYPE&&HOSTNAME&&NAME | _COMPATIBILITY_
Kubernetes - Controller | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_NS_NAME&&KUBE_DP_NAME |
Kubernetes - Pod Workload | KUBE_CLUSTER&&KUBE_NS_NAME&&KUBE_DP_NAME&&KUBE_DP_TYPE&&KUBE_WL_NAME&&KUBE_TYPE |
Kubernetes - Persistent Volume | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_PV_NAME |
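The strong lookup fields are concatenated with "&&" to build the key that keeps entities consistent across ETL runs (and with other connectors sharing the same lookup). A hypothetical sketch of how such a key could be composed for a namespace (the function and values are illustrative, not part of the connector):

def lookup_key(*fields: str) -> str:
    # Join the strong lookup fields with "&&", in the order shown in the table above.
    return "&&".join(fields)

# Kubernetes - Namespace: KUBE_CLUSTER && KUBE_TYPE && KUBE_NS_NAME (hypothetical values)
print(lookup_key("prod-cluster", "openshift", "billing"))
# -> prod-cluster&&openshift&&billing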


 

Tag Mapping (Optional)

The “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus” supports importing Kubernetes labels as Tags in TrueSight Capacity Optimization.
Label keys are imported as Tag types, and label values are imported as Tag values. Any special word delimiter appearing in a Kubernetes label key is replaced with an underscore (_) in the Tag type (see the sketch after the table below).
In particular, labels are imported for the following entities:

Data Source | Data Source Entity | BMC TrueSight Capacity Optimization Entity
Prometheus | DAEMONSET | Kubernetes - Controller
Prometheus | STATEFULSET | Kubernetes - Controller
Kubernetes API | REPLICASET | Kubernetes - Controller
Kubernetes API | REPLICATIONCONTROLLER | Kubernetes - Controller
Prometheus | POD WORKLOAD | Kubernetes - Pod Workload
Prometheus | NODE | Kubernetes - Node
Prometheus | NAMESPACE | Kubernetes - Namespace
Prometheus | PERSISTENT VOLUME | Kubernetes - Persistent Volume
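As noted above, special word delimiters in a Kubernetes label key become underscores when the key is turned into a Tag type. A small illustrative sketch of that kind of normalization (the exact set of characters the connector replaces may differ):

import re

def label_key_to_tag_type(key: str) -> str:
    # Replace common delimiters (dot, slash, dash) with "_". The connector's exact
    # rules may differ; this only illustrates the idea described above.
    return re.sub(r"[./\-]", "_", key)

print(label_key_to_tag_type("node-role.kubernetes.io/worker"))
# -> node_role_kubernetes_io_worker
print(label_key_to_tag_type("app.kubernetes.io/name"))
# -> app_kubernetes_io_name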

Here is a snapshot of what the tags look like:



k8s Heapster to k8s Prometheus Migration

The “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus” supports a seamless transition from the entities and metrics imported by the “Moviri Integrator for TrueSight Capacity Optimization – k8s Heapster”. Please follow these steps to migrate between the two integrators:

  1. Stop the “Moviri Integrator for TrueSight Capacity Optimization – k8s Heapster” ETL task.
  2. Install and configure the “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus”, ensuring that the lookup is shared with the “Moviri Integrator for TrueSight Capacity Optimization – k8s Heapster” ETL task.
  3. Start the “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus” ETL task.


Pod Optimization - Pod Workloads replace Pods


The “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus” introduces a new entity, "pod workload", as of v20.02.01. A pod workload is an aggregated entity that groups the pods running under the same controller; it is the direct child of that controller and uses the same name as the parent controller. Pods are no longer imported as separate entities.


If you used the “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus” and imported pods before, once you upgrade to this version all the Kubernetes pods will be gone and left in All systems and business drivers -> "Unassigned".

Pod Workload

Entity | A pod workload is an aggregated entity that represents all the pods running under the same controller. The pod workload uses the same name as the controller the pods are running on.
Hierarchy | A pod workload is the direct child of the controller that the aggregated pods run on. For standalone pods, the pod workload is exactly the standalone pod itself and is a child of the namespace.
Metrics Meaning |

Global metrics: provide multiple statistic values (avg, max, min, sum) for the last hour, representing the max, min, sum, and average of the per-pod averages across all pods running on that controller.

BYCONT metrics: provide multiple statistic values (avg, max, min, sum) for the last hour, representing the max, min, sum, and average of the per-container averages on that controller.

BYCONT highmark counters: BYCONT_CPU_USED_NUM_HM and BYCONT_MEM_USED_HM provide highmark statistics computed on 1-minute resolution data. The Max value is the 95th percentile of the max container's 1-minute resolution data; the Avg value is the 90th percentile; the Min value is the 75th percentile; the Sum value is the 95th percentile of the 1-minute resolution data summed across all containers on that controller.

BYCONT_IMAGE metrics: provide multiple statistic values (avg, max, min, sum) for the last hour, representing the max, min, sum, and average of the per-container, per-image averages on that controller. The container name and image name form the sub-entity name, joined with "::" (e.g. containername::imagename).
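A short sketch of the highmark logic described above, assuming one hour of 1-minute samples and interpreting "the max container" as the busiest container (illustrative only; the connector's actual aggregation may differ in detail):

def percentile(samples, p):
    # Nearest-rank percentile over the sorted 1-minute samples.
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1)))
    return ordered[idx]

# A full hour would have 60 one-minute samples; a short hypothetical series keeps
# the example readable.
busiest_container = [0.30, 0.42, 0.38, 0.55, 0.61, 0.47]   # busiest container's samples
per_minute_totals = [0.90, 1.10, 1.00, 1.40, 1.50, 1.20]   # per-minute sums across containers

highmark = {
    "max": percentile(busiest_container, 95),
    "avg": percentile(busiest_container, 90),
    "min": percentile(busiest_container, 75),
    "sum": percentile(per_minute_totals, 95),
}
print(highmark)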

Here are some screenshots of the hierarchy and metrics:

Common Issues



Error Message / Behavior: Query errors HTTP 422 Unprocessable Entity. These errors appear intermittently, and the number of occurrences can vary considerably between runs.
Cause: This is usually caused by Prometheus being rebuilt or restarted. For a couple of days after a Prometheus rebuild or configuration reload you may see this error; it normally disappears on its own once Prometheus is running stably again.
Solution: No action is usually required; the errors go away as Prometheus stabilizes.

Error Message / Behavior: Prometheus is running fine but no data is pulled.
Cause: This is usually caused by the ETL last counter being set too far in the past. Prometheus has a data retention period (15 days by default, configurable); if the ETL tries to extract data older than the retention period, no data is returned. The Prometheus status page shows the configured value in the "storage retention" field.
Solution: Modify the default last counter to a more recent date.
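The configured retention can also be read from the Prometheus HTTP API instead of the status page; a sketch (the endpoint URL is a placeholder):

import requests

PROMETHEUS_URL = "http://prometheus.example.com:9090"   # placeholder endpoint

flags = requests.get(f"{PROMETHEUS_URL}/api/v1/status/flags", timeout=30).json()["data"]
# The retention flag value (e.g. "15d") indicates how far back the ETL can extract data.
print("storage.tsdb.retention.time =", flags.get("storage.tsdb.retention.time"))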
Error Message / Behavior: 504 Gateway Timeout.
Cause: These 504 query errors (the server did not respond in time) are related to the route timeout used on OpenShift. The timeout can be configured on a per-route basis; for example, the Prometheus route can be raised to the 2-minute timeout that is also configured on the Prometheus backend. Please follow this link to understand what the configured timeout is and how it can be increased:
https://docs.openshift.com/container-platform/4.6/networking/routes/route-configuration.html
Solution: Increase the route timeout on the OpenShift side.

Data Verification

The following sections provide some indications on how to verify on Prometheus that all the prerequisites are in place before starting data collection.

Verify Prometheus Build information

Verify the Prometheus "Last successful configuration reload" (from the Prometheus UI, check "Status > Runtime & Build Information").
If the "Last successful configuration reload" is less than 3 days old, ask the customer to evaluate the status of the integration over the next 2-3 days.

Verify the Status of Prometheus Target Services 

Verify the status of the Prometheus targets (from the Prometheus UI, check "Status > Targets"); a quick API-based check is sketched after the list below.

  • Check the status of "node-exporter" (there should be 1 instance running for each node in the cluster)
  • Check the status of "kube-state-metrics" (there should be at least 1 instance running)
  • Check the status of "kubelet" (there should be at least 1 instance running for each node in the cluster)

Verify data availability in Prometheus Tables

Verify that the following Prometheus series contain data (from the Prometheus UI); an equivalent API check is sketched after the list below.

  • "kube_pod_container_info" when missing Pod Workload, Controller, Namespace (but also Cluster and Node for Requests and Limits metrics)
  • "kube_node info" when missing Node and Cluster metrics.



Comments

  1. Marc Ewinger

    Only seeing an error message here - "{include} could not be rendered. The included page could not be found."

    Jan 09, 2023 03:14
    1. Bipin Inamdar

      Hi Marc,

      Looks like some temporary wiki issue. Please check now.

      Jan 09, 2023 03:23
  2. Javier Menendez

    Hi, Could be possible to add the Entity relationship section like https://docs.bmc.com/docs/capacityoptimization/btco2002/metrics-collected-by-vmware-etls-914178175.html ?

    Mar 21, 2023 05:04
    1. Bipin Inamdar

      Hi Javier,

      As per your suggestion, the entity relationship section has been added.

      Apr 13, 2023 04:38
  3. Ajay Ranbhise

    As per case-01616928 Moviri has confirmed below for Openshift 4.12 version support The latest releases of “Moviri Integration for Kubernetes - Prometheus” supports OpenShift 4.12 (from ETL version v.23.007), both for TSCO and BHCO.

    Jul 10, 2023 11:05
    1. Bipin Inamdar

      Hi Ajay,

      Thanks for the feedback. The version has been updated.

      Jul 11, 2023 03:32