Moviri Integrator for TrueSight Capacity Optimization – k8s (Kubernetes) Prometheus

“Moviri Integrator for TrueSight Capacity Optimization – k8s (Kubernetes) Prometheus” is an additional component of the BMC TrueSight Capacity Optimization product. It extracts data from the Kubernetes cluster management system, a leading solution for managing cloud-native containerized environments. Relevant capacity metrics are loaded into BMC TrueSight Capacity Optimization, which provides advanced analytics over the extracted data in the form of an interactive dashboard, the Kubernetes View.

The integration supports the extraction of both performance and configuration data across the different components of the Kubernetes system and can be configured via parameters that allow entity filtering and many other settings. Furthermore, the connector is able to replicate relationships and logical dependencies among entities such as clusters, nodes, namespaces, deployments, and pods.

The documentation is targeted at BMC TrueSight Capacity Optimization administrators, in charge of configuring and monitoring the integration between BMC TrueSight Capacity Optimization and Kubernetes.

Importing data from Kubernetes controllers (Deployment, ReplicationController, ReplicaSet, StatefulSet, and DaemonSet) and Google Kubernetes Engine is supported only when you apply Service Pack 1 (11.5.01) of TrueSight Capacity Optimization 11.5.


Step I. Complete the preconfiguration tasks

Step II. Configure the ETL

Step III. Run the ETL

Step IV. Verify the data collection

k8s Heapster to k8s Prometheus Migration

Step I. Complete the preconfiguration tasks

Check that the required versions are supported

Supported versions:

  • Kubernetes versions 1.5 to 1.12

  • Rancher 1.6 (when managing a Kubernetes cluster version 1.5 to 1.7)

  • Openshift 3.11 to 3.12

  • GKE

  • Prometheus 2.7 onward

Check the data source configuration

The integrator requires the Prometheus monitoring component to be correctly monitoring the various entities supported by the integration via the kube-state-metrics service.

The connector can also optionally access the Kubernetes API. Access to the Kubernetes API is not mandatory to configure the ETL, but it is strongly suggested. Without access to the Kubernetes API, the integrator will not be able to import the following information:

  • Persistent volume capacity-relevant metrics

  • Kubernetes labels for ReplicaSet and ReplicationController entities

Generate access to Prometheus API

Access to the Prometheus API depends on the Kubernetes distribution and the Prometheus configuration. The following sections describe the standard procedure for the supported platforms.

OpenShift

OpenShift Container Platform Monitoring ships with a Prometheus instance for cluster monitoring and a central Alertmanager cluster. You can get the addresses for accessing Prometheus, Alertmanager, and Grafana web UIs by running:

$ oc -n openshift-monitoring get routes


NAME                HOST/PORT                                          ...
alertmanager-main   alertmanager-main-openshift-monitoring.apps.url.openshift.com ...

grafana             grafana-openshift-monitoring.apps.url.openshift.com           ...

prometheus-k8s      prometheus-k8s-openshift-monitoring.apps.url.openshift.com    ...

Alternatively, you can find them in the OpenShift console under "Networking" → "Routes" or "Networking" → "Services".

Make sure to prefix these addresses with https://. You cannot access the web UIs using an unencrypted connection.

Authentication is performed against the OpenShift Container Platform identity and uses the same credentials or means of authentication as are used elsewhere in OpenShift Container Platform. You need to use a role that has read access to all namespaces, such as the cluster-monitoring-view cluster role.

Rancher

When installing Prometheus from the Catalog Apps, the default configuration sets up a Layer 7 ingress using xip.io. From the Load Balancing tab, you can see the endpoint to access Prometheus.

Google Kubernetes Engine

When Prometheus is installed on Google Cloud behind a load balancer or ingress, the Prometheus endpoint is the load balancer or ingress address. In the cluster console, open "Services & Ingress" to find the endpoint used to access Prometheus.


Kubectl

The Kubernetes command-line tool kubectl can also be used to locate the Prometheus endpoint. Run the following command, replacing <namespace> with the namespace where the Prometheus pod is running; the external IP address column shows the URL of Prometheus:

kubectl get service -n <namespace>


Other Distributions

Prometheus does not directly support authentication for connections to the Prometheus expression browser and HTTP API. If you would like to enforce basic authentication for those connections, the Prometheus documentation recommends using Prometheus in conjunction with a reverse proxy and applying authentication at the proxy layer. Please refer to the official Prometheus documentation for configuring an NGINX reverse proxy with basic authentication.
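As an illustration of that reverse-proxy approach, here is a minimal sketch of an NGINX server block, not an official configuration: the host name, certificate paths, password file, and the assumption that Prometheus listens on localhost:9090 are all placeholders to adapt.

```nginx
# Minimal sketch: NGINX reverse proxy adding basic authentication in front of
# Prometheus (assumed to listen on localhost:9090; all paths are placeholders).
server {
    listen 443 ssl;
    server_name prometheus.example.com;             # hypothetical host name

    ssl_certificate     /etc/nginx/prometheus.crt;  # placeholder certificate
    ssl_certificate_key /etc/nginx/prometheus.key;

    location / {
        auth_basic           "Prometheus";
        auth_basic_user_file /etc/nginx/.htpasswd;  # e.g. created with htpasswd
        proxy_pass           http://localhost:9090;
    }
}
```

With a proxy like this in place, the integrator would be configured with Basic Authentication and the proxy address as the Prometheus API URL.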

Verify Prometheus API access

To verify, execute the following command from one of the TrueSight Capacity Optimization servers:

When authentication is not required:

curl -k "https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info"

When basic authentication is required:

curl -k "https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info" -u <username>

When bearer token authentication is required:

curl -k "https://<prometheus_url>:<port>/api/v1/query?query=kube_node_info" -H "Authorization: Bearer <token>"
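In each case, a reachable endpoint answers with the Prometheus API v1 JSON envelope, whose status field is "success" when the query was accepted. A minimal sketch of checking that field, using a canned sample response in place of real curl output:

```shell
# Sketch: what a healthy response to the verification query looks like.
# The Prometheus v1 API wraps every result in an envelope with a "status"
# field; "success" means the endpoint is reachable and the query was accepted.
response='{"status":"success","data":{"resultType":"vector","result":[]}}'
echo "$response" | grep -q '"status":"success"' && echo "Prometheus API OK"
```

In practice you would pipe the output of the curl commands above through the same grep check.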
Generate Access to Kubernetes API (Optional)

Access to the Kubernetes API is optional, but highly recommended. Follow these steps to set up Kubernetes API access (the procedure applies to OpenShift and GKE as well):

To access the Kubernetes API, the Kubernetes connector uses a service account. Authentication is performed using the service account token. Additionally, to prevent accidental changes, the integrator service account is granted read-only privileges and is only allowed to query a specific set of API endpoints. The following is an example procedure to create the service account in a Kubernetes cluster using the kubectl CLI.

Create a Service Account

First of all, create the service account to be used by the Kubernetes connector:

$ kubectl create serviceaccount tsco

Then, describe the service account to discover which secret is associated with it:

$ kubectl describe serviceaccount tsco


Name: tsco
Namespace: default

Labels: <none>

Annotations: <none>

Image pull secrets: <none>

Mountable secrets: tsco-token-6x9vs

Tokens: tsco-token-6x9vs


Now, describe the secret to get the corresponding token:

$ kubectl describe secret tsco-token-6x9vs


Name: tsco-token-6x9vs

Namespace: default

Labels: <none>

Annotations: kubernetes.io/service-account.name=tsco

kubernetes.io/service-account.uid=07bca5e7-7c3e-11e7-87bc-42010a8e0002

Type: kubernetes.io/service-account-token

Data

====

ca.crt: 1025 bytes

namespace: 7 bytes

token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InRzY28tdG9rZW4tNng5dnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoidHNjbyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA3YmNhNWU3LTdjM2UtMTFlNy04N2JjLTQyMDEwYThlMDAwMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnRzY28ifQ.tA6c9AsVJ0QKD0s-g9JoBdWDfhBvClJDiZGqDW6doS0rNZ5-dwXCJTss97PXfCxd5G8Q_nxg-elB8rV805K-j8ogf5Ykr-JLsAbl9OORRzcUCShYVF1r7O_-ikGg7abtIPh_mE5eAgHkJ1P6ODvaZG_0l1fak4BxZMTVfzzelvHpVlLpJZObd7eZOEtEEEkcAhZ2ajLQoxLucReG2A25_SrVJ-6c82BWBKQHcTBL9J2d0iHBHv-zjJzXHQ07F62vpc3Q6QI_rOvaJgaK2pMJYdQymFff8OfVMDQhp9LkOkxBPuJPmNHmHJxSvCcvpNtVMz-Hd495vruZFjtwYYygBQ


The token data ("eyJhb ... YygBQ") will be used by the Kubernetes integrator to authenticate against the API. Save the token as it will be required at the connector creation time.
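The describe-and-copy steps above can also be scripted. A minimal sketch, assuming kubectl access to the cluster and the tsco service account created above:

```shell
# Sketch: fetch the service-account token in one step instead of reading it
# from "kubectl describe" output. Requires kubectl access to the cluster;
# "tsco" is the service account created in the steps above.
SECRET=$(kubectl get serviceaccount tsco -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)
echo "$TOKEN"
```

Note that the token is stored base64-encoded in the secret, so it must be decoded before use; the value printed by kubectl describe is already decoded.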

Grant the Service Account Read-only Privileges

The following section outlines an example configuration on the Kubernetes cluster that is recommended to allow API access to the service account used by the integrator. We provide example configurations for the two most common authorization schemes used in Kubernetes clusters: RBAC (Role-Based Access Control) and ABAC (Attribute-Based Access Control). To identify which mode is configured in your Kubernetes cluster, refer to the official project documentation: https://kubernetes.io/docs/admin/authorization/

RBAC Authorization

RBAC is the authorization mode enabled by default from Kubernetes 1.6 onward. To grant read-only privileges to the connector service account, create a new cluster role that grants read-only access to a specific set of API operations and entities.

Here is an example policy file that can be used for this purpose:

$ cat tsco-cluster-role.yml


kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: tsco-cluster-role
rules:
- apiGroups: [""]
  resources: ["pods", "nodes", "namespaces", "replicationcontrollers", "persistentvolumes", "resourcequotas", "limitranges", "persistentvolumeclaims"]
  verbs: ["get", "list"]
- apiGroups: ["extensions"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list"]


Now, create the cluster role and bind it to the connector service account:

kubectl create -f tsco-cluster-role.yml
kubectl create clusterrolebinding tsco-view-binding --clusterrole=tsco-cluster-role --serviceaccount=default:tsco

ABAC Authorization

ABAC authorization grants access rights to users and service accounts via policies that are configured in a policy file. This file is then passed to the Kubernetes API server via the startup parameter --authorization-policy-file.

To allow read-only access for the integrator service account, the following policy line needs to be appended to the aforementioned policy file:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:serviceaccount:default:tsco", "namespace": "*", "resource": "*", "apiGroup": "*", "readonly": true, "nonResourcePath": "*"}}

The apiserver will need to be restarted to pick up the new policy lines.
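As a sketch, the append can be done as follows; the policy file path here is hypothetical, so substitute the file actually passed to --authorization-policy-file:

```shell
# Sketch: append the read-only policy line for the tsco service account.
# The path below is a hypothetical local copy; use the file configured via
# --authorization-policy-file on your API server.
POLICY_FILE=./abac-policy.jsonl
cat >> "$POLICY_FILE" <<'EOF'
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:serviceaccount:default:tsco", "namespace": "*", "resource": "*", "apiGroup": "*", "readonly": true, "nonResourcePath": "*"}}
EOF
# Confirm the policy line is present (prints the number of matching lines).
grep -c 'system:serviceaccount:default:tsco' "$POLICY_FILE"
```

Remember that the policy file is one JSON object per line, so the entry must be appended as a single line.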

Verify Kubernetes API access

To verify, execute the following curl command from one of the TrueSight Capacity Optimization servers:
$ curl -s https://<KUBERNETES_API_HOSTNAME>:<KUBERNETES_API_PORT>/api/v1/nodes -k -H "Authorization: Bearer <KUBERNETES_CONNECTOR_TOKEN>"

You should get a JSON document describing the nodes comprising the Kubernetes cluster.

{
 "kind": "NodeList",
 "apiVersion": "v1",
 "metadata": {
 "selfLink": "/api/v1/nodes",
 "resourceVersion": "37317619"
 },
 "items": [
 {
 "metadata": {

Step II. Configure the ETL

A. Configuring the basic properties

Some of the basic properties display default values. You can modify these values if required.

To configure the basic properties:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties. You must configure properties in the following tabs: Run configuration, Entity catalog, and Connection.

  3. On the Run Configuration tab, select Moviri - k8s Prometheus Extractor from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.


  4. Click the Entity catalog tab, and select one of the following options:
    • Shared Entity Catalog:

      • From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
    • Private Entity Catalog: Select if this is the only ETL that extracts data from the k8s Prometheus resources.
  5. Click the Connection tab, and configure the following properties:

Each property is listed as Property Name (Value Type, Required?; Default).

Prometheus – API URL (String, required)
Prometheus API URL (http://hostname:port or https://hostname:port). The port can be omitted.

Prometheus – API Version (String, required; default: v1)
Prometheus API version. This should match the Kubernetes API version, if the Kubernetes API is used.

Prometheus – API Authentication Method (String, required)
Prometheus API authentication method. Three methods are supported: Authentication Token (Bearer), Basic Authentication (username/password), and None (no authentication).

Prometheus – Username (String, optional)
Prometheus API username, if the authentication method is set to Basic Authentication.

Prometheus – Password (String, optional)
Prometheus API password, if the authentication method is set to Basic Authentication.

Prometheus – API Authentication Token (String, optional)
Prometheus API authentication token (bearer token), if the authentication method is set to Authentication Token.

Prometheus – Use Proxy Server (Boolean, optional)
Whether to use a proxy server; available only when the authentication method is Basic Authentication or None. The proxy server supports HTTP, and only Basic Authentication or no authentication.

Prometheus – Proxy Server Host (String, optional)
Proxy server host name.

Prometheus – Proxy Server Port (Number, optional; default: 8080)
Proxy server port.

Prometheus – Proxy Username (String, optional)
Proxy server username.

Prometheus – Proxy Password (String, optional)
Proxy server password.

Use Kubernetes API (Boolean, required)
Whether to use the Kubernetes API.

Kubernetes Host (String, required)
Kubernetes API server host name. For Openshift, use the Openshift console FQDN (e.g., console.ose.bmc.com).

Kubernetes API Port (Number, required)
Kubernetes API server port. For Openshift, use the same port as the console (typically 8443).

Kubernetes API Protocol (String, required; default: HTTPS)
Kubernetes API protocol, "HTTPS" in most cases.

Kubernetes Authentication Token (String, required)
Token of the integrator service account (see the data source configuration section).

6. Click the Prometheus Extraction tab, and configure the following properties:

Each property is listed as Property Name (Value Type, Required?; Default).

Data Resolution (String, required)
Resolution of the data pulled from Prometheus into TSCO. The default is 5 minutes; any value less than 5 minutes is reset to the 5-minute default.

Cluster Name (String, optional)
If the Kubernetes API is not in use, the cluster name must be specified.

Default Last Counter (String, required)
The earliest time (in UTC) from which the connector should pull data. Format: YYYY-MM-DDTHH24:MI:SS.SSSZ, for example, 2019-01-01T19:00:00.000Z.

Maximum Hours to Extract (String, optional; default: 24)
Maximum number of hours of data the connector pulls in one run. If left empty, the default of 24 hours from the default last counter is used.

Extract POD metrics (Boolean, optional; default: Yes)
Whether to extract POD metrics. The default, "Yes", extracts all POD metrics. If "No" is selected, the integration does not import metrics at the POD level; all aggregations at the other levels (Cluster, Namespace, Deployment, Node) are still computed.

Select only PODs on the following nodes (String, optional)
Extracts information only for the pods that are currently running on the specified nodes. Multiple node names can be separated by semicolons (;).

Select only PODs on the following namespaces (String, optional)
Extracts information only for the pods that are currently running in the specified namespaces. Multiple namespace names can be separated by semicolons (;).

Select only PODs on the following controllers (String, optional)
Extracts information only for the pods that are currently running in the specified namespace and controller tuples. A tuple is formatted as the namespace, a colon (:), and the controller name; for example, namespace-01:api-deployment selects pods under api-deployment only in namespace-01. If the namespace is omitted, start with the colon (:) followed by the controller name; for example, :api-deployment selects pods under api-deployment in all namespaces in the cluster. Multiple tuples can be separated by semicolons (;).

Select only PODs with the following tags (String, optional)
Extracts information only for the pods that are currently running with the specified namespace and tag type (label key) tuples. A tuple is formatted as the namespace, a colon (:), and the tag type; for example, namespace-01:app selects pods with tag type app only in namespace-01. If the namespace is omitted, start with the colon (:) followed by the tag type; for example, :app selects pods with tag type app in all namespaces in the cluster. Multiple tuples can be separated by semicolons (;).

Exclude PODs on the following nodes (String, optional)
Does not extract information for the pods that are currently running on the specified nodes. Multiple node names can be separated by semicolons (;).

Exclude PODs on the following namespaces (String, optional)
Does not extract information for the pods that are currently running in the specified namespaces. Multiple namespace names can be separated by semicolons (;).

Exclude PODs on the following controllers (String, optional)
Does not extract information for the pods that are currently running in the specified namespace and controller tuples. A tuple is formatted as the namespace, a colon (:), and the controller name; for example, namespace-01:api-deployment excludes pods under api-deployment only in namespace-01. If the namespace is omitted, start with the colon (:) followed by the controller name; for example, :api-deployment excludes pods under api-deployment in all namespaces in the cluster. Multiple tuples can be separated by semicolons (;).

Exclude PODs with the following tags (String, optional)
Does not extract information for the pods that are currently running with the specified namespace and tag type (label key) tuples. A tuple is formatted as the namespace, a colon (:), and the tag type; for example, namespace-01:app excludes pods with tag type app only in namespace-01. If the namespace is omitted, start with the colon (:) followed by the tag type; for example, :app excludes pods with tag type app in all namespaces in the cluster. Multiple tuples can be separated by semicolons (;).

Import only the following tags (semicolon-separated list) (String, optional)
Imports only the specified tag types (Kubernetes label keys). Specify the keys of the labels, either in the original format as they appear in the Kubernetes API or using underscores (_) as delimiters. For example, node_role_kubernetes_io_compute and node-role.kubernetes.io/compute are equivalent and will be imported as node_role_kubernetes_io_compute. Multiple tag types can be separated by semicolons (;).
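Two of the formats above are easy to get wrong. A minimal sketch of both, using hypothetical names (namespace-01, api-deployment, web-deployment):

```shell
# 1. "Default Last Counter": a UTC timestamp in YYYY-MM-DDTHH24:MI:SS.SSSZ form
#    (milliseconds are simply zeroed here).
date -u +"%Y-%m-%dT%H:%M:%S.000Z"

# 2. Controller filters: semicolon-separated "namespace:controller" tuples;
#    a leading colon (empty namespace) matches the controller in every namespace.
tuples="namespace-01:api-deployment;:web-deployment"
IFS=';'
for t in $tuples; do
  ns=${t%%:*}      # text before the first colon (may be empty)
  ctrl=${t#*:}     # text after the first colon
  echo "namespace='${ns:-<all>}' controller='$ctrl'"
done
```

The same namespace:value tuple syntax applies to the tag-type filters, with the label key in place of the controller name.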

The following image shows a Run Configuration example for the “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus":


7. (Optional) Override the default values of the properties:


 Run configuration
PropertyDescription
Module selection

Select one of the following options:

  • Based on datasource: This is the default selection.
  • Based on Open ETL template: Select only if you want to collect data that is not supported by TrueSight Capacity Optimization.
Module descriptionA short description of the ETL module.
Execute in simulation modeBy default, the ETL execution in simulation mode is selected to validate connectivity with the data source, and to ensure that the ETL does not have any configuration issues. In the simulation mode, the ETL does not load data into the database. This option is useful when you want to test a new ETL task. To run the ETL in the production mode, select No.
BMC recommends that you run the ETL in the simulation mode after ETL configuration and then run it in the production mode.
 Object relationships
PropertyDescription
Associate new entities to

Specify the domain to which you want to add the entities created by the ETL.

Select one of the following options:

  • Existing domain: This option is selected by default. Select an existing domain from the Domain list. If the selected domain is already used by other hierarchy rules, select one of the following Domain conflict options:
    • Enrich domain tree: Select to create a new independent hierarchy rule for adding a new set of entities, relations, or both that are not defined by other ETLs.
    • ETL Migration: Select if the new ETL uses the same set of entities, relations, or both that are already defined by other ETLs.
  • New domain: Select a parent domain, and specify a name for your new domain.

By default, a new domain with the same ETL name is created for each ETL. When the ETL is created, a new hierarchy rule with the same name of the ETL task is automatically created in the active state. If you specify a different domain for the ETL, the hierarchy rule is updated automatically.

 ETL task properties
PropertyDescription
Task groupSelect a task group to classify the ETL.
Running on schedulerSelect one of the following schedulers for running the ETL:
  • Primary Scheduler: Runs on the Application Server.
  • Generic Scheduler: Runs on a separate computer.
  • Remote: Runs on remote computers.
Maximum execution time before warningIndicates the number of hours, minutes, or days for which the ETL must run before generating warnings or alerts, if any.
Frequency

Select one of the following frequencies to run the ETL:

  • Predefined: This is the default selection. Select a daily, weekly, or monthly frequency, and then select a time to start the ETL run accordingly.
  • Custom: Specify a custom frequency, select an appropriate unit of time, and then specify a day and a time to start the ETL run.

(Optional) B. Configuring the advanced properties


You can configure the advanced properties to change the way the ETL works or to collect additional metrics.


To configure the advanced properties:


  1. On the Add ETL page, click Advanced.
  2. Configure the following properties:

     Run configuration
    PropertyDescription
    Run configuration nameSpecify the name that you want to assign to this ETL task configuration. The default configuration name is displayed. You can use this name to differentiate between the run configuration settings of ETL tasks.
    Deploy statusSelect the deploy status for the ETL task. For example, you can initially select Test and change it to Production after verifying that the ETL run results are as expected.
    Log levelSpecify the level of details that you want to include in the ETL log file. Select one of the following options:
    • 1 - Light: Select to add the bare minimum activity logs to the log file.
    • 5 - Medium: Select to add the medium-detailed activity logs to the log file.
    • 10 - Verbose: Select to add detailed activity logs to the log file.

    Use log level 5 as a general practice. You can select log level 10 for debugging and troubleshooting purposes.

    Datasets

    Specify the datasets that you want to add to the ETL run configuration. The ETL collects data of metrics that are associated with these datasets.

    1. Click Edit.
    2. Select one (click) or more (shift+click) datasets from the Available datasets list and click >> to move them to the Selected datasets list.
    3. Click Apply.

    The ETL collects data of metrics associated with the datasets that are available in the Selected datasets list.

     Collection level
    PropertyDescription
    Metric profile selection

    Select the metric profile that the ETL must use. The ETL collects data for the group of metrics that is defined by the selected metric profile.

    • Use Global metric profile: This is selected by default. All the out-of-the-box ETLs use this profile.
    • Select a custom metric profile: Select the custom profile that you want to use from the Custom metric profile list. This list displays all the custom profiles that you have created.

    For more information about metric profiles, see Adding and managing metric profiles.
    Levels up to

    Specify the metric level that defines the number of metrics that can be imported into the database. The load on the database increases or decreases depending on the selected metric level.

    To learn more about metric levels, see Aging Class mapping.


     Loader configuration
    PropertyDescription
    Empty dataset behaviorSpecify the action for the loader if it encounters an empty dataset:
    • Warn: Generate a warning about loading an empty dataset.
    • Ignore: Ignore the empty dataset and continue parsing.
    ETL log file nameThe name of the file that contains the ETL run log. The default value is: %BASE/log/%AYEAR%AMONTH%ADAY%AHOUR%MINUTE%TASKID
    Maximum number of rows for CSV outputA numeric value to limit the size of the output files.
    CSV loader output file nameThe name of the file that is generated by the CSV loader. The default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID
    Capacity Optimization loader output file nameThe name of the file that is generated by the TrueSight Capacity Optimization loader. The default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID
    Detail mode
    Specify whether you want to collect raw data in addition to the standard data. Select one of the following options:
    • Standard: Data will be stored in the database in different tables at the following time granularities: Detail (configurable, by default: 5 minutes), Hourly, Daily, and Monthly.
    • Raw also: Data will be stored in the database in different tables at the following time granularities: Raw (as available from the original data source), Detail (configurable, by default: 5 minutes), Hourly, Daily, and Monthly.
    • Raw only: Data will be stored in the database in a table only at Raw granularity (as available from the original data source).
    For more information, see Accessing data using public views and Sizing and scalability considerations.


    Remove domain suffix from datasource name (Only for systems) Select True to remove the domain from the data source name. For example, server.domain.com will be saved as server. The default selection is False.
    Leave domain suffix to system name (Only for systems)Select True to keep the domain in the system name. For example: server.domain.com will be saved as is. The default selection is False.
    Update grouping object definition (Only for systems)Select True if you want the ETL to update the grouping object definition for a metric that is loaded by the ETL. The default selection is False.
    Skip entity creation (Only for ETL tasks sharing lookup with other tasks)Select True if you do not want this ETL to create an entity and discard data from its data source for entities not found in Capacity Optimization. It uses one of the other ETLs that share a lookup to create a new entity. The default selection is False.
     Scheduling options
    PropertyDescription
    Hour maskSpecify a value to run the task only during particular hours within a day. For example, 0 – 23 or 1, 3, 5 – 12.
    Day of week maskSelect the days so that the task can be run only on the selected days of the week. To avoid setting this filter, do not select any option for this field.
    Day of month maskSpecify a value to run the task only on the selected days of a month. For example, 5, 9, 18, 27 – 31.
    Apply mask validationSelect False to temporarily turn off the mask validation without removing any values. The default selection is True.
    Execute after timeSpecify a value in the hours:minutes format (for example, 05:00 or 16:00) to wait before the task is run. The task run begins only after the specified time is elapsed.
    EnqueueableSpecify whether you want to ignore the next run command or run it after the current task. Select one of the following options:
    • False: Ignores the next run command when a particular task is already running. This is the default selection.
    • True: Starts the next run command immediately after the current running task is completed.
    3. Click Save.

    The ETL tasks page shows the details of the newly configured Prometheus ETL:


Step III. Run the ETL

After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:

A. Simulation mode: Only validates connection to the data source, does not collect data. Use this mode when you want to run the ETL for the first time or after you make any changes to the ETL configuration.

B. Production mode: Collects data from the data source.

A. Running the ETL in the simulation mode

To run the ETL in the simulation mode:

  1. In the console, navigate to Administration ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click the ETL. The ETL details are displayed.



  3. In the Run configurations table, click Edit  to modify the ETL configuration settings.
  4. On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
  5. Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
  6. On the ETL tasks page, check the ETL run status in the Last exit column.
    OK: Indicates that the ETL ran without any error. You are ready to run the ETL in the production mode.
  7.  If the ETL run status is Warning, Error, or Failed:
    1. On the ETL tasks page, click  in the last column of the ETL name row.
    2. Check the log and reconfigure the ETL if required.
    3. Run the ETL again.
    4. Repeat these steps until the ETL run status changes to OK.

B. Running the ETL in the production mode

You can run the ETL manually when required or schedule it to run at a specified time.

Running the ETL manually

  1. On the ETL tasks page, click the ETL. The ETL details are displayed.
  2. In the Run configurations table, click Edit  to modify the ETL configuration settings. The Edit run configuration page is displayed.
  3. On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
  4. To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
    When the ETL is run, it collects data from the source and transfers it to the database.

Scheduling the ETL run

By default, the ETL is scheduled to run daily. You can customize this schedule by changing the frequency and period of running the ETL.

To configure the ETL run schedule:

  1. On the ETL tasks page, click the ETL, and click Edit Task . The ETL details are displayed.

  2. On the Edit task page, do the following, and click Save:

    • Specify a unique name and description for the ETL task.
    • In the Maximum execution time before warning field, specify the duration for which the ETL must run before generating warnings or alerts, if any.
    • Select a predefined or custom frequency for starting the ETL run. The default selection is Predefined.
    • Select the task group and the scheduler to which you want to assign the ETL task.
  3. Click Schedule. A message confirming the scheduling job submission is displayed.
    When the ETL runs as scheduled, it collects data from the source and transfers it to the database.

Step IV. Verify data collection

Verify that the ETL ran successfully and check whether the k8s Prometheus data is refreshed in the Workspace.

To verify whether the ETL ran successfully:

  1. In the console, click Administration > ETL and System Tasks > ETL tasks.
  2. In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.
To verify that the k8s Prometheus data is refreshed:

  1. In the console, click Workspace.
  2. Expand (Domain name) > Systems > k8s Prometheus > Instances.
  3. In the left pane, verify that the hierarchy displays the new and updated Prometheus instances.
  4. Click a k8s Prometheus entity, and click the Metrics tab in the right pane.
  5. Check if the Last Activity column in the Configuration metrics and Performance metrics tables displays the current date.
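Independently of the console, you can also confirm that Prometheus itself is serving kube-state-metrics data, which this integrator depends on. A minimal sketch; `PROM_URL` is an assumption (adjust it to your Prometheus endpoint), and `kube_node_info` is a standard kube-state-metrics series, so an empty result suggests kube-state-metrics is not being scraped:

```python
# Sanity check that Prometheus is scraping kube-state-metrics.
import json
import urllib.parse
import urllib.request

PROM_URL = "http://localhost:9090"  # assumption: default Prometheus port

def instant_query_url(base, promql):
    """Build the Prometheus HTTP API URL for an instant query."""
    return base + "/api/v1/query?" + urllib.parse.urlencode({"query": promql})

def kube_state_metrics_up(base=PROM_URL):
    """True if the query succeeds and returns at least one kube_node_info series."""
    with urllib.request.urlopen(instant_query_url(base, "kube_node_info")) as resp:
        body = json.load(resp)
    return body.get("status") == "success" and len(body["data"]["result"]) > 0
```

If `kube_state_metrics_up()` returns False, fix the Prometheus scrape configuration before re-running the ETL.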


k8s Prometheus Workspace Entity Details

Entities

TSCO Entity | Prometheus Entity
Kubernetes Cluster | Cluster
Kubernetes Namespace | Namespace
Kubernetes Node | Node
Kubernetes Pod | Pod
Kubernetes Controller | DaemonSet, ReplicaSet, StatefulSet, ReplicationController
Kubernetes Persistent Volume | Persistent Volume, Persistent Volume Claim
Hierarchy


The connector is able to replicate relationships and logical dependencies among these entities as they are found configured within the Kubernetes cluster.

In particular, the following structure is applied:

  • a Kubernetes Cluster is attached to the root of the hierarchy

  • each Kubernetes Cluster contains its own Nodes, Namespaces and Persistent Volumes

  • each Kubernetes Namespace contains its own Controllers and (standalone) Pods
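As an illustration, the structure above can be sketched as a nested mapping (this is not connector code, and all entity names are hypothetical):

```python
# Cluster at the root; nodes, namespaces, and persistent volumes under the
# cluster; controllers and standalone pods under each namespace.
hierarchy = {
    "Kubernetes Cluster: prod-cluster": {
        "Nodes": ["node-1", "node-2"],
        "Persistent Volumes": ["pv-data-01"],
        "Namespaces": {
            "default": {
                "Controllers": ["web (Deployment)", "cache (StatefulSet)"],
                "Standalone Pods": ["debug-pod"],
            },
        },
    },
}

def namespaces(tree):
    """Collect the namespace names of every cluster in the sketch."""
    return [ns for cluster in tree.values() for ns in cluster["Namespaces"]]
```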


Configuration and Performance metrics mapping

The following table lists all the metrics that are imported by the "Moviri Integrations - Kubernetes (Prometheus)" integration. If you choose not to import pod metrics, only the metrics collected at pod level are skipped; all other metrics are still imported. Metrics are grouped by entity.

Data Source | Data Source Entity Label | BMC TrueSight Capacity Optimization Entity | BMC TrueSight Capacity Optimization Metrics

Prometheus | DaemonSet, ReplicaSet, StatefulSet, ReplicationController | Kubernetes - Controller | BYIMAGE_CPU_REQUEST, BYIMAGE_MEM_REQUEST, BYIMAGE_NUM, BYSTATUS_KPOD_NUM, CONTAINER_NUM, CONTROLLER_TYPE, CPU_LIMIT, CPU_REQUEST, CPU_USED_NUM, CPU_UTIL, CREATION_TIME, KPOD_NUM, KPOD_REPLICA_UPTODATE_NUM, MEM_ACTIVE, MEM_KLIMIT, MEM_REQUEST, MEM_USED, NET_BIT_RATE, NET_IN_BIT_RATE, NET_IN_BYTE_RATE, NET_IN_ERROR_RATE, NET_OUT_BIT_RATE, NET_OUT_BYTE_RATE, NET_OUT_ERROR_RATE

Prometheus | Namespace | Kubernetes - Namespace | BYIMAGE_CPU_REQUEST, BYIMAGE_MEM_REQUEST, BYIMAGE_NUM, BYSTATUS_KPOD_NUM, CONTAINER_NUM, CPU_LIMIT, CPU_LIMIT_MAX, CPU_REQUEST, CPU_REQUEST_MAX, CPU_USED_NUM, CPU_UTIL, CREATION_TIME, KPOD_NUM, KPOD_NUM_MAX, MEM_ACTIVE, MEM_KLIMIT, MEM_LIMIT_MAX, MEM_REQUEST, MEM_REQUEST_MAX, MEM_USED, NET_BIT_RATE, NET_IN_BIT_RATE, NET_IN_BYTE_RATE, NET_IN_ERROR_RATE, NET_OUT_BIT_RATE, NET_OUT_BYTE_RATE, NET_OUT_ERROR_RATE

Prometheus | Node | Kubernetes - Node | BYIMAGE_CPU_REQUEST, BYIMAGE_MEM_REQUEST, BYIMAGE_NUM, BYSTATUS_KPOD_NUM, CONTAINER_NUM, CPU_LIMIT, CPU_NUM, CPU_REQUEST, CPU_USED_NUM, CPU_UTIL, CREATION_TIME, KPOD_NUM, KPOD_NUM_MAX, KUBERNETES_VERSION, MEM_ACTIVE, MEM_KLIMIT, MEM_PAGE_MAJOR_FAULT_RATE, MEM_REQUEST, MEM_USED, MEM_UTIL, NET_BIT_RATE, NET_IN_BIT_RATE, NET_IN_BYTE_RATE, NET_IN_ERROR_RATE, NET_OUT_BIT_RATE, NET_OUT_BYTE_RATE, NET_OUT_ERROR_RATE, OS_TYPE, TOTAL_REAL_MEM, UPTIME

Kubernetes API | Persistent Volume | Kubernetes - Persistent Volume | CREATION_TIME, ST_PATH, ST_SIZE, ST_TYPE

Prometheus | Persistent Volume | Kubernetes - Persistent Volume | ST_ALLOCATED

Prometheus | POD | Kubernetes - Pod | BYIMAGE_CPU_REQUEST, BYIMAGE_MEM_REQUEST, BYIMAGE_NUM, CONTAINER_NUM, CPU_LIMIT, CPU_REQUEST, CPU_USED_NUM, CPU_UTIL, CREATION_TIME, HOST_NAME, KPOD_STATUS, MEM_ACTIVE, MEM_KLIMIT, MEM_PAGE_MAJOR_FAULT_RATE, MEM_REQUEST, MEM_USED, NET_BIT_RATE, NET_IN_BIT_RATE, NET_IN_BYTE_RATE, NET_IN_ERROR_RATE, NET_OUT_BIT_RATE, NET_OUT_BYTE_RATE, NET_OUT_ERROR_RATE

Prometheus | Cluster | Kubernetes - Cluster | BYIMAGE_CPU_REQUEST, BYIMAGE_MEM_REQUEST, BYIMAGE_NUM, BYSTATUS_KPOD_NUM, CONTAINER_NUM, CONTROLLER_NUM, CPU_LIMIT, CPU_NUM, CPU_REQUEST, CPU_USED_NUM, CPU_UTIL, JOB_NUM, KPOD_NUM, KPOD_NUM_MAX, KUBERNETES_VERSION, MEM_ACTIVE, MEM_KLIMIT, MEM_PAGE_MAJOR_FAULT_RATE, MEM_REQUEST, MEM_USED, MEM_UTIL, SECRET_NUM, SERVICE_NUM, ST_ALLOCATED, TOTAL_REAL_MEM
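The TrueSight metric names above are populated from Prometheus time series; which series back each metric is internal to the connector. As a hedged illustration only, the series names below follow kube-state-metrics conventions of the v1.x era (matching the supported versions), and the mapping and aggregation are assumptions:

```python
# Hedged illustration: not the connector's actual queries.
EXAMPLE_SOURCES = {
    # TSCO metric -> illustrative kube-state-metrics series
    "CPU_REQUEST": "kube_pod_container_resource_requests_cpu_cores",
    "MEM_KLIMIT":  "kube_pod_container_resource_limits_memory_bytes",
    "KPOD_STATUS": "kube_pod_status_phase",
}

def per_namespace(series):
    """Aggregate a series by namespace, as a PromQL string."""
    return f"sum by (namespace) ({series})"

per_namespace(EXAMPLE_SOURCES["CPU_REQUEST"])
# -> 'sum by (namespace) (kube_pod_container_resource_requests_cpu_cores)'
```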




Derived Metrics
Data Source Entity | Data Source Metric | BMC TrueSight Capacity Optimization Metric | BMC TrueSight Capacity Optimization Entity
Kubernetes - Pod | MEM_USED / MEM_KLIMIT | MEM_UTIL_LIMIT | Kubernetes - Pod
Kubernetes - Pod | MEM_USED / MEM_REQUEST | MEM_UTIL_REQUEST | Kubernetes - Pod
Kubernetes - Pod | CPU_USED_NUM / CPU_LIMIT | CPU_UTIL_LIMIT | Kubernetes - Pod
Kubernetes - Pod | CPU_USED_NUM / CPU_REQUEST | CPU_UTIL_REQUEST | Kubernetes - Pod
Kubernetes - Namespace | MEM_USED / MEM_KLIMIT | MEM_UTIL_LIMIT | Kubernetes - Namespace
Kubernetes - Namespace | MEM_USED / MEM_REQUEST | MEM_UTIL_REQUEST | Kubernetes - Namespace
Kubernetes - Namespace | CPU_USED_NUM / CPU_LIMIT | CPU_UTIL_LIMIT | Kubernetes - Namespace
Kubernetes - Namespace | CPU_USED_NUM / CPU_REQUEST | CPU_UTIL_REQUEST | Kubernetes - Namespace
Kubernetes - Cluster | MEM_USED / MEM_KLIMIT | MEM_UTIL_LIMIT | Kubernetes - Cluster
Kubernetes - Cluster | ST_SIZE | ST_SIZE | Kubernetes - Cluster
Kubernetes - Cluster | MEM_USED / MEM_REQUEST | MEM_UTIL_REQUEST | Kubernetes - Cluster
Kubernetes - Cluster | CPU_USED_NUM / CPU_LIMIT | CPU_UTIL_LIMIT | Kubernetes - Cluster
Kubernetes - Cluster | CPU_USED_NUM / CPU_REQUEST | CPU_UTIL_REQUEST | Kubernetes - Cluster
Kubernetes - Node | MEM_USED / MEM_KLIMIT | MEM_UTIL_LIMIT | Kubernetes - Node
Kubernetes - Node | MEM_USED / MEM_REQUEST | MEM_UTIL_REQUEST | Kubernetes - Node
Kubernetes - Node | CPU_USED_NUM / CPU_LIMIT | CPU_UTIL_LIMIT | Kubernetes - Node
Kubernetes - Node | CPU_USED_NUM / CPU_REQUEST | CPU_UTIL_REQUEST | Kubernetes - Node
Kubernetes - Node | TAG "node_role_kubernetes_io" | SERVER_TYPE | Kubernetes - Node
Kubernetes - Controller | MEM_USED / MEM_KLIMIT | MEM_UTIL_LIMIT | Kubernetes - Controller
Kubernetes - Controller | MEM_USED / MEM_REQUEST | MEM_UTIL_REQUEST | Kubernetes - Controller
Kubernetes - Controller | CPU_USED_NUM / CPU_LIMIT | CPU_UTIL_LIMIT | Kubernetes - Controller
Kubernetes - Controller | CPU_USED_NUM / CPU_REQUEST | CPU_UTIL_REQUEST | Kubernetes - Controller
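All the ratio-based derived metrics above share one pattern: a used value divided by the corresponding limit or request. A minimal sketch; the percentage scaling and the handling of a missing or zero denominator are assumptions for illustration, not the connector's documented behavior:

```python
# Sketch of the derived-metric pattern, e.g. MEM_UTIL_LIMIT from
# MEM_USED / MEM_KLIMIT.
def util_pct(used, capacity):
    """Utilization as a percentage of a limit or request."""
    if not capacity:  # limit or request not set (or zero): ratio is undefined
        return None
    return 100.0 * used / capacity

util_pct(512, 1024)  # -> 50.0
```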


Lookup Field Considerations
Entity Type | Strong Lookup Field | Others
Kubernetes - Cluster | KUBE_CLUSTER&&KUBE_TYPE |
Kubernetes - Namespace | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_NS_NAME |
Kubernetes - Node | KUBE_CLUSTER&&KUBE_TYPE&&HOSTNAME&&NAME | _COMPATIBILITY_
Kubernetes - Controller | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_NS_NAME&&KUBE_DP_NAME |
Kubernetes - Pod | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_NS_NAME&&KUBE_POD_NAME |
Kubernetes - Persistent Volume | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_PV_NAME |
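The strong lookup fields above are concatenated with "&&" to form the key that identifies an entity across ETL runs (and across the Heapster and Prometheus integrators). An illustrative sketch with hypothetical field values:

```python
# Sketch of the strong-lookup key convention; all values are hypothetical.
def strong_lookup(*fields):
    return "&&".join(fields)

# KUBE_CLUSTER && KUBE_TYPE && KUBE_NS_NAME && KUBE_POD_NAME
pod_key = strong_lookup("prod-cluster", "k8s", "default", "web-7d9f")
```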


 

Tag Mapping (Optional)

The “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus” supports importing Kubernetes labels as tags in TrueSight Capacity Optimization.
Label keys are imported as tag types, and label values are imported as tag values. The integrator replaces any special word delimiter appearing in a Kubernetes label key with an underscore (_) in the tag type.
Labels are imported for the following entities:

Data Source | Data Source Entity | BMC TrueSight Capacity Optimization Entity
Prometheus | DAEMONSET | Kubernetes - Controller
Prometheus | STATEFULSET | Kubernetes - Controller
Kubernetes API | REPLICASET | Kubernetes - Controller
Kubernetes API | REPLICATIONCONTROLLER | Kubernetes - Controller
Prometheus | POD | Kubernetes - Pod
Prometheus | NODE | Kubernetes - Node
Prometheus | NAMESPACE | Kubernetes - Namespace
Prometheus | PERSISTENT VOLUME | Kubernetes - Persistent Volume
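The label-key-to-tag-type normalization described in this section can be sketched as follows; the exact set of delimiters replaced (".", "/", "-") is an assumption, since the documentation says only "special word delimiters":

```python
# Sketch: delimiters in a Kubernetes label key become underscores in the
# TrueSight tag type. The delimiter set is an assumption.
import re

def label_key_to_tag_type(key):
    return re.sub(r"[./\-]", "_", key)

label_key_to_tag_type("app.kubernetes.io/name")  # -> "app_kubernetes_io_name"
```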


k8s Heapster to k8s Prometheus Migration

The “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus” supports a seamless transition from entities and metrics imported by the “Moviri Integrator for TrueSight Capacity Optimization – k8s Heapster”. Please follow these steps to migrate between the two integrators:

  1. Stop the “Moviri Integrator for TrueSight Capacity Optimization – k8s Heapster” ETL task.
  2. Install and configure the “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus”, ensuring that its lookup is shared with the “Moviri Integrator for TrueSight Capacity Optimization – k8s Heapster” ETL task.
  3. Start the “Moviri Integrator for TrueSight Capacity Optimization – k8s Prometheus” ETL task.
