Moviri Integrator for TrueSight Capacity Optimization - K8s Heapster

"Moviri Integrator for  TrueSight Capacity Optimization – k8s Heapster" is an additional component of BMC TrueSight Capacity Optimization product. It allows extracting data from the Kubernetes cluster management system, a leading solution to manage cloud-native containerized environments.  Relevant capacity metrics imported into BMC TrueSight Capacity Optimization, which provides advanced analytics over the extracted data in the form of an interactive dashboard, the Kubernetes View.

The integration supports the extraction of both performance and configuration data across the different components of the Kubernetes system and is configurable via parameters that allow entity filtering and many other settings. Furthermore, the connector can replicate relationships and logical dependencies among entities such as clusters, nodes, namespaces, deployments, and pods.


The latest version of the k8s (Kubernetes) Heapster integrator is available on EPD. Click the Moviri Integrator for TrueSight Capacity Optimization link. In the Patches tab, select the latest version of TrueSight Capacity Optimization. This version of the connector is compatible with BMC TrueSight Capacity Optimization 11.5 and onward.

The "Moviri Integrator for Kubernetes - Heapster" does not support the following features:

  • "Kubernetes - Pod Workload" entity type
  • Metrics imported at the Container level (BYCONT_*)
  • Metrics imported at the Container Image level (BYCONT_IMAGE_*)
  • High Mark metrics (for CPU and Memory metrics)

To enable these features, please consider migrating to the "Moviri Integrator for Kubernetes - Prometheus".


This documentation is targeted at BMC TrueSight Capacity Optimization administrators in charge of configuring and monitoring the integration between BMC TrueSight Capacity Optimization and Kubernetes.

Step I. Complete the Pre-Configuration Tasks

Step II. Configure the ETL

Step III. Run the ETL

Step IV. Verify Data Collection


Step I. Complete the Pre-Configuration Tasks


Check if the required API version is supported
  • Kubernetes versions 1.5 to 1.17
    • Kubernetes versions 1.16+ use TLS 1.3 by default and there is a known issue with the Java implementation. Please contact BMC support for help setting up connections to these versions.
  • Rancher 1.6 to 2.3 (when managing a Kubernetes cluster version 1.5 to 1.7)
  • Openshift 3.9 to 4.1
  • Google Kubernetes Engine (GKE)

Please note that the integration requires running Heapster on the Kubernetes cluster.
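A quick way to check that Heapster is deployed is to look for its pod (a sketch; assumes Heapster runs in the kube-system namespace, as in most distributions):

Checking for Heapster
$ kubectl get pods --namespace=kube-system | grep heapster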


Check the data source configuration

The Kubernetes connector adopts a specific integration strategy to be able to extract all of the key metrics of Kubernetes environments for capacity management in a scalable fashion. In particular, the connector integrates with two data sources:

  • Kubernetes API: to extract entity catalogs, relationships, and relevant configuration properties 
  • Heapster: to extract key performance metrics related to the managed entities (e.g., nodes, pods)

The Kubernetes connector requires the Heapster monitoring component of Kubernetes to be continuously and correctly monitoring the various entities supported by the integration (the full list is available below). Failure to meet these requirements can result in data loss.

Generate access to the Kubernetes API

Kubernetes makes use of service accounts for its authentication and authorization. Upon creation, each service account gets a unique token that is used for authentication; the account can then be authorized with read-only privileges for querying a set of specific API endpoints.

Please use the following example to create the service account in a Kubernetes cluster using the kubectl CLI program.

Create a Service Account

First, create the service account to be used by the Kubernetes connector:

Creating the Service Account with kubectl
$ kubectl create serviceaccount tsco

Then, describe the service account, which shows the associated secret:

Describing the Service Account
$ kubectl describe serviceaccount tsco
Name: tsco
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: tsco-token-6x9vs
Tokens: tsco-token-6x9vs

Now, describe the secret to get the corresponding token:

Get the Service Account Token
$ kubectl describe secret tsco-token-6x9vs
Name: tsco-token-6x9vs
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name=tsco
kubernetes.io/service-account.uid=07bca5e7-7c3e-11e7-87bc-42010a8e0002
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InRzY28tdG9rZW4tNng5dnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoidHNjbyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA3YmNhNWU3LTdjM2UtMTFlNy04N2JjLTQyMDEwYThlMDAwMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnRzY28ifQ.tA6c9AsVJ0QKD0s-g9JoBdWDfhBvClJDiZGqDW6doS0rNZ5-dwXCJTss97PXfCxd5G8Q_nxg-elB8rV805K-j8ogf5Ykr-JLsAbl9OORRzcUCShYVF1r7O_-ikGg7abtIPh_mE5eAgHkJ1P6ODvaZG_0l1fak4BxZMTVfzzelvHpVlLpJZObd7eZOEtEEEkcAhZ2ajLQoxLucReG2A25_SrVJ-6c82BWBKQHcTBL9J2d0iHBHv-zjJzXHQ07F62vpc3Q6QI_rOvaJgaK2pMJYdQymFff8OfVMDQhp9LkOkxBPuJPmNHmHJxSvCcvpNtVMz-Hd495vruZFjtwYYygBQ

The token data ("eyJhb ... YygBQ") is used by the Kubernetes integration to authenticate against the API. Make a note of the token for ETL creation later on.
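The same token can also be extracted in a single, scriptable step (a sketch; the secret name tsco-token-6x9vs comes from the describe output above):

Extracting the Token Directly
$ kubectl get secret tsco-token-6x9vs -o jsonpath='{.data.token}' | base64 --decode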

Grant the service account read-only privileges

The following section outlines a suggested example configuration on the Kubernetes cluster to allow API access to the service account used by the integrator. We provide example configurations for the two most common authorization schemes used in Kubernetes clusters, namely RBAC (Role-Based Access Control) and ABAC (Attribute-Based Access Control). To identify which authorization scheme your Kubernetes cluster has, please refer to the official project documentation: https://kubernetes.io/docs/admin/authorization/
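A quick, non-authoritative way to check whether RBAC is enabled is to list the API groups exposed by the cluster; if the RBAC group appears, RBAC is available:

Checking for RBAC Support
$ kubectl api-versions | grep rbac.authorization.k8s.io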

RBAC authorization

RBAC is the authorization mode enabled by default from Kubernetes 1.6 onward. In order to use this feature, you must create a new cluster role. The new cluster role needs read-only privileges on a set of API operations and entities for the integration to work correctly.

Here is an example policy file that creates a cluster role for the integration:

tsco-cluster-role.yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: tsco-cluster-role
rules:
- apiGroups: [""]
  resources: ["pods", "nodes", "namespaces", "replicationcontrollers", "persistentvolumes", "resourcequotas", "limitranges", "persistentvolumeclaims"]
  verbs: ["get", "list"]
- apiGroups: ["extensions"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list"]

Now, create the cluster role and associate it to the connector service account:

Creating and Binding the Cluster Role
kubectl create -f tsco-cluster-role.yml
kubectl create clusterrolebinding tsco-view-binding --clusterrole=tsco-cluster-role --serviceaccount=default:tsco 
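Optionally, you can verify the binding using kubectl impersonation (a sketch; requires a user with impersonation privileges). Both commands should print "yes":

Verifying the Cluster Role Binding
$ kubectl auth can-i list nodes --as=system:serviceaccount:default:tsco
$ kubectl auth can-i list pods --as=system:serviceaccount:default:tsco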

ABAC authorization

ABAC authorization grants access rights to users and service accounts via configurable policies in a policy file (one JSON object per line). The Kubernetes API server then uses this file via the startup parameter --authorization-policy-file.

The following policy line needs to be appended to the policy file to allow read-only access for the service account:

ABAC JSON
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:serviceaccount:default:tsco", "namespace": "*", "resource": "*", "apiGroup": "*", "readonly": true, "nonResourcePath": "*"}}

Restart the API server to pick up the new policy changes.

Testing the configuration

After performing the above configuration, it is useful to verify that the service account can successfully connect to the Kubernetes API and execute the intended operations.

To verify, execute the curl Linux command from one of the TrueSight Capacity Optimization servers:

Testing the Configuration
curl -s https://<KUBERNETES_API_HOSTNAME>:<KUBERNETES_API_PORT>/api/v1/nodes -k --header "Authorization: Bearer <KUBERNETES_CONNECTOR_TOKEN>" 

You should get a JSON document describing the nodes comprising the Kubernetes cluster.
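If you only need a pass/fail check, the same call can be reduced to its HTTP status code; 200 indicates success, while 401 or 403 points to a token or authorization problem:

Checking the HTTP Status Code
$ curl -s -o /dev/null -w "%{http_code}\n" -k --header "Authorization: Bearer <KUBERNETES_CONNECTOR_TOKEN>" https://<KUBERNETES_API_HOSTNAME>:<KUBERNETES_API_PORT>/api/v1/nodes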

Heapster Configuration

The Kubernetes integrator acts as a data 'sink' into which the Heapster service dumps cluster performance metrics at a custom resolution. Heapster supports a variety of sinks; the integration uses the OpenTSDB protocol to receive Heapster messages.

The integration architecture for data collection of performance metrics works as follows:

  • On Kubernetes: the Heapster monitoring component sends data to the Kubernetes integrator via OpenTSDB
  • On TrueSight Capacity Optimization: the integration listens on a designated TCP port on the TrueSight Capacity Optimization server and stores the data in a database 

The integration requires that Heapster be configured with an additional sink of type OpenTSDB. To perform this operation, add the following option to the Heapster startup command:

opentsdb sink
--sink=opentsdb:http://<SERVER_HOSTNAME>:<PORT>?cluster=<KUBERNETES_CLUSTER_NAME>

where:

  • SERVER_HOSTNAME is the hostname (or IP address) of the TrueSight Capacity Optimization application server (or ETL engine) where the Kubernetes connector runs.
  • PORT is the TCP port on which the integration listens. The port must be set to the same value as the "TSCO Listening Port" in the ETL configuration page.
  • KUBERNETES_CLUSTER_NAME is a string representing the cluster name.
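For example, with hypothetical values (TrueSight server tsco.example.com, listening port 4242, and a cluster named production), the option would read:

Example opentsdb sink
--sink=opentsdb:http://tsco.example.com:4242?cluster=production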

In most Kubernetes clusters, Heapster runs as a deployment within the kube-system namespace. In this scenario, the above configuration change can be carried out by changing the relevant section of the Heapster deployment YAML manifest. After configuring the change, the Heapster pod needs to be restarted for the changes to take effect. Please refer to the documentation of your Kubernetes vendor for more details on the actual Heapster configuration.
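A minimal sketch of this change with kubectl (the deployment name, container command, and namespace may differ in your distribution):

Editing the Heapster Deployment
$ kubectl edit deployment heapster --namespace=kube-system
# In the container command section, append the sink flag, for example:
#   command:
#     - /heapster
#     - --source=kubernetes:https://kubernetes.default
#     - --sink=opentsdb:http://tsco.example.com:4242?cluster=production
# Saving the edit triggers a rolling restart of the Heapster pod.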

Testing the configuration

After performing the above configuration, it is useful to verify that the configured sink can successfully send data to the ETL engine.

To verify, run the tcpdump Linux command on the server configured as the sink target above (tcpdump needs to be installed first and run as root/sudo):

Testing the Sink with tcpdump
[root@tsco ~]$ tcpdump port <PORT>
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
18:20:35.193593 IP <Kubernetes Cluster>.40620 > <TSCO Server>.distinct: Flags [S], seq 3853215873, win 28200, options [mss 1410,sackOK,TS val 3860290285 ecr 0,nop,wscale 7], length 0

Step II. Configure the ETL
  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties. You must configure properties in the following tabs: Run configuration, Entity catalog, Connection, and Kubernetes Extraction filters.
  3. On the Run Configuration tab, select Moviri - k8s Heapster Extractor from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.
  4. Click the Entity catalog tab, and select one of the following options:
    • Shared Entity Catalog:
      • From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
    • Private Entity Catalog: Select if this is the only ETL that extracts data from the k8s Heapster resources.
  5. Click the Connection tab, and configure the following properties:

Connection

Property Name | Value Type | Required? | Default | Description
------------- | ---------- | --------- | ------- | -----------
Kubernetes Host | String | Yes | | Kubernetes API server hostname. For Openshift, use the Openshift console FQDN (e.g., console.ose.bmc.com). For Rancher, use the URL of the Kubernetes API server, removing the protocol and port (e.g., if your Kubernetes API server is accessible at "http://rancher.bmc.com:8080/r/projects/1a16/kubernetes:6443", use "rancher.bmc.com:8080/r/projects/1a16/kubernetes").
Kubernetes API Port | Number | Yes | | Kubernetes API server port. For Openshift, use the same port as the console (typically 8443). For Rancher, use the port of the Kubernetes API server (typically 6443).
Kubernetes API Version | Number | Yes | v1 | Kubernetes API version.
Kubernetes Authentication Token | String | Yes | | Token of the integrator service account (see the data source configuration section).
Kubernetes API Protocol | String | Yes | https | Kubernetes API protocol, "HTTPS" in most cases.
TSCO Listening Port | Number | Yes | | The TCP port on which the Kubernetes connector receives data from Heapster. Acceptable values are 1024 to 65535. Ensure that this port is not already in use by any other process before running the ETL.

Kubernetes Extraction Filters

Property Name | Value Type | Required? | Default | Description
------------- | ---------- | --------- | ------- | -----------
Data Resolution | Drop Down | Yes | 5 Minutes | The resolution to which the metrics are aggregated.
Extract Pods | Yes/No | Yes | Yes | Allows importing of pods into TrueSight Capacity Optimization.
Select only PODs on the following nodes | String | No | | Extracts information only for the pods that are currently running on the specified nodes. Multiple node names are semicolon separated. Each name can contain '%' and '_', like a SQL LIKE expression.
Select only PODs on the following namespaces | String | No | | Extracts information only for the pods that are currently running in the specified namespaces. Multiple namespace names are semicolon separated. Each name can contain '%' and '_', like a SQL LIKE expression.
Select only PODs on the following deployments | String | No | | Extracts information only for the pods that are currently running in the specified deployments. Multiple deployment names are semicolon separated. Each name can contain '%' and '_', like a SQL LIKE expression.
Exclude PODs on the following nodes | String | No | | Does not extract information for the pods that are currently running on the specified nodes. Multiple node names are semicolon separated. Each name can contain '%' and '_', like a SQL LIKE expression.
Exclude PODs on the following namespaces | String | No | | Does not extract information for the pods that are currently running in the specified namespaces. Multiple namespace names are semicolon separated. Each name can contain '%' and '_', like a SQL LIKE expression.
Exclude PODs on the following deployments | String | No | | Does not extract information for the pods that are currently running in the specified deployments. Multiple deployment names are semicolon separated. Each name can contain '%' and '_', like a SQL LIKE expression.
Select Tags to Load | String | No | node_role_kubernetes_io | Extracts label information for all supported entity types. Multiple tags are semicolon separated. Tags can be a 1:1 match with labels found in Kubernetes, or replace dashes and periods with underscores.
Skip BYIMAGE metrics | Yes/No | No | | Tells the integration not to import BYIMAGE metrics, for better performance on the Data Warehouse.

All the other generic properties are documented here.
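Before running the ETL, you can confirm that the chosen "TSCO Listening Port" is free on the TrueSight server (a sketch using the ss utility; no output means the port is not in use):

Checking Port Availability
$ ss -ltn | grep -w <PORT>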

The ETL tasks page shows the details of the newly configured k8s Heapster ETL.

Step III. Run the ETL


After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:

A. Simulation mode: Only validates the connection to the data source; it does not collect data. Use this mode when you run the ETL for the first time or after you make any changes to the ETL configuration.

B. Production mode: Collects data from the data source.

Running the ETL in simulation mode

To run the ETL in the simulation mode:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click the ETL. The ETL details are displayed.



  3. In the Run configurations table, click Edit to modify the ETL configuration settings.
  4. On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
  5. Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
  6. On the ETL tasks page, check the ETL run status in the Last exit column.
    OK: Indicates that the ETL ran without any error. You are ready to run the ETL in the production mode.
  7. If the ETL run status is Warning, Error, or Failed:
    1. On the ETL tasks page, click the icon in the last column of the ETL name row.
    2. Check the log and reconfigure the ETL if required.
    3. Run the ETL again.
    4. Repeat these steps until the ETL run status changes to OK.

Running the ETL in the production mode

You can run the ETL manually when required. As of version 20.02.00.005, the integration delays the hierarchy import by six hours, which helps reduce the load on the loader step.

Running the ETL manually

  1. On the ETL tasks page, click the ETL. The ETL details are displayed.
  2. In the Run configurations table, click Edit to modify the ETL configuration settings. The Edit run configuration page is displayed.
  3. On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
  4. To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
    When the ETL is run, it collects data from the source and transfers it to the database.

Step IV. Verify Data Collection

Verify that the ETL ran successfully and check whether the k8s Heapster data is refreshed in the Workspace.

To verify whether the ETL ran successfully:

  1. In the console, click Administration > ETL & System Tasks > ETL tasks.
  2. In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.
To verify that the k8s Heapster data is refreshed:

  1. In the console, click Workspace.
  2. Expand (Domain name) > Systems > k8s Heapster > Instances.
  3. In the left pane, verify that the hierarchy displays the new and updated Heapster instances.
  4. Click a k8s Heapster entity, and click the Metrics tab in the right pane.
  5. Check if the Last Activity column in the Configuration metrics and Performance metrics tables displays the current date.

Entities

Supported Entities

The following entities are supported:

  • Kubernetes Cluster
  • Kubernetes Node
  • Kubernetes Namespace (ns)
  • Kubernetes Deployment
  • Kubernetes Pod
  • Kubernetes Persistent Volume (pv) 
Hierarchy

The connector can replicate relationships and logical dependencies among these entities as they are configured within the Kubernetes cluster.

The integration uses the following hierarchy:

  • A Kubernetes Cluster is attached to the root of the hierarchy
  • Each Kubernetes Cluster contains Nodes, Namespaces, and Persistent Volumes
  • Each Kubernetes Namespace contains Deployments and (standalone) Pods
  • Each Kubernetes Deployment contains Pods

The following image shows a sample hierarchy.

Configuration and Performance Metrics Mapping

Data Source | Data Source Entity | Data Source Metric | TSCO Entity | TSCO Metric | Scaling Factor
----------- | ------------------ | ------------------ | ----------- | ----------- | --------------
Kubernetes API | /api/v1/deployments | spec.template.spec.containers.requests.cpu | Kubernetes - Deployment | BYIMAGE_CPU_REQUEST |
Kubernetes API | /api/v1/deployments | spec.template.spec.containers.requests.memory | Kubernetes - Deployment | BYIMAGE_MEM_REQUEST |
Kubernetes API | /api/v1/deployments | item.metadata.creationTimestamp | Kubernetes - Deployment | CREATION_TIME |
Kubernetes API | /api/v1/deployments | deployment.kind | Kubernetes - Deployment | DEPLOYMENT_TYPE |
Kubernetes API | /api/v1/deployments | item.status.availableReplicas | Kubernetes - Deployment | KPOD_REPLICA_UPTODATE_NUM |
Kubernetes API | /api/v1/deployments | spec.template.spec.containers.image | Kubernetes - Deployment | BYIMAGE_NUM |
Kubernetes API | /api/v1/deployments | | Kubernetes - Deployment | KPOD_NUM |
Heapster | NAMESPACE | cpu_limit_gauge | Kubernetes - Namespace | CPU_LIMIT | 0.001
Kubernetes API | /api/v1/namespaces | items.status.hard["limits.cpu"] | Kubernetes - Namespace | CPU_LIMIT_MAX |
Heapster | NAMESPACE | cpu_request_gauge | Kubernetes - Namespace | CPU_REQUEST | 0.001
Kubernetes API | /api/v1/namespaces | items.status.hard["requests.cpu"] | Kubernetes - Namespace | CPU_REQUEST_MAX |
Heapster | NAMESPACE | cpu_usage_rate_gauge | Kubernetes - Namespace | CPU_USED_NUM | 0.001
Kubernetes API | /api/v1/namespaces | item.metadata.creationTimestamp | Kubernetes - Namespace | CREATION_TIME |
Kubernetes API | /api/v1/namespaces | items.status.hard["pods"] | Kubernetes - Namespace | KPOD_NUM_MAX |
Heapster | NAMESPACE | memory_limit_gauge | Kubernetes - Namespace | MEM_KLIMIT |
Kubernetes API | /api/v1/namespaces | items.spec.limits.max.memory | Kubernetes - Namespace | MEM_LIMIT_MAX |
Heapster | NAMESPACE | memory_request_gauge | Kubernetes - Namespace | MEM_REQUEST |
Kubernetes API | /api/v1/namespaces | items.spec.limits.max.memory | Kubernetes - Namespace | MEM_REQUEST_MAX |
Heapster | NAMESPACE | memory_usage_gauge | Kubernetes - Namespace | MEM_USED |
Kubernetes API | /api/v1/nodes | images | Kubernetes - Node | CONTAINER_NUM |
Heapster | NODE | cpu_limit_gauge | Kubernetes - Node | CPU_LIMIT |
Kubernetes API | /api/v1/nodes | status.capacity.cpu | Kubernetes - Node | CPU_NUM |
Heapster | NODE | cpu_request_gauge | Kubernetes - Node | CPU_REQUEST |
Heapster | NODE | cpu_usage_rate_gauge | Kubernetes - Node | CPU_USED_NUM |
Kubernetes API | /api/v1/nodes | metadata.creationTimestamp | Kubernetes - Node | CREATION_TIME |
Kubernetes API | /api/v1/nodes | status.capacity.pods | Kubernetes - Node | KPOD_NUM_MAX |
Kubernetes API | /api/v1/nodes | status.nodeInfo.kubeletVersion | Kubernetes - Node | KUBERNETES_VERSION |
Heapster | NODE | memory_working_set_gauge | Kubernetes - Node | MEM_ACTIVE |
Heapster | NODE | memory_limit_gauge | Kubernetes - Node | MEM_KLIMIT |
Heapster | NODE | memory_major_page_faults_rate_gauge | Kubernetes - Node | MEM_PAGE_MAJOR_FAULT_RATE |
Heapster | NODE | memory_request_gauge | Kubernetes - Node | MEM_REQUEST |
Heapster | NODE | memory_usage_gauge | Kubernetes - Node | MEM_USED |
Heapster | NODE | network_rx_rate_gauge | Kubernetes - Node | NET_IN_BYTE_RATE |
Heapster | NODE | network_rx_errors_rate_gauge | Kubernetes - Node | NET_IN_ERROR_RATE |
Heapster | NODE | network_tx_rate_gauge | Kubernetes - Node | NET_OUT_BYTE_RATE |
Heapster | NODE | network_tx_errors_rate_gauge | Kubernetes - Node | NET_OUT_ERROR_RATE |
Heapster | NODE | network_rx_rate_gauge + network_tx_rate_gauge | Kubernetes - Node | NET_BIT_RATE | 8
Kubernetes API | /api/v1/nodes | status.nodeInfo.osImage | Kubernetes - Node | OS_TYPE |
Kubernetes API | /api/v1/nodes | status.capacity.memory | Kubernetes - Node | TOTAL_REAL_MEM |
Heapster | NODE | uptime_cumulative | Kubernetes - Node | UPTIME |
Kubernetes API | /api/v1/persistentvolumes | metadata.creationTimestamp | Kubernetes - Persistent Volume | CREATION_TIME |
Kubernetes API | /api/v1/persistentvolumeclaims | spec.resources.requests.storage | Kubernetes - Persistent Volume | ST_ALLOCATED |
Kubernetes API | /api/v1/persistentvolumes | hostPath.path, glusterfs.path, nfs.path | Kubernetes - Persistent Volume | ST_PATH |
Kubernetes API | /api/v1/persistentvolumeclaims | spec.capacity.storage | Kubernetes - Persistent Volume | ST_SIZE |
Kubernetes API | /api/v1/pods | spec.containers.resources.requests.cpu | Kubernetes - Pod | BYIMAGE_CPU_REQUEST |
Kubernetes API | /api/v1/pods | spec.containers.resources.limits.cpu | Kubernetes - Pod | BYIMAGE_MEM_REQUEST |
Kubernetes API | /api/v1/pods | spec.containers | Kubernetes - Pod | CONTAINER_NUM |
Heapster | POD | cpu_limit_gauge | Kubernetes - Pod | CPU_LIMIT | 0.001
Heapster | POD | cpu_request_gauge | Kubernetes - Pod | CPU_REQUEST | 0.001
Heapster | POD | cpu_usage_rate_gauge | Kubernetes - Pod | CPU_USED_NUM | 0.001
Kubernetes API | /api/v1/pods | metadata.creationTimestamp | Kubernetes - Pod | CREATION_TIME |
Kubernetes API | /api/v1/pods | status.hostIP | Kubernetes - Pod | HOST_NAME |
Kubernetes API | /api/v1/pods | status.phase | Kubernetes - Pod | KPOD_STATUS |
Heapster | POD | memory_working_set_gauge | Kubernetes - Pod | MEM_ACTIVE |
Heapster | POD | memory_limit_gauge | Kubernetes - Pod | MEM_KLIMIT |
Heapster | POD | memory_major_page_faults_rate_gauge | Kubernetes - Pod | MEM_PAGE_MAJOR_FAULT_RATE |
Heapster | POD | memory_request_gauge | Kubernetes - Pod | MEM_REQUEST |
Heapster | POD | memory_usage_gauge | Kubernetes - Pod | MEM_USED |
Heapster | POD | network_rx_rate_gauge | Kubernetes - Pod | NET_IN_BYTE_RATE |
Heapster | POD | network_tx_rate_gauge | Kubernetes - Pod | NET_OUT_BYTE_RATE |
Heapster | POD | network_rx_errors_rate_gauge | Kubernetes - Pod | NET_IN_ERROR_RATE |
Heapster | POD | network_tx_errors_rate_gauge | Kubernetes - Pod | NET_OUT_ERROR_RATE |
Heapster | POD | network_rx_rate_gauge + network_tx_rate_gauge | Kubernetes - Pod | NET_BIT_RATE | 8
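As an illustration of the Scaling Factor column: Heapster reports CPU metrics in millicores, so a pod whose cpu_usage_rate_gauge is 250 is imported as CPU_USED_NUM = 250 × 0.001 = 0.25 cores. Similarly, NET_BIT_RATE applies a factor of 8 to convert the byte rates reported by Heapster into bits per second.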
Derived Metrics

Data Source | Data Source Entity | Data Source Metric | Target Entity | Target Metric | Aggregation Type
----------- | ------------------ | ------------------ | ------------- | ------------- | ----------------
TSCO | Kubernetes - Namespace | BYIMAGE_CPU_REQUEST | Kubernetes - Cluster | BYIMAGE_CPU_REQUEST | SUM
TSCO | Kubernetes - Namespace | BYIMAGE_MEM_REQUEST | Kubernetes - Cluster | BYIMAGE_MEM_REQUEST | SUM
TSCO | Kubernetes - Namespace | BYIMAGE_NUM | Kubernetes - Cluster | BYIMAGE_NUM | SUM
TSCO | Kubernetes - Namespace | BYSTATUS_KPOD_NUM | Kubernetes - Cluster | BYSTATUS_KPOD_NUM | SUM
TSCO | Kubernetes - Namespace | CONTAINER_NUM | Kubernetes - Cluster | CONTAINER_NUM | SUM
TSCO | Kubernetes - Node | CPU_LIMIT | Kubernetes - Cluster | CPU_LIMIT | SUM
TSCO | Kubernetes - Node | CPU_NUM | Kubernetes - Cluster | CPU_NUM | SUM
TSCO | Kubernetes - Node | CPU_REQUEST | Kubernetes - Cluster | CPU_REQUEST | SUM
TSCO | Kubernetes - Node | CPU_USED_NUM | Kubernetes - Cluster | CPU_USED_NUM | SUM
TSCO | Kubernetes - Namespace | DEPLOYMENT_NUM | Kubernetes - Cluster | DEPLOYMENT_NUM | SUM
TSCO | Kubernetes - Namespace | KPOD_NUM | Kubernetes - Cluster | KPOD_NUM | SUM
TSCO | Kubernetes - Node | KPOD_NUM_MAX | Kubernetes - Cluster | KPOD_NUM_MAX | SUM
TSCO | Kubernetes - Node | KUBERNETES_VERSION | Kubernetes - Cluster | KUBERNETES_VERSION | -
TSCO | Kubernetes - Node | MEM_ACTIVE | Kubernetes - Cluster | MEM_ACTIVE | SUM
TSCO | Kubernetes - Node | MEM_KLIMIT | Kubernetes - Cluster | MEM_KLIMIT | SUM
TSCO | Kubernetes - Node | MEM_PAGE_MAJOR_FAULT_RATE | Kubernetes - Cluster | MEM_PAGE_MAJOR_FAULT_RATE | SUM
TSCO | Kubernetes - Node | MEM_REQUEST | Kubernetes - Cluster | MEM_REQUEST | SUM
TSCO | Kubernetes - Node | MEM_USED | Kubernetes - Cluster | MEM_USED | SUM
TSCO | Kubernetes - Namespace | SECRET_NUM | Kubernetes - Cluster | SECRET_NUM | SUM
TSCO | Kubernetes - Namespace | SERVICE_NUM | Kubernetes - Cluster | SERVICE_NUM | SUM
TSCO | Kubernetes - Persistent Volume | ST_ALLOCATED | Kubernetes - Cluster | ST_ALLOCATED | SUM
TSCO | Kubernetes - Persistent Volume | ST_SIZE | Kubernetes - Cluster | ST_SIZE | SUM
TSCO | Kubernetes - Node | TOTAL_REAL_MEM | Kubernetes - Cluster | TOTAL_REAL_MEM | SUM
TSCO | Kubernetes - Node | CPU_USED_NUM / CPU_NUM | Kubernetes - Node | CPU_UTIL | -
TSCO | Kubernetes - Pod | - | Kubernetes - Node | KPOD_NUM | COUNT
TSCO | Kubernetes - Pod | CONTAINER_K_CPU_REQUEST | Kubernetes - Node | BYIMAGE_CPU_REQUEST | SUM
TSCO | Kubernetes - Pod | CONTAINER_K_CPU_REQUEST | Kubernetes - Node | BYIMAGE_MEM_REQUEST | SUM
TSCO | Kubernetes - Pod | - | Kubernetes - Node | BYIMAGE_NUM | COUNT
TSCO | Kubernetes - Node | MEM_USED / TOTAL_REAL_MEM | Kubernetes - Node | MEM_UTIL | -
TSCO | Kubernetes - Pod | CONTAINER_K_CPU_REQUEST | Kubernetes - Namespace | BYIMAGE_CPU_REQUEST | SUM
TSCO | Kubernetes - Pod | CONTAINER_K_MEM_REQUEST | Kubernetes - Namespace | BYIMAGE_MEM_REQUEST | SUM
TSCO | Kubernetes - Pod | CONTAINER_NUM | Kubernetes - Namespace | BYIMAGE_NUM | SUM
TSCO | Kubernetes - Pod | KPOD_STATUS | Kubernetes - Namespace | BYSTATUS_KPOD_NUM | SUM
TSCO | Kubernetes - Pod | CONTAINER_NUM | Kubernetes - Namespace | CONTAINER_NUM | SUM
TSCO | Kubernetes - Pod | - | Kubernetes - Namespace | KPOD_NUM | COUNT
TSCO | Kubernetes - Namespace | CPU_USED_NUM / CPU_LIMIT | Kubernetes - Namespace | CPU_UTIL | -
TSCO | Kubernetes - Pod | spec.template.spec.containers + replicas | Kubernetes - Deployment | CONTAINER_NUM | -
TSCO | Kubernetes - Pod | CONTAINER_K_CPU_LIMIT | Kubernetes - Deployment | CPU_LIMIT | SUM
TSCO | Kubernetes - Pod | CONTAINER_K_CPU_REQUEST | Kubernetes - Deployment | CPU_REQUEST | SUM
TSCO | Kubernetes - Pod | CPU_USED_NUM | Kubernetes - Deployment | CPU_USED_NUM | SUM
TSCO | Kubernetes - Pod | CONTAINER_K_MEM_LIMIT | Kubernetes - Deployment | MEM_KLIMIT | SUM
TSCO | Kubernetes - Pod | CONTAINER_K_MEM_REQUEST | Kubernetes - Deployment | MEM_REQUEST | SUM
TSCO | Kubernetes - Pod | MEM_USED | Kubernetes - Deployment | MEM_USED | SUM
TSCO | Kubernetes - Pod | NET_IN_BYTE_RATE | Kubernetes - Deployment | NET_IN_BYTE_RATE | SUM
TSCO | Kubernetes - Pod | NET_IN_ERROR_RATE | Kubernetes - Deployment | NET_IN_ERROR_RATE | SUM
TSCO | Kubernetes - Pod | NET_OUT_ERROR_RATE | Kubernetes - Deployment | NET_OUT_ERROR_RATE | SUM
TSCO | Kubernetes - Pod | NET_OUT_BYTE_RATE | Kubernetes - Deployment | NET_OUT_BYTE_RATE | SUM
TSCO | Kubernetes - Pod | NET_BIT_RATE | Kubernetes - Deployment | NET_BIT_RATE | SUM
TSCO | Kubernetes - Pod | MEM_USED / MEM_KLIMIT | Kubernetes - Pod | MEM_UTIL_LIMIT | -
TSCO | Kubernetes - Pod | MEM_USED / MEM_REQUEST | Kubernetes - Pod | MEM_UTIL_REQUEST | -
TSCO | Kubernetes - Pod | CPU_USED_NUM / CPU_LIMIT | Kubernetes - Pod | CPU_UTIL_LIMIT | -
TSCO | Kubernetes - Pod | CPU_USED_NUM / CPU_REQUEST | Kubernetes - Pod | CPU_UTIL_REQUEST | -
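As an illustration of the derived utilization metrics: a node with CPU_USED_NUM = 2 and CPU_NUM = 8 yields CPU_UTIL = 2 / 8 = 25%, and a pod with MEM_USED = 512 MB and MEM_KLIMIT = 1024 MB yields MEM_UTIL_LIMIT = 50%.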

For more details about the Kubernetes entities and resource management concepts, please refer to the official project documentation.

Lookup Field Considerations

Entity Type | Strong Lookup Field | Others
----------- | ------------------- | ------
Kubernetes - Cluster | KUBE_CLUSTER&&KUBE_TYPE |
Kubernetes - Namespace | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_NS_NAME |
Kubernetes - Node | KUBE_CLUSTER&&KUBE_TYPE&&HOSTNAME&&NAME | _COMPATIBILITY_
Kubernetes - Controller | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_NS_NAME&&KUBE_DP_NAME |
Kubernetes - Pod | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_NS_NAME&&KUBE_POD_NAME |
Kubernetes - Persistent Volume | KUBE_CLUSTER&&KUBE_TYPE&&KUBE_PV_NAME |


Tag Mapping (Optional)


The connector is able to import labels, which are key/value pairs attached to entities, as they are configured within the Kubernetes cluster.

The keys of the labels are imported as the Tag type, and the values are imported as the Tag value from the Kubernetes API. The Tag type uses underscore (_) as the word delimiter.

When specifying the labels to import in the configuration, specify the keys of the labels. They can be in the original format as they appear in the Kubernetes API, or they can use underscore (_) as the delimiter. For example, node_role_kubernetes_io_compute and node-role.kubernetes.io/compute are equivalent, and both are imported as node_role_kubernetes_io_compute.

In particular, the entities for which labels are imported are:

Data Source | Data Source Entity | Data Source Metric | TSCO Entity
----------- | ------------------ | ------------------ | -----------
Kubernetes API | /apis/apps/v1/deployments | items.metadata.labels | Kubernetes - Deployment
Kubernetes API | /api/v1/nodes | items.metadata.labels | Kubernetes - Node
Kubernetes API | /api/v1/namespaces | items.metadata.labels | Kubernetes - Namespace
Kubernetes API | /api/v1/pods | items.metadata.labels | Kubernetes - Pod
Kubernetes API | /api/v1/persistentvolumes | items.metadata.labels | Kubernetes - Persistent Volume
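To see which label keys exist on your cluster, and could therefore be listed in the "Select Tags to Load" property, you can inspect the entities with kubectl, for example:

Listing Available Labels
$ kubectl get nodes --show-labels
$ kubectl get pods --show-labels --all-namespaces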

