Kubernetes
“Moviri Integrator for TrueSight Capacity Optimization – Kubernetes” is an additional component of the BMC TrueSight Capacity Optimization product. It extracts data from Kubernetes, a leading cluster management system for cloud-native containerized environments. Relevant capacity metrics are loaded into BMC TrueSight Capacity Optimization, which provides advanced analytics over the extracted data in the form of an interactive dashboard, the Kubernetes View.
The integration supports the extraction of both performance and configuration data across the different components of the Kubernetes system and can be configured via parameters that allow entity filtering and many other settings. Furthermore, the connector is able to replicate relationships and logical dependencies among entities such as clusters, nodes, namespaces, deployments, and pods.
The documentation is targeted at BMC TrueSight Capacity Optimization administrators in charge of configuring and monitoring the integration between BMC TrueSight Capacity Optimization and Kubernetes.
- Requirements
- Installation
- Datasource Check and Configuration
- Connector configuration
- Supported Entities
- Supported Metrics
- Hierarchy
Requirements
Supported versions of data source software
- Kubernetes versions 1.5 to 1.12
- Rancher 1.6 to 2.3 (when managing a Kubernetes cluster version 1.5 to 1.7)
- Openshift 3.9 to 3.12
Please note that the integration requires Heapster to be running on the Kubernetes cluster.
Supported configurations of data source software
The Kubernetes connector requires the Heapster monitoring component of Kubernetes to be continuously and correctly monitoring the various entities supported by the integration (the full list is available below). Failure to meet this requirement will result in gaps in data coverage.
Also, the connector requires access to the Kubernetes API. This is always enabled in any Kubernetes setup, hence no additional configuration is required.
Supported versions of TrueSight Capacity Optimization
- Supported TrueSight Capacity Optimization versions: 10.7.01 onward
Installation
Downloading the additional package
The ETL module is made available in the form of an additional component, which you may download from the BMC electronic distribution site (EPD) or retrieve from your content media.
Installing the additional package
To install the connector in the form of a TrueSight Capacity Optimization additional package, refer to the Performing system maintenance tasks instructions.
Datasource Check and Configuration
Preparing to connect to the data source software
The Kubernetes connector adopts a specific integration strategy to be able to extract all of the key metrics of Kubernetes environments for capacity management in a scalable fashion. In particular, the connector integrates with two data sources:
- Kubernetes API: to extract entity catalogs, relationships and relevant configuration properties
- Heapster: to extract key performance metrics related to the managed entities (e.g. nodes, pods, etc.)
The next sections outline the configuration required for the two data sources.
Kubernetes API
To access the Kubernetes API, the Kubernetes connector uses a service account. Authentication is performed using the service account token. Additionally, to prevent accidental changes, the integrator service account will be granted read-only privileges and will be allowed to query only a specific set of API endpoints.
The following is an example procedure for creating the service account in a Kubernetes cluster using the kubectl CLI.
Create a Service Account
First of all, create the service account to be used by the Kubernetes connector:
$ kubectl create serviceaccount tsco
Then, describe the service account to discover which secret is associated with it:
$ kubectl describe serviceaccount tsco
Name: tsco
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: tsco-token-6x9vs
Tokens: tsco-token-6x9vs
Now, describe the secret to get the corresponding token:
$ kubectl describe secret tsco-token-6x9vs
Name: tsco-token-6x9vs
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name=tsco
             kubernetes.io/service-account.uid=07bca5e7-7c3e-11e7-87bc-42010a8e0002
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InRzY28tdG9rZW4tNng5dnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoidHNjbyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA3YmNhNWU3LTdjM2UtMTFlNy04N2JjLTQyMDEwYThlMDAwMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnRzY28ifQ.tA6c9AsVJ0QKD0s-g9JoBdWDfhBvClJDiZGqDW6doS0rNZ5-dwXCJTss97PXfCxd5G8Q_nxg-elB8rV805K-j8ogf5Ykr-JLsAbl9OORRzcUCShYVF1r7O_-ikGg7abtIPh_mE5eAgHkJ1P6ODvaZG_0l1fak4BxZMTVfzzelvHpVlLpJZObd7eZOEtEEEkcAhZ2ajLQoxLucReG2A25_SrVJ-6c82BWBKQHcTBL9J2d0iHBHv-zjJzXHQ07F62vpc3Q6QI_rOvaJgaK2pMJYdQymFff8OfVMDQhp9LkOkxBPuJPmNHmHJxSvCcvpNtVMz-Hd495vruZFjtwYYygBQ
The token data ("eyJhb ... YygBQ") will be used by the Kubernetes integrator to authenticate against the API. Save the token, as it will be required at ETL creation time.
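As an alternative to copying the token from the describe output, it can be read directly from the secret. A minimal sketch, assuming the secret name from the example above (tsco-token-6x9vs); substitute the name reported for your service account:

```
kubectl get secret tsco-token-6x9vs -o jsonpath='{.data.token}' | base64 --decode
```

The token is stored base64-encoded in the secret, hence the decoding step.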
Grant the service account read-only privileges
The following section outlines an example configuration on the Kubernetes cluster, suggested in order to allow API access to the service account used by the integrator. We provide example configurations for the two most common authorization schemes used in Kubernetes clusters, namely RBAC (Role-Based Access Control) and ABAC (Attribute-Based Access Control). To identify which mode is configured in your Kubernetes cluster, please refer to the official project documentation: https://kubernetes.io/docs/admin/authorization/
RBAC authorization
RBAC is the authorization mode enabled by default from Kubernetes 1.6 onward. To grant read-only privileges to the connector service account, a new cluster role is created. The new cluster role grants the connector read-only access to a specific set of API operations and entities.
Here is an example policy file that can be used for this purpose:
$ cat tsco-cluster-role.yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: tsco-cluster-role
rules:
- apiGroups: [""]
  resources: ["pods", "nodes", "namespaces", "replicationcontrollers", "persistentvolumes", "resourcequotas", "limitranges", "persistentvolumeclaims"]
  verbs: ["get", "list"]
- apiGroups: ["extensions"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list"]
Now, create the cluster role and associate it to the connector service account:
kubectl create -f tsco-cluster-role.yml
kubectl create clusterrolebinding tsco-view-binding --clusterrole=tsco-cluster-role --serviceaccount=default:tsco
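To sanity-check the binding, kubectl can impersonate the service account with `kubectl auth can-i`. Assuming the cluster role above was applied as-is, read operations should be allowed and write operations denied:

```
# Expected to answer "yes": the cluster role grants get/list on pods
kubectl auth can-i list pods --as=system:serviceaccount:default:tsco

# Expected to answer "no": the cluster role grants no write verbs
kubectl auth can-i delete pods --as=system:serviceaccount:default:tsco
```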
ABAC authorization
ABAC authorization grants access rights to users and service accounts via policies configured in a policy file. This file is then used by the Kubernetes API server via the startup parameter --authorization-policy-file.
In order to allow read-only access for the integrator service account, the following policy line needs to be appended to the aforementioned policy file:
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:serviceaccount:default:tsco", "namespace": "*", "resource": "*", "apiGroup": "*", "readonly": true, "nonResourcePath": "*"}}
The API server will need to be restarted to pick up the new policy line.
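The restart procedure depends on how the control plane is deployed. As a sketch only, on a master node where the API server runs as a systemd service, the restart might look like the following (the unit name is an assumption; check your distribution):

```
# Hypothetical systemd unit name; verify how your distribution manages the apiserver
sudo systemctl restart kube-apiserver
```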
Test the configuration
After having performed the above configuration, it is useful to verify that the service account can successfully connect to the Kubernetes API and execute the intended operations.
To verify, execute the curl Linux command from one of the TrueSight Capacity Optimization servers:
$ curl -s https://<KUBERNETES_API_HOSTNAME>:<KUBERNETES_API_PORT>/api/v1/nodes -k --header "Authorization: Bearer <KUBERNETES_CONNECTOR_TOKEN>"
You should get a JSON document describing the nodes comprising the Kubernetes cluster.
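To quickly confirm that the response contains the expected node list, the JSON document can be filtered, for example with jq (assuming it is installed on the server); the placeholders are the same as in the curl command above:

```
curl -s https://<KUBERNETES_API_HOSTNAME>:<KUBERNETES_API_PORT>/api/v1/nodes -k \
  --header "Authorization: Bearer <KUBERNETES_CONNECTOR_TOKEN>" \
  | jq -r '.items[].metadata.name'
```

Each line of the output is the name of one cluster node.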
Heapster
To ensure scalable collection of cluster performance metrics, the Kubernetes integrator leverages Heapster's capability to push collected metrics to external data collection systems, called 'sinks'. Heapster supports a variety of sinks, for example InfluxDB or OpenTSDB. The Kubernetes connector acts as a Heapster sink of type OpenTSDB.
As such, the integration architecture for the collection of performance metrics works as follows:
- On the Kubernetes cluster: the Heapster monitoring component sends data to the Kubernetes integrator, which acts as a new sink
- On the TrueSight Capacity Optimization side: the Kubernetes connector listens on a TCP port on a designated TrueSight Capacity Optimization server and collects the data pushed by Heapster
In order to implement the integration, the connector requires that Heapster be configured with an additional sink of type OpenTSDB. To perform this operation, add the following option to the Heapster startup command:
--sink=opentsdb:http://<TRUESIGHT_CAPACITY_OPTIMIZATION_SERVER_HOSTNAME>:<TRUESIGHT_CAPACITY_OPTIMIZATION_KUBERNETES_CONNECTOR_PORT>?cluster=<NAME_OF_THE_KUBERNETES_CLUSTER>
where:
- TRUESIGHT_CAPACITY_OPTIMIZATION_SERVER_HOSTNAME is the hostname (or IP address) of the TrueSight Capacity Optimization application server (or ETL engine) where the Kubernetes connector will be deployed
- TRUESIGHT_CAPACITY_OPTIMIZATION_KUBERNETES_CONNECTOR_PORT is the TCP port on which the Kubernetes connector will listen. This must be set to the same value as the "TSCO Listening Port" in the ETL configuration page.
- NAME_OF_THE_KUBERNETES_CLUSTER is a string representing the cluster name
In most Kubernetes clusters, Heapster runs as a deployment within the kube-system namespace. In this scenario, the above configuration change can be carried out by changing the relevant section of the Heapster deployment YAML manifest. After the change has been applied, the Heapster pod needs to be restarted for the changes to take effect. Please refer to the documentation of your Kubernetes vendor for more details on the actual Heapster configuration.
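As an illustration only (the actual manifest varies by distribution and Heapster version), the container command section of the Heapster deployment might look as follows after the change; the image tag, hostname, port, and cluster name below are example values:

```
# Fragment of a hypothetical Heapster deployment manifest (kube-system namespace);
# only the container command is shown
containers:
- name: heapster
  image: k8s.gcr.io/heapster-amd64:v1.5.4
  command:
  - /heapster
  - --source=kubernetes:https://kubernetes.default
  - --sink=opentsdb:http://tsco.example.com:4242?cluster=production
```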
Connector configuration
The following table shows the properties specific to this connector; all other generic properties are documented here.
Property Name | Value Type | Required? | Default | Description |
--- | --- | --- | --- | --- |
Kubernetes Connection | | | | |
Kubernetes Host | String | Yes | | Kubernetes API server hostname. For OpenShift, use the OpenShift console FQDN (e.g., console.ose.bmc.com). For Rancher, use the URL of the Kubernetes API server with the protocol and port removed (e.g., if your Kubernetes API server is accessible at the URL "http://rancher.bmc.com:8080/r/projects/1a16/kubernetes:6443", use "rancher.bmc.com:8080/r/projects/1a16/kubernetes" as the value for this configuration parameter) |
Kubernetes API Port | Number | Yes | | Kubernetes API server port. For OpenShift, use the same port as the console (typically 8443). For Rancher, use the port of the Kubernetes API server (typically 6443) |
Kubernetes API Version | Number | Yes | | Kubernetes API version |
Kubernetes API Protocol | String | Yes | | Kubernetes API protocol, "https" in most cases |
Kubernetes Authentication Token | String | Yes | | Token of the integrator service account (see the data source configuration section) |
TSCO Listening Port | Number | Yes | | The TCP port on which the Kubernetes connector listens for data pushed by Heapster. Acceptable values are 1024 to 65535. Ensure that this port is not already in use by any other process before running the ETL |
Kubernetes Extraction Filters | | | | |
Select only PODs on the following nodes | String | No | | Extracts information only for the pods that are currently running on the specified nodes. Multiple node names can be separated by semicolons. Each name can contain '%' and '_', as in a SQL LIKE expression |
Select only PODs on the following namespaces | String | No | | Extracts information only for the pods that are currently running in the specified namespaces. Multiple namespace names can be separated by semicolons. Each name can contain '%' and '_', as in a SQL LIKE expression |
Select only PODs on the following deployments | String | No | | Extracts information only for the pods that are currently running in the specified deployments. Multiple deployment names can be separated by semicolons. Each name can contain '%' and '_', as in a SQL LIKE expression |
Exclude PODs on the following nodes | String | No | | Does not extract information for the pods that are currently running on the specified nodes. Multiple node names can be separated by semicolons. Each name can contain '%' and '_', as in a SQL LIKE expression |
Exclude PODs on the following namespaces | String | No | | Does not extract information for the pods that are currently running in the specified namespaces. Multiple namespace names can be separated by semicolons. Each name can contain '%' and '_', as in a SQL LIKE expression |
Exclude PODs on the following deployments | String | No | | Does not extract information for the pods that are currently running in the specified deployments. Multiple deployment names can be separated by semicolons. Each name can contain '%' and '_', as in a SQL LIKE expression |
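As noted for the TSCO Listening Port property, the port must be free before the ETL runs. A minimal pre-check that can be run on the TrueSight Capacity Optimization server (the port number below is an example):

```shell
# Pre-check that the chosen listening port is free on this server.
# 4242 is an example; use the value configured as "TSCO Listening Port".
PORT=4242
if ss -ltn 2>/dev/null | grep -q ":${PORT}[[:space:]]"; then
  echo "Port ${PORT} is already in use"
else
  echo "Port ${PORT} is free"
fi
```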
The following image shows the list of options in the ETL configuration menu, including the advanced entries.
Supported Entities
The following entities are supported:
- Kubernetes Cluster
- Kubernetes Node
- Kubernetes Namespace
- Kubernetes Deployment
- Kubernetes Pod
- Kubernetes Persistent Volume
Supported Metrics
The following sections describe the metrics that are managed by the Kubernetes connector.
Datasource Metrics
This section describes the metrics that are imported by the Kubernetes connector from the data source and are assigned to corresponding TrueSight Capacity Optimization entities and metrics, with minimal transformations.
Data Source | Data Source Entity | Data Source Metric | BMC TrueSight Capacity Optimization Entity | BMC TrueSight Capacity Optimization Metric | Factor |
--- | --- | --- | --- | --- | --- |
Kubernetes API | /api/v1/deployments | spec.template.spec.containers.requests.cpu | Kubernetes - Deployment | BYIMAGE_CPU_REQUEST | |
Kubernetes API | /api/v1/deployments | spec.template.spec.containers.requests.memory | Kubernetes - Deployment | BYIMAGE_MEM_REQUEST | |
Kubernetes API | /api/v1/deployments | item.metadata.creationTimestamp | Kubernetes - Deployment | CREATION_TIME | |
Kubernetes API | /api/v1/deployments | deployment.kind | Kubernetes - Deployment | DEPLOYMENT_TYPE | |
Kubernetes API | /api/v1/deployments | item.status.availableReplicas | Kubernetes - Deployment | KPOD_REPLICA_UPTODATE_NUM | |
Kubernetes API | /api/v1/deployments | spec.template.spec.containers.image | Kubernetes - Deployment | BYIMAGE_NUM | |
Heapster | NAMESPACE | cpu_limit_gauge | Kubernetes - Namespace | CPU_LIMIT | 0.001 |
Kubernetes API | /api/v1/namespaces | items.status.hard["limits.cpu"] | Kubernetes - Namespace | CPU_LIMIT_MAX | |
Heapster | NAMESPACE | cpu_request_gauge | Kubernetes - Namespace | CPU_REQUEST | 0.001 |
Kubernetes API | /api/v1/namespaces | items.status.hard["requests.cpu"] | Kubernetes - Namespace | CPU_REQUEST_MAX | |
Heapster | NAMESPACE | cpu_usage_rate_gauge | Kubernetes - Namespace | CPU_USED_NUM | 0.001 |
Kubernetes API | /api/v1/namespaces | item.metadata.creationTimestamp | Kubernetes - Namespace | CREATION_TIME | |
Kubernetes API | /api/v1/namespaces | items.status.hard["pods"] | Kubernetes - Namespace | KPOD_NUM_MAX | |
Heapster | NAMESPACE | memory_limit_gauge | Kubernetes - Namespace | MEM_KLIMIT | |
Kubernetes API | /api/v1/namespaces | items.spec.limits.max.memory | Kubernetes - Namespace | MEM_LIMIT_MAX | |
Heapster | NAMESPACE | memory_request_gauge | Kubernetes - Namespace | MEM_REQUEST | |
Kubernetes API | /api/v1/namespaces | items.spec.limits.max.memory | Kubernetes - Namespace | MEM_REQUEST_MAX | |
Heapster | NAMESPACE | memory_usage_gauge | Kubernetes - Namespace | MEM_USED | |
Kubernetes API | /api/v1/nodes | images | Kubernetes - Node | CONTAINER_NUM | |
Heapster | NODE | cpu_limit_gauge | Kubernetes - Node | CPU_LIMIT | |
Kubernetes API | /api/v1/nodes | status.capacity.cpu | Kubernetes - Node | CPU_NUM | |
Heapster | NODE | cpu_request_gauge | Kubernetes - Node | CPU_REQUEST | |
Heapster | NODE | cpu_usage_rate_gauge | Kubernetes - Node | CPU_USED_NUM | |
Kubernetes API | /api/v1/nodes | metadata.creationTimestamp | Kubernetes - Node | CREATION_TIME | |
Kubernetes API | /api/v1/nodes | status.capacity.pods | Kubernetes - Node | KPOD_NUM_MAX | |
Kubernetes API | /api/v1/nodes | status.nodeInfo.kubeletVersion | Kubernetes - Node | KUBERNETES_VERSION | |
Heapster | NODE | memory_working_set_gauge | Kubernetes - Node | MEM_ACTIVE | |
Heapster | NODE | memory_limit_gauge | Kubernetes - Node | MEM_KLIMIT | |
Heapster | NODE | memory_major_page_faults_rate_gauge | Kubernetes - Node | MEM_PAGE_MAJOR_FAULT_RATE | |
Heapster | NODE | memory_request_gauge | Kubernetes - Node | MEM_REQUEST | |
Heapster | NODE | memory_usage_gauge | Kubernetes - Node | MEM_USED | |
Heapster | NODE | network_rx_rate_gauge | Kubernetes - Node | NET_IN_BYTE_RATE | |
Heapster | NODE | network_rx_errors_rate_gauge | Kubernetes - Node | NET_IN_ERROR_RATE | |
Heapster | NODE | network_tx_rate_gauge | Kubernetes - Node | NET_OUT_BYTE_RATE | |
Heapster | NODE | network_tx_errors_rate_gauge | Kubernetes - Node | NET_OUT_ERROR_RATE | |
Kubernetes API | /api/v1/nodes | status.nodeInfo.osImage | Kubernetes - Node | OS_TYPE | |
Kubernetes API | /api/v1/nodes | status.capacity.memory | Kubernetes - Node | TOTAL_REAL_MEM | |
Heapster | NODE | uptime_cumulative | Kubernetes - Node | UPTIME | |
Kubernetes API | /api/v1/persistentvolumes | metadata.creationTimestamp | Kubernetes - Persistent Volume | CREATION_TIME | |
Kubernetes API | /api/v1/persistentvolumeclaims | spec.resources.requests.storage | Kubernetes - Persistent Volume | ST_ALLOCATED | |
Kubernetes API | /api/v1/persistentvolumes | hostPath.path glusterfs.path nfs.path | Kubernetes - Persistent Volume | ST_PATH | |
Kubernetes API | /api/v1/persistentvolumeclaims | spec.capacity.storage | Kubernetes - Persistent Volume | ST_SIZE | |
Kubernetes API | /api/v1/persistentvolumes | item.spec | Kubernetes - Persistent Volume | ST_TYPE | |
Kubernetes API | /api/v1/pods | spec.containers.resources.requests.cpu | Kubernetes - Pod | BYIMAGE_CPU_REQUEST | |
Kubernetes API | /api/v1/pods | spec.containers.resources.limits.cpu | Kubernetes - Pod | BYIMAGE_MEM_REQUEST | |
Kubernetes API | /api/v1/pods | spec.containers | Kubernetes - Pod | CONTAINER_NUM | |
Heapster | POD | cpu_limit_gauge | Kubernetes - Pod | CPU_LIMIT | 0.001 |
Heapster | POD | cpu_request_gauge | Kubernetes - Pod | CPU_REQUEST | 0.001 |
Heapster | POD | cpu_usage_rate_gauge | Kubernetes - Pod | CPU_USED_NUM | 0.001 |
Kubernetes API | /api/v1/pods | metadata.creationTimestamp | Kubernetes - Pod | CREATION_TIME | |
Kubernetes API | /api/v1/pods | status.hostIP | Kubernetes - Pod | HOST_NAME | |
Kubernetes API | /api/v1/pods | status.phase | Kubernetes - Pod | KPOD_STATUS | |
Heapster | POD | memory_working_set_gauge | Kubernetes - Pod | MEM_ACTIVE | |
Heapster | POD | memory_limit_gauge | Kubernetes - Pod | MEM_KLIMIT | |
Heapster | POD | memory_major_page_faults_rate_gauge | Kubernetes - Pod | MEM_PAGE_MAJOR_FAULT_RATE | |
Heapster | POD | memory_request_gauge | Kubernetes - Pod | MEM_REQUEST | |
Heapster | POD | memory_usage_gauge | Kubernetes - Pod | MEM_USED | |
Heapster | POD | network_rx_rate_gauge | Kubernetes - Pod | NET_IN_BYTE_RATE | |
Heapster | POD | network_tx_rate_gauge | Kubernetes - Pod | NET_OUT_BYTE_RATE | |
Heapster | POD | network_rx_errors_rate_gauge | Kubernetes - Pod | NET_IN_ERROR_RATE | |
Heapster | POD | network_tx_errors_rate_gauge | Kubernetes - Pod | NET_OUT_ERROR_RATE | |
Derived Metrics
This section describes the metrics that are derived by the Kubernetes connector from the data source metrics for the purpose of supporting a wide range of capacity management use cases and analyses.
Data Source | Data Source Entity | Data Source Metric | Target Entity | Target Metric | Aggregation type |
--- | --- | --- | --- | --- | --- |
TSCO | Kubernetes - Namespace | BYIMAGE_CPU_REQUEST | Kubernetes - Cluster | BYIMAGE_CPU_REQUEST | SUM |
TSCO | Kubernetes - Namespace | BYIMAGE_MEM_REQUEST | Kubernetes - Cluster | BYIMAGE_MEM_REQUEST | SUM |
TSCO | Kubernetes - Namespace | BYIMAGE_NUM | Kubernetes - Cluster | BYIMAGE_NUM | SUM |
TSCO | Kubernetes - Namespace | BYSTATUS_KPOD_NUM | Kubernetes - Cluster | BYSTATUS_KPOD_NUM | SUM |
TSCO | Kubernetes - Namespace | CONTAINER_NUM | Kubernetes - Cluster | CONTAINER_NUM | SUM |
TSCO | Kubernetes - Node | CPU_LIMIT | Kubernetes - Cluster | CPU_LIMIT | SUM |
TSCO | Kubernetes - Node | CPU_NUM | Kubernetes - Cluster | CPU_NUM | SUM |
TSCO | Kubernetes - Node | CPU_REQUEST | Kubernetes - Cluster | CPU_REQUEST | SUM |
TSCO | Kubernetes - Node | CPU_USED_NUM | Kubernetes - Cluster | CPU_USED_NUM | SUM |
TSCO | Kubernetes - Namespace | DEPLOYMENT_NUM | Kubernetes - Cluster | DEPLOYMENT_NUM | SUM |
TSCO | Kubernetes - Namespace | KPOD_NUM | Kubernetes - Cluster | KPOD_NUM | SUM |
TSCO | Kubernetes - Node | KPOD_NUM_MAX | Kubernetes - Cluster | KPOD_NUM_MAX | SUM |
TSCO | Kubernetes - Node | KUBERNETES_VERSION | Kubernetes - Cluster | KUBERNETES_VERSION | - |
TSCO | Kubernetes - Node | MEM_ACTIVE | Kubernetes - Cluster | MEM_ACTIVE | SUM |
TSCO | Kubernetes - Node | MEM_KLIMIT | Kubernetes - Cluster | MEM_KLIMIT | SUM |
TSCO | Kubernetes - Node | MEM_PAGE_MAJOR_FAULT_RATE | Kubernetes - Cluster | MEM_PAGE_MAJOR_FAULT_RATE | SUM |
TSCO | Kubernetes - Node | MEM_REQUEST | Kubernetes - Cluster | MEM_REQUEST | SUM |
TSCO | Kubernetes - Node | MEM_USED | Kubernetes - Cluster | MEM_USED | SUM |
TSCO | Kubernetes - Namespace | SECRET_NUM | Kubernetes - Cluster | SECRET_NUM | SUM |
TSCO | Kubernetes - Namespace | SERVICE_NUM | Kubernetes - Cluster | SERVICE_NUM | SUM |
TSCO | Kubernetes - Persistent Volume | ST_ALLOCATED | Kubernetes - Cluster | ST_ALLOCATED | SUM |
TSCO | Kubernetes - Persistent Volume | ST_SIZE | Kubernetes - Cluster | ST_SIZE | SUM |
TSCO | Kubernetes - Node | TOTAL_REAL_MEM | Kubernetes - Cluster | TOTAL_REAL_MEM | SUM |
TSCO | Kubernetes - Node | CPU_USED_NUM / CPU_NUM | Kubernetes - Node | CPU_UTIL | - |
TSCO | Kubernetes - Pod | - | Kubernetes - Node | KPOD_NUM | COUNT |
TSCO | Kubernetes - Pod | CONTAINER_K_CPU_REQUEST | Kubernetes - Node | BYIMAGE_CPU_REQUEST | SUM |
TSCO | Kubernetes - Pod | CONTAINER_K_CPU_REQUEST | Kubernetes - Node | BYIMAGE_MEM_REQUEST | SUM |
TSCO | Kubernetes - Pod | - | Kubernetes - Node | BYIMAGE_NUM | COUNT |
TSCO | Kubernetes - Node | MEM_USED / TOTAL_REAL_MEM | Kubernetes - Node | MEM_UTIL | - |
TSCO | Kubernetes - Pod | CONTAINER_K_CPU_REQUEST | Kubernetes - Namespace | BYIMAGE_CPU_REQUEST | SUM |
TSCO | Kubernetes - Pod | CONTAINER_K_MEM_REQUEST | Kubernetes - Namespace | BYIMAGE_MEM_REQUEST | SUM |
TSCO | Kubernetes - Pod | CONTAINER_NUM | Kubernetes - Namespace | BYIMAGE_NUM | SUM |
TSCO | Kubernetes - Pod | KPOD_STATUS | Kubernetes - Namespace | BYSTATUS_KPOD_NUM | SUM |
TSCO | Kubernetes - Pod | CONTAINER_NUM | Kubernetes - Namespace | CONTAINER_NUM | SUM |
TSCO | Kubernetes - Pod | - | Kubernetes - Namespace | KPOD_NUM | COUNT |
TSCO | Kubernetes - Namespace | CPU_USED_NUM / CPU_LIMIT | Kubernetes - Namespace | CPU_UTIL | - |
TSCO | Kubernetes - Pod | spec.template.spec.containers + replicas | Kubernetes - Deployment | CONTAINER_NUM | - |
TSCO | Kubernetes - Pod | CONTAINER_K_CPU_LIMIT | Kubernetes - Deployment | CPU_LIMIT | SUM |
TSCO | Kubernetes - Pod | CONTAINER_K_CPU_REQUEST | Kubernetes - Deployment | CPU_REQUEST | SUM |
TSCO | Kubernetes - Pod | cpu_used_num | Kubernetes - Deployment | CPU_USED_NUM | SUM |
TSCO | Kubernetes - Pod | CONTAINER_K_MEM_LIMIT | Kubernetes - Deployment | MEM_KLIMIT | SUM |
TSCO | Kubernetes - Pod | CONTAINER_K_MEM_REQUEST | Kubernetes - Deployment | MEM_REQUEST | SUM |
TSCO | Kubernetes - Pod | mem_used | Kubernetes - Deployment | MEM_USED | SUM |
TSCO | Kubernetes - Pod | NET_IN_BYTE_RATE | Kubernetes - Deployment | NET_IN_BYTE_RATE | SUM |
TSCO | Kubernetes - Pod | NET_IN_ERROR_RATE | Kubernetes - Deployment | NET_IN_ERROR_RATE | SUM |
TSCO | Kubernetes - Pod | NET_OUT_ERROR_RATE | Kubernetes - Deployment | NET_OUT_ERROR_RATE | SUM |
TSCO | Kubernetes - Pod | NET_OUT_BYTE_RATE | Kubernetes - Deployment | NET_OUT_BYTE_RATE | SUM |
TSCO | Kubernetes - Pod | MEM_USED / MEM_KLIMIT | Kubernetes - Pod | MEM_UTIL_LIMIT | - |
TSCO | Kubernetes - Pod | MEM_USED / MEM_REQUEST | Kubernetes - Pod | MEM_UTIL_REQUEST | - |
TSCO | Kubernetes - Pod | CPU_USED_NUM / CPU_LIMIT | Kubernetes - Pod | CPU_UTIL_LIMIT | - |
TSCO | Kubernetes - Pod | CPU_USED_NUM / CPU_REQUEST | Kubernetes - Pod | CPU_UTIL_REQUEST | - |
For more details about the Kubernetes entities and resource management concepts, please refer to the official project documentation:
- Compute Resource Management: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
- Resource Quotas: https://kubernetes.io/docs/concepts/policy/resource-quotas
Hierarchy
The connector is able to replicate relationships and logical dependencies among these entities as they are found configured within the Kubernetes cluster.
In particular, the following structure is applied:
- a Kubernetes Cluster is attached to the root of the hierarchy
- each Kubernetes Cluster contains its own Nodes, Namespaces and Persistent Volumes
- each Kubernetes Namespace contains its own Deployments and (standalone) Pods
The following image shows a sample hierarchy.