Moviri Integrator for TrueSight Capacity Optimization - Kubernetes

“Moviri Integrator for TrueSight Capacity Optimization – Kubernetes” is an additional component of the BMC TrueSight Capacity Optimization product. It extracts data from the Kubernetes cluster management system, a leading solution for managing cloud-native containerized environments. Relevant capacity metrics are loaded into BMC TrueSight Capacity Optimization, which provides advanced analytics over the extracted data in the form of an interactive dashboard, the Kubernetes View.

The integration supports the extraction of both performance and configuration data across the different components of the Kubernetes system and can be configured via parameters that allow entity filtering and many other settings. Furthermore, the connector is able to replicate relationships and logical dependencies among entities such as clusters, nodes, namespaces, deployments and pods.

This documentation is targeted at the BMC TrueSight Capacity Optimization administrators in charge of configuring and monitoring the integration between BMC TrueSight Capacity Optimization and Kubernetes.

Please consider the following points when selecting the version of the "Moviri Integrator for TrueSight Capacity Optimization - Kubernetes":

  • v.2.0.x is available for BMC TrueSight Capacity Optimization v.10.7, v.11 and v.11.3
  • v.2.1.x, v.19.11.x and v.20.02.x are available for BMC TrueSight Capacity Optimization v.11.x and v.20.x


Supported versions of data source software

  • Kubernetes versions 1.5 to 1.12
  • Rancher 1.6 to 2.3 (when managing a Kubernetes cluster version 1.5 to 1.7)
  • OpenShift 3.9 to 3.12

Please note that the integration requires Heapster to be running on the Kubernetes cluster.

Supported configurations of data source software

The Kubernetes connector requires the Heapster monitoring component of Kubernetes to be continuously and correctly monitoring the various entities supported by the integration (the full list is available below). Any gap in meeting this requirement will result in a corresponding gap in data coverage.

The connector also requires access to the Kubernetes API. The API is enabled in any standard Kubernetes setup, so no additional configuration is typically required.

Supported versions of TrueSight Capacity Optimization

  • Supported TrueSight Capacity Optimization versions: 10.7.01 onward


Downloading the additional package

The ETL module is made available in the form of an additional component, which you can download from the BMC electronic product distribution site (EPD) or retrieve from your content media.

Installing the additional package

To install the connector in the form of a TrueSight Capacity Optimization additional package, refer to the Performing system maintenance tasks instructions.

Datasource Check and Configuration

Preparing to connect to the data source software

The Kubernetes connector adopts a specific integration strategy to be able to extract all of the key metrics of Kubernetes environments for capacity management in a scalable fashion. In particular, the connector integrates with two data sources:

  • Kubernetes API: to extract entity catalogs, relationships and relevant configuration properties 
  • Heapster: to extract key performance metrics related to the managed entities (e.g. nodes, pods, etc.)

The next sections outline the configuration required for the two data sources.

Kubernetes API

To access the Kubernetes API, the Kubernetes connector uses a Service Account. Authentication is performed using the service account token. Additionally, to prevent accidental changes, the integrator service account is granted read-only privileges and is allowed to query only a specific set of API endpoints.

The following is an example procedure to create the service account in a Kubernetes cluster using the kubectl CLI.

Create a Service Account

First of all, create the service account to be used by the Kubernetes connector:

$ kubectl create serviceaccount tsco

Then, describe the service account to discover which secret is associated with it:

$ kubectl describe serviceaccount tsco

Name: tsco
Namespace: default
Labels: <none>
Annotations: <none>

Image pull secrets: <none>

Mountable secrets: tsco-token-6x9vs

Tokens: tsco-token-6x9vs

Now, describe the secret to get the corresponding token:

$ kubectl describe secret tsco-token-6x9vs

Name: tsco-token-6x9vs
Namespace: default
Labels: <none>


ca.crt: 1025 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InRzY28tdG9rZW4tNng5dnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoidHNjbyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA3YmNhNWU3LTdjM2UtMTFlNy04N2JjLTQyMDEwYThlMDAwMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnRzY28ifQ.tA6c9AsVJ0QKD0s-g9JoBdWDfhBvClJDiZGqDW6doS0rNZ5-dwXCJTss97PXfCxd5G8Q_nxg-elB8rV805K-j8ogf5Ykr-JLsAbl9OORRzcUCShYVF1r7O_-ikGg7abtIPh_mE5eAgHkJ1P6ODvaZG_0l1fak4BxZMTVfzzelvHpVlLpJZObd7eZOEtEEEkcAhZ2ajLQoxLucReG2A25_SrVJ-6c82BWBKQHcTBL9J2d0iHBHv-zjJzXHQ07F62vpc3Q6QI_rOvaJgaK2pMJYdQymFff8OfVMDQhp9LkOkxBPuJPmNHmHJxSvCcvpNtVMz-Hd495vruZFjtwYYygBQ

The token data ("eyJhb ... YygBQ") will be used by the Kubernetes integrator to authenticate against the API. Save the token, as it will be required at ETL creation time.
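Since the service account token is a JWT, its payload can be decoded (without signature verification) to confirm that the saved token actually belongs to the tsco service account. The following Python sketch is illustrative and not part of the product:

```python
import base64
import json

def jwt_payload(token):
    """Decode the (unverified) payload of a service account JWT.

    Kubernetes service account tokens are JWTs: three base64url
    segments separated by dots. This only inspects the payload;
    it does NOT verify the signature.
    """
    payload_b64 = token.split(".")[1]
    # Restore the padding stripped by base64url encoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Demonstration with a locally constructed token (header.payload.signature):
claims = {"sub": "system:serviceaccount:default:tsco"}
fake = ("x."
        + base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
        + ".y")
assert jwt_payload(fake)["sub"] == "system:serviceaccount:default:tsco"
```

Running the decoder against the real token should show a `sub` claim of the form `system:serviceaccount:<namespace>:tsco`.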

Grant the service account read-only privileges

The following section outlines an example configuration on the Kubernetes cluster that is suggested in order to grant API access to the service account used by the integrator. We provide example configurations for the two most common authorization schemes used in Kubernetes clusters, namely RBAC (Role-Based Access Control) and ABAC (Attribute-Based Access Control). To identify which mode is configured in your Kubernetes cluster, please refer to the official project documentation.

RBAC authorization

RBAC is the authorization mode enabled by default from Kubernetes 1.6 onward. To grant read-only privileges to the connector service account, a new cluster role is created. The cluster role grants the connector read-only access (get, list) to a specific set of API operations and entities.

Here is an example policy file that can be used for this purpose:

$ cat tsco-cluster-role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tsco-cluster-role
rules:
- apiGroups: [""]
  resources: ["pods", "nodes", "namespaces", "replicationcontrollers", "persistentvolumes", "resourcequotas", "limitranges", "persistentvolumeclaims"]
  verbs: ["get", "list"]
- apiGroups: ["extensions"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list"]

Now, create the cluster role and bind it to the connector service account:

$ kubectl create -f tsco-cluster-role.yml
$ kubectl create clusterrolebinding tsco-view-binding --clusterrole=tsco-cluster-role --serviceaccount=default:tsco

ABAC authorization

ABAC authorization grants access rights to users and service accounts via policies that are configured in a policy file. This file is then passed to the Kubernetes API server via the startup parameter --authorization-policy-file.

In order to allow read-only access for the integrator service account, the following policy line needs to be appended to the aforementioned policy file:

{"apiVersion": "", "kind": "Policy", "spec": {"user":"system:serviceaccount:default:tsco", "namespace": "*", "resource": "*", "apiGroup": "*", "readonly": true, "nonResourcePath": "*"}}

The API server needs to be restarted to pick up the new policy line.
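Because the ABAC policy file expects one self-contained JSON object per line, a quick sanity check of the new line before restarting the API server can avoid startup failures. A minimal Python sketch (the apiVersion value shown is the upstream v1beta1 ABAC version; adapt it to your cluster):

```python
import json

# The policy line to be appended (apiVersion per the upstream ABAC docs).
policy_line = ('{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", '
               '"kind": "Policy", "spec": {"user": "system:serviceaccount:default:tsco", '
               '"namespace": "*", "resource": "*", "apiGroup": "*", '
               '"readonly": true, "nonResourcePath": "*"}}')

policy = json.loads(policy_line)           # raises ValueError if malformed
assert policy["kind"] == "Policy"
assert policy["spec"]["readonly"] is True  # read-only: write verbs stay denied
```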

Test the configuration

After performing the above configuration, verify that the service account can successfully connect to the Kubernetes API and execute the intended operations.

To verify, run the following curl command from one of the TrueSight Capacity Optimization servers:

$ curl -s https://<KUBERNETES_API_HOSTNAME>:<KUBERNETES_API_PORT>/api/v1/nodes -k --header "Authorization: Bearer <KUBERNETES_CONNECTOR_TOKEN>" 

You should get a JSON document describing the nodes comprising the Kubernetes cluster.
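The response is a standard Kubernetes NodeList object. The sketch below parses a trimmed-down sample of such a response (field values are illustrative) to show where the node names and capacity figures used by the connector live:

```python
import json

# Sample shaped like the Kubernetes /api/v1/nodes response (a NodeList).
sample = json.loads("""
{
  "kind": "NodeList",
  "items": [
    {"metadata": {"name": "node-1"},
     "status": {"capacity": {"cpu": "4", "memory": "16418036Ki", "pods": "110"}}}
  ]
}
""")

assert sample["kind"] == "NodeList"
for node in sample["items"]:
    name = node["metadata"]["name"]
    cpu = node["status"]["capacity"]["cpu"]
    print(f"{name}: {cpu} CPUs")
```

If the curl command returns an authorization error instead of a NodeList, re-check the RBAC or ABAC configuration above.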


Heapster

To ensure scalable collection of the cluster performance metrics, the Kubernetes integrator leverages Heapster's capability to push collected metrics to external data collection systems, called "sinks". Heapster supports a variety of sinks, for example InfluxDB or OpenTSDB. The Kubernetes connector acts as a Heapster sink of type OpenTSDB.

As such, the integration architecture for the collection of performance metrics works as follows:

  • On the Kubernetes cluster: the Heapster monitoring component sends data to the Kubernetes integrator, which acts as an additional sink
  • On the TrueSight Capacity Optimization side: the Kubernetes connector listens on a TCP port on a designated TrueSight Capacity Optimization server and collects the data pushed by Heapster


In order to implement the integration, the connector requires that Heapster be configured with an additional sink of type OpenTSDB. To perform this operation, add the following option to the Heapster startup command:

--sink=opentsdb:http://<TRUESIGHT_CAPACITY_OPTIMIZATION_SERVER_HOSTNAME>:<TRUESIGHT_CAPACITY_OPTIMIZATION_KUBERNETES_CONNECTOR_PORT>?cluster=<NAME_OF_THE_KUBERNETES_CLUSTER>

where:
  • TRUESIGHT_CAPACITY_OPTIMIZATION_SERVER_HOSTNAME is the hostname (or IP address) of the TrueSight Capacity Optimization application server (or ETL engine) where the Kubernetes connector will be deployed
  • TRUESIGHT_CAPACITY_OPTIMIZATION_KUBERNETES_CONNECTOR_PORT is the TCP port on which the Kubernetes connector will listen. This must be set to the same value as the "TSCO Listening Port" in the ETL configuration page.
  • NAME_OF_THE_KUBERNETES_CLUSTER is a string representing the cluster name


In most Kubernetes clusters, Heapster runs as a deployment within the kube-system namespace. In this scenario, the above configuration change can be carried out by changing the relevant section of the Heapster deployment YAML manifest. After the change has been applied, the Heapster pod needs to be restarted for the changes to take effect. Please refer to the documentation of your Kubernetes vendor for more details on the actual Heapster configuration. 
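Heapster's OpenTSDB sink pushes datapoints over HTTP using the OpenTSDB /api/put JSON format. The sketch below parses one such datapoint; the metric name and tag set shown are illustrative assumptions, not an exhaustive description of what Heapster sends:

```python
import json

# A datapoint in OpenTSDB /api/put format (the wire format an
# OpenTSDB-type sink emits); metric name and tags are illustrative.
datapoint = json.loads("""
{"metric": "cpu/usage_rate",
 "timestamp": 1545730073,
 "value": 250,
 "tags": {"nodename": "node-1", "type": "node", "cluster": "my-cluster"}}
""")

assert datapoint["metric"] == "cpu/usage_rate"
assert datapoint["tags"]["cluster"] == "my-cluster"
```

The `cluster` tag is how the connector associates incoming datapoints with the cluster name configured in the Heapster sink option.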

Connector configuration

The following table shows the properties specific to this connector; all other generic properties are documented here.

Property Name | Value Type | Required | Description

Kubernetes Connection

Kubernetes Host | String | Yes | Kubernetes API server hostname. For OpenShift, use the OpenShift console FQDN. For Rancher, use the URL of the Kubernetes API server, with the protocol and port removed.
Kubernetes API Port | Number | Yes | Kubernetes API server port. For OpenShift, use the same port as the console (typically 8443). For Rancher, use the port of the Kubernetes API server (typically 6443).
Kubernetes API Version | Number | Yes | Kubernetes API version
Kubernetes API Protocol | String | Yes | Kubernetes API protocol, "https" in most cases
Kubernetes Authentication Token | String | Yes | Token of the integrator service account (see the data source configuration section)
TSCO Listening Port | Number | Yes | The TCP port on which the Kubernetes connector listens to receive data from Heapster. Acceptable values are 1024 to 65535. Please ensure that this port is not already in use by any other process before running the ETL.

Kubernetes Extraction Filters

Select only PODs on the following nodes | String | No | Extracts information only for the pods that are currently running on the specified nodes. Multiple node names can be separated by semicolons. Each name can contain '%' and '_' wildcards, as in a SQL LIKE expression.
Select only PODs on the following namespaces | String | No | Extracts information only for the pods that are currently running in the specified namespaces. Multiple namespace names can be separated by semicolons. Each name can contain '%' and '_' wildcards, as in a SQL LIKE expression.
Select only PODs on the following deployments | String | No | Extracts information only for the pods that are currently running in the specified deployments. Multiple deployment names can be separated by semicolons. Each name can contain '%' and '_' wildcards, as in a SQL LIKE expression.
Exclude PODs on the following nodes | String | No | Does not extract information for the pods that are currently running on the specified nodes. Multiple node names can be separated by semicolons. Each name can contain '%' and '_' wildcards, as in a SQL LIKE expression.
Exclude PODs on the following namespaces | String | No | Does not extract information for the pods that are currently running in the specified namespaces. Multiple namespace names can be separated by semicolons. Each name can contain '%' and '_' wildcards, as in a SQL LIKE expression.
Exclude PODs on the following deployments | String | No | Does not extract information for the pods that are currently running in the specified deployments. Multiple deployment names can be separated by semicolons. Each name can contain '%' and '_' wildcards, as in a SQL LIKE expression.
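The filter values above follow SQL LIKE semantics ('%' matches any run of characters, '_' exactly one character), with multiple patterns separated by semicolons. A Python sketch of how such a filter can be evaluated (illustrative only, not the connector's actual code):

```python
import re

def like_to_regex(pattern):
    """Translate a SQL LIKE pattern ('%' = any run, '_' = one char)
    into an anchored regular expression."""
    out = ""
    for ch in pattern:
        if ch == "%":
            out += ".*"
        elif ch == "_":
            out += "."
        else:
            out += re.escape(ch)
    return re.compile("^" + out + "$")

def matches_filter(name, filter_value):
    """filter_value is a semicolon-separated list of LIKE patterns."""
    return any(like_to_regex(p.strip()).match(name)
               for p in filter_value.split(";"))

assert matches_filter("frontend-v2", "frontend%;backend%")
assert not matches_filter("cache", "frontend%;backend%")
```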

The following image shows the list of options in the ETL configuration menu, including the advanced entries.

Supported Entities

The following entities are supported:

  • Kubernetes Cluster
  • Kubernetes Node
  • Kubernetes Namespace
  • Kubernetes Deployment
  • Kubernetes Pod
  • Kubernetes Persistent Volume


Supported Metrics

The following sections describe the metrics that are managed by the Kubernetes connector.

Datasource Metrics

This section describes the metrics that are imported by the Kubernetes connector from the data source and are assigned to corresponding TrueSight Capacity Optimization entities and metrics, with minimal transformations.

Data Source | Data Source Entity | Data Source Metric | BMC TrueSight Capacity Optimization Entity | BMC TrueSight Capacity Optimization Metric | Factor

Kubernetes API | /api/v1/deployments | spec.template.spec.containers.requests.cpu | Kubernetes - Deployment | BYIMAGE_CPU_REQUEST |
Kubernetes API | /api/v1/deployments | spec.template.spec.containers.requests.memory | Kubernetes - Deployment | BYIMAGE_MEM_REQUEST |
Kubernetes API | /api/v1/deployments | item.metadata.creationTimestamp | Kubernetes - Deployment | CREATION_TIME |
Kubernetes API | /api/v1/deployments | deployment.kind | Kubernetes - Deployment | DEPLOYMENT_TYPE |
Kubernetes API | /api/v1/deployments | item.status.availableReplicas | Kubernetes - Deployment | KPOD_REPLICA_UPTODATE_NUM |
Kubernetes API | /api/v1/deployments | spec.template.spec.containers.image | Kubernetes - Deployment | BYIMAGE_NUM |
Heapster | NAMESPACE | cpu_limit_gauge | Kubernetes - Namespace | CPU_LIMIT | 0.001
Kubernetes API | /api/v1/namespaces | items.status.hard["limits.cpu"] | Kubernetes - Namespace | CPU_LIMIT_MAX |
Heapster | NAMESPACE | cpu_request_gauge | Kubernetes - Namespace | CPU_REQUEST | 0.001
Kubernetes API | /api/v1/namespaces | items.status.hard["requests.cpu"] | Kubernetes - Namespace | CPU_REQUEST_MAX |
Heapster | NAMESPACE | cpu_usage_rate_gauge | Kubernetes - Namespace | CPU_USED_NUM | 0.001
Kubernetes API | /api/v1/namespaces | item.metadata.creationTimestamp | Kubernetes - Namespace | CREATION_TIME |
Kubernetes API | /api/v1/namespaces | items.status.hard["pods"] | Kubernetes - Namespace | KPOD_NUM_MAX |
Heapster | NAMESPACE | memory_limit_gauge | Kubernetes - Namespace | MEM_KLIMIT |
Kubernetes API | /api/v1/namespaces | items.spec.limits.max.memory | Kubernetes - Namespace | MEM_LIMIT_MAX |
Heapster | NAMESPACE | memory_request_gauge | Kubernetes - Namespace | MEM_REQUEST |
Kubernetes API | /api/v1/namespaces | items.spec.limits.max.memory | Kubernetes - Namespace | MEM_REQUEST_MAX |
Heapster | NAMESPACE | memory_usage_gauge | Kubernetes - Namespace | MEM_USED |
Kubernetes API | /api/v1/nodes | images | Kubernetes - Node | CONTAINER_NUM |
Heapster | NODE | cpu_limit_gauge | Kubernetes - Node | CPU_LIMIT |
Kubernetes API | /api/v1/nodes | status.capacity.cpu | Kubernetes - Node | CPU_NUM |
Heapster | NODE | cpu_request_gauge | Kubernetes - Node | CPU_REQUEST |
Heapster | NODE | cpu_usage_rate_gauge | Kubernetes - Node | CPU_USED_NUM |
Kubernetes API | /api/v1/nodes | metadata.creationTimestamp | Kubernetes - Node | CREATION_TIME |
Kubernetes API | /api/v1/nodes | status.capacity.pods | Kubernetes - Node | KPOD_NUM_MAX |
Kubernetes API | /api/v1/nodes | status.nodeInfo.kubeletVersion | Kubernetes - Node | KUBERNETES_VERSION |
Heapster | NODE | memory_working_set_gauge | Kubernetes - Node | MEM_ACTIVE |
Heapster | NODE | memory_limit_gauge | Kubernetes - Node | MEM_KLIMIT |
Heapster | NODE | memory_major_page_faults_rate_gauge | Kubernetes - Node | MEM_PAGE_MAJOR_FAULT_RATE |
Heapster | NODE | memory_request_gauge | Kubernetes - Node | MEM_REQUEST |
Heapster | NODE | memory_usage_gauge | Kubernetes - Node | MEM_USED |
Heapster | NODE | network_rx_rate_gauge | Kubernetes - Node | NET_IN_BYTE_RATE |
Heapster | NODE | network_rx_errors_rate_gauge | Kubernetes - Node | NET_IN_ERROR_RATE |
Heapster | NODE | network_tx_rate_gauge | Kubernetes - Node | NET_OUT_BYTE_RATE |
Heapster | NODE | network_tx_errors_rate_gauge | Kubernetes - Node | NET_OUT_ERROR_RATE |
Kubernetes API | /api/v1/nodes | status.nodeInfo.osImage | Kubernetes - Node | OS_TYPE |
Kubernetes API | /api/v1/nodes | status.capacity.memory | Kubernetes - Node | TOTAL_REAL_MEM |
Heapster | NODE | uptime_cumulative | Kubernetes - Node | UPTIME |
Kubernetes API | | metadata.creationTimestamp | Kubernetes - Persistent Volume | CREATION_TIME |
Kubernetes API | /api/v1/persistentvolumeclaims | spec.resources.requests.storage | Kubernetes - Persistent Volume | ST_ALLOCATED |
Kubernetes API | /api/v1/persistentvolumes | | Kubernetes - Persistent Volume | ST_PATH |
Kubernetes API | /api/v1/persistentvolumeclaims | spec.capacity.storage | Kubernetes - Persistent Volume | ST_SIZE |
Kubernetes API | /api/v1/persistentvolumes | | Kubernetes - Persistent Volume | ST_TYPE |
Kubernetes API | /api/v1/pods | spec.containers.resources.requests.cpu | Kubernetes - Pod | BYIMAGE_CPU_REQUEST |
Kubernetes API | /api/v1/pods | spec.containers.resources.limits.cpu | Kubernetes - Pod | BYIMAGE_MEM_REQUEST |
Kubernetes API | /api/v1/pods | spec.containers | Kubernetes - Pod | CONTAINER_NUM |
Heapster | POD | cpu_limit_gauge | Kubernetes - Pod | CPU_LIMIT | 0.001
Heapster | POD | cpu_request_gauge | Kubernetes - Pod | CPU_REQUEST | 0.001
Heapster | POD | cpu_usage_rate_gauge | Kubernetes - Pod | CPU_USED_NUM | 0.001
Kubernetes API | /api/v1/pods | metadata.creationTimestamp | Kubernetes - Pod | CREATION_TIME |
Kubernetes API | /api/v1/pods | status.hostIP | Kubernetes - Pod | HOST_NAME |
Kubernetes API | /api/v1/pods | status.phase | Kubernetes - Pod | KPOD_STATUS |
Heapster | POD | memory_working_set_gauge | Kubernetes - Pod | MEM_ACTIVE |
Heapster | POD | memory_limit_gauge | Kubernetes - Pod | MEM_KLIMIT |
Heapster | POD | memory_major_page_faults_rate_gauge | Kubernetes - Pod | MEM_PAGE_MAJOR_FAULT_RATE |
Heapster | POD | memory_request_gauge | Kubernetes - Pod | MEM_REQUEST |
Heapster | POD | memory_usage_gauge | Kubernetes - Pod | MEM_USED |
Heapster | POD | network_rx_rate_gauge | Kubernetes - Pod | NET_IN_BYTE_RATE |
Heapster | POD | network_tx_rate_gauge | Kubernetes - Pod | NET_OUT_BYTE_RATE |
Heapster | POD | network_rx_errors_rate_gauge | Kubernetes - Pod | NET_IN_ERROR_RATE |
Heapster | POD | network_tx_errors_rate_gauge | Kubernetes - Pod | NET_OUT_ERROR_RATE |
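The 0.001 factor shown for the Heapster CPU gauges reflects a unit conversion: Heapster reports CPU quantities in millicores, while the corresponding TrueSight Capacity Optimization metrics are expressed in cores. A one-line illustration:

```python
# Heapster CPU gauges are expressed in millicores; the 0.001 factor
# in the table converts them to cores for the TSCO metric.
FACTOR = 0.001
cpu_usage_rate_gauge = 250              # millicores, as pushed by Heapster
cpu_used_num = cpu_usage_rate_gauge * FACTOR
assert cpu_used_num == 0.25             # cores (CPU_USED_NUM)
```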

Derived Metrics

This section describes the metrics that are derived by the Kubernetes connector from the data source metrics for the purpose of supporting a wide range of capacity management use cases and analyses.

Data Source | Data Source Entity | Data Source Metric | Target Entity | Target Metric | Aggregation Type

TSCO | Kubernetes - Namespace | BYIMAGE_NUM | Kubernetes - Cluster | BYIMAGE_NUM | SUM
TSCO | Kubernetes - Namespace | BYSTATUS_KPOD_NUM | Kubernetes - Cluster | BYSTATUS_KPOD_NUM | SUM
TSCO | Kubernetes - Namespace | CONTAINER_NUM | Kubernetes - Cluster | CONTAINER_NUM | SUM
TSCO | Kubernetes - Node | CPU_LIMIT | Kubernetes - Cluster | CPU_LIMIT | SUM
TSCO | Kubernetes - Node | CPU_NUM | Kubernetes - Cluster | CPU_NUM | SUM
TSCO | Kubernetes - Node | CPU_REQUEST | Kubernetes - Cluster | CPU_REQUEST | SUM
TSCO | Kubernetes - Node | CPU_USED_NUM | Kubernetes - Cluster | CPU_USED_NUM | SUM
TSCO | Kubernetes - Namespace | DEPLOYMENT_NUM | Kubernetes - Cluster | DEPLOYMENT_NUM | SUM
TSCO | Kubernetes - Namespace | KPOD_NUM | Kubernetes - Cluster | KPOD_NUM | SUM
TSCO | Kubernetes - Node | KPOD_NUM_MAX | Kubernetes - Cluster | KPOD_NUM_MAX | SUM
TSCO | Kubernetes - Node | MEM_ACTIVE | Kubernetes - Cluster | MEM_ACTIVE | SUM
TSCO | Kubernetes - Node | MEM_KLIMIT | Kubernetes - Cluster | MEM_KLIMIT | SUM
TSCO | Kubernetes - Node | MEM_REQUEST | Kubernetes - Cluster | MEM_REQUEST | SUM
TSCO | Kubernetes - Node | MEM_USED | Kubernetes - Cluster | MEM_USED | SUM
TSCO | Kubernetes - Namespace | SECRET_NUM | Kubernetes - Cluster | SECRET_NUM | SUM
TSCO | Kubernetes - Namespace | SERVICE_NUM | Kubernetes - Cluster | SERVICE_NUM | SUM
TSCO | Kubernetes - Persistent Volume | ST_ALLOCATED | Kubernetes - Cluster | ST_ALLOCATED | SUM
TSCO | Kubernetes - Persistent Volume | ST_SIZE | Kubernetes - Cluster | ST_SIZE | SUM
TSCO | Kubernetes - Node | TOTAL_REAL_MEM | Kubernetes - Cluster | TOTAL_REAL_MEM | SUM
TSCO | Kubernetes - Node | | Kubernetes - Node | CPU_UTIL | -
TSCO | Kubernetes - Pod | - | Kubernetes - Node | KPOD_NUM | COUNT
TSCO | Kubernetes - Pod | - | Kubernetes - Node | BYIMAGE_NUM | COUNT
TSCO | Kubernetes - Node | MEM_USED / TOTAL_REAL_MEM | Kubernetes - Node | MEM_UTIL |
TSCO | Kubernetes - Pod | CONTAINER_NUM | Kubernetes - Namespace | BYIMAGE_NUM | SUM
TSCO | Kubernetes - Pod | KPOD_STATUS | Kubernetes - Namespace | BYSTATUS_KPOD_NUM | SUM
TSCO | Kubernetes - Pod | CONTAINER_NUM | Kubernetes - Namespace | CONTAINER_NUM | SUM
TSCO | Kubernetes - Pod | - | Kubernetes - Namespace | KPOD_NUM | COUNT
TSCO | Kubernetes - Namespace | CPU_USED_NUM / CPU_LIMIT | Kubernetes - Namespace | CPU_UTIL |
TSCO | Kubernetes - Pod | spec.template.spec.containers + replicas | Kubernetes - Deployment | CONTAINER_NUM |
TSCO | Kubernetes - Pod | CONTAINER_K_CPU_LIMIT | Kubernetes - Deployment | CPU_LIMIT | SUM
TSCO | Kubernetes - Pod | cpu_used_num | Kubernetes - Deployment | CPU_USED_NUM | SUM
TSCO | Kubernetes - Pod | CONTAINER_K_MEM_LIMIT | Kubernetes - Deployment | MEM_KLIMIT | SUM
TSCO | Kubernetes - Pod | mem_used | Kubernetes - Deployment | MEM_USED | SUM
TSCO | Kubernetes - Pod | NET_IN_BYTE_RATE | Kubernetes - Deployment | NET_IN_BYTE_RATE | SUM
TSCO | Kubernetes - Pod | NET_IN_ERROR_RATE | Kubernetes - Deployment | NET_IN_ERROR_RATE | SUM
TSCO | Kubernetes - Pod | NET_OUT_ERROR_RATE | Kubernetes - Deployment | NET_OUT_ERROR_RATE | SUM
TSCO | Kubernetes - Pod | NET_OUT_BYTE_RATE | Kubernetes - Deployment | NET_OUT_BYTE_RATE | SUM
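The SUM rollups and ratio-based derivations above can be illustrated with a small sketch (sample values are hypothetical):

```python
# SUM-type rollup: node-level samples aggregated to a cluster-level
# metric, plus a derived utilization ratio (MEM_UTIL = MEM_USED /
# TOTAL_REAL_MEM), mirroring two rows of the table above.
nodes = [
    {"MEM_USED": 8.0, "TOTAL_REAL_MEM": 16.0},
    {"MEM_USED": 4.0, "TOTAL_REAL_MEM": 16.0},
]

cluster_mem_used = sum(n["MEM_USED"] for n in nodes)   # SUM rollup to cluster
assert cluster_mem_used == 12.0

node_mem_util = [n["MEM_USED"] / n["TOTAL_REAL_MEM"] for n in nodes]
assert node_mem_util == [0.5, 0.25]
```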

For more details about the Kubernetes entities and resource management concepts, please refer to the official project documentation.


The connector is able to replicate relationships and logical dependencies among these entities as they are found configured within the Kubernetes cluster.

In particular, the following structure is applied:

  • a Kubernetes Cluster is attached to the root of the hierarchy
  • each Kubernetes Cluster contains its own Nodes, Namespaces and Persistent Volumes
  • each Kubernetes Namespace contains its own Deployments and (standalone) Pods

The following image shows a sample hierarchy.
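The hierarchy described above can be pictured as a nested structure (all names below are illustrative):

```python
# Sketch of the hierarchy the connector replicates: a cluster owns
# nodes, namespaces and persistent volumes; a namespace owns
# deployments and standalone pods.
hierarchy = {
    "cluster": "my-cluster",
    "nodes": ["node-1", "node-2"],
    "persistent_volumes": ["pv-1"],
    "namespaces": {
        "default": {"deployments": ["frontend"], "pods": ["job-runner"]},
    },
}

assert "frontend" in hierarchy["namespaces"]["default"]["deployments"]
```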
