Moviri – K8s (Kubernetes) Prometheus Extractor
“Moviri Integrator for BMC Helix Continuous Optimization – k8s (Kubernetes) Prometheus” is an additional component of the BMC Helix Continuous Optimization product. It extracts data from the Kubernetes cluster management system, a leading solution for managing cloud-native containerized environments. Relevant capacity metrics are loaded into BMC Helix Continuous Optimization, which provides advanced analytics over the extracted data in the form of an interactive dashboard, the Kubernetes View.
The integration supports the extraction of both performance and configuration data across the different components of the Kubernetes system and can be configured via parameters that allow entity filtering and many other settings. Furthermore, the connector is able to replicate relationships and logical dependencies among entities such as clusters, nodes, namespaces, deployments, and pods.
The documentation is targeted at BMC Helix Continuous Optimization administrators who are in charge of configuring and monitoring the integration between BMC Helix Continuous Optimization and Kubernetes.
This version of the connector is compatible with BMC Helix Continuous Optimization 19.11 and onward.
If you used the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” before, expect the following changes starting from version 20.02.01:
- A new entity type, "pod workload", is imported to replace pods. If you imported pods before, the existing Kubernetes pods are no longer updated and are left in All systems and business drivers → "Unassigned".
- A new naming convention is implemented for Kubernetes controller ReplicaSets (the suffix of the name is removed, e.g., replicaset-abcd is named replicaset). All the old controllers are replaced by new controllers with the new names, and the old ones are left in All systems and business drivers → "Unassigned".
Step I. Complete the preconfiguration tasks
Step II. Configure the ETL
Step III. Run the ETL
Step IV. Verify the data collection
k8s Heapster to k8s Prometheus Migration
Steps | Details |
---|---|
Check the required API version is supported | |
Check supported versions of data source configuration | |
Generate access to Prometheus API | The access to Prometheus API depends on the Kubernetes distribution and the Prometheus configuration. The following sections describe the standard procedure for the supported platforms. |
Verify Prometheus API access | |
Generate Access to Kubernetes API (Optional) | Access to the Kubernetes API is optional, but highly recommended. Follow these steps for Kubernetes API access on OpenShift and GKE: |
Verify Kubernetes API access | To verify, execute a curl Linux command from one of the BMC Helix Continuous Optimization servers (see the example sketch after this table): |
Kubernetes 1.16 and onward Workaround | For Kubernetes version 1.16 and onward, there is a known incompatibility between TLS v1.3 and the Java version used by BMC Helix Continuous Optimization. A workaround is available to use TLS v1.2 instead of the default TLS v1.3 for the connection: add the hidden property "prometheus.use.tlsv12" in the ETL configuration and set its value to "true". If you still encounter connection errors, a PKIX issue may also need to be fixed. To fix the "PKIX path validation failed: java.security.cert.CertPathValidatorException: signature check failed" error, follow these steps. |
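For reference, the following is a minimal verification sketch using curl. The hostnames, port, and token value are placeholders, and the exact endpoints that respond without authentication depend on your distribution and Prometheus configuration:

```bash
# Placeholder values: replace PROMETHEUS_URL, K8S_API_URL, and TOKEN with your own.
PROMETHEUS_URL="https://prometheus.example.com"
K8S_API_URL="https://console.ose.bmc.com:8443"
TOKEN="<service-account-bearer-token>"

# Verify Prometheus API access: a working endpoint returns {"status":"success", ...}
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${PROMETHEUS_URL}/api/v1/query?query=up"

# Verify Kubernetes API access: a service account with read permissions can list nodes
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${K8S_API_URL}/api/v1/nodes"
```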
A. Configuring the basic properties
Some of the basic properties display default values. You can modify these values if required.
To configure the basic properties:
1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
2. On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties. You must configure properties in the following tabs: Run configuration, Entity catalog, and Connection.
3. On the Run Configuration tab, select Moviri - k8s Prometheus Extractor from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.
4. Click the Entity catalog tab, and select one of the following options:
   - Shared Entity Catalog: From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
   - Private Entity Catalog: Select if this is the only ETL that extracts data from the k8s Prometheus resources.
5. Click the Connection tab, and configure the following properties:
Property Name | Value Type | Required? | Default | Description |
---|---|---|---|---|
Prometheus – API URL | String | Yes | | Prometheus API URL (http(s)://hostname:port). The port can be omitted. |
Prometheus – API Version | String | Yes | v1 | Prometheus API version. This should match the Kubernetes API version if one is used. |
Prometheus – API Authentication Method | String | Yes | | Prometheus API authentication method. Four methods are supported: Authentication Token (Bearer), Basic Authentication (username/password), None (no authentication), and OpenShift OAuth (username/password). |
Prometheus – Username | String | No | | Prometheus API username if the authentication method is set to Basic Authentication. |
Prometheus – Password | String | No | | Prometheus API password if the authentication method is set to Basic Authentication. |
Prometheus – API Authentication Token | String | No | | Prometheus API authentication token (Bearer token) if the authentication method is set to Authentication Token. |
OAuth - URL | String | No | | URL for the OpenShift OAuth server (http(s)://hostname:port). Only applicable if the authentication method is set to "OpenShift OAuth". The port can be omitted. |
OAuth - Username | String | No | | OpenShift OAuth username if the authentication method is set to OpenShift OAuth. |
OAuth - Password | String | No | | OpenShift OAuth password if the authentication method is set to OpenShift OAuth. |
Prometheus – Use Proxy Server | Boolean | No | | Whether a proxy server is used; applicable only when the authentication method is Basic Authentication or None. The proxy server supports HTTP, and only Basic Authentication or no authentication. |
Prometheus - Proxy Server Host | String | No | | Proxy server host name. |
Prometheus - Proxy Server Port | Number | No | 8080 | Proxy server port. |
Prometheus - Proxy Username | String | No | | Proxy server username. |
Prometheus - Proxy Password | String | No | | Proxy server password. |
Use Kubernetes API | Boolean | Yes | | Whether to use the Kubernetes API. |
Kubernetes Host | String | Yes | | Kubernetes API server host name. For OpenShift, use the OpenShift console FQDN (e.g., console.ose.bmc.com). |
Kubernetes API Port | Number | Yes | | Kubernetes API server port. For OpenShift, use the same port as the console (typically 8443). |
Kubernetes API Protocol | String | Yes | HTTPS | Kubernetes API protocol, "HTTPS" in most cases. |
Kubernetes Authentication Token | String | No | | Token of the integrator service account (see the data source configuration section). If the authentication method is set to OpenShift OAuth, you do not need to put a token in this field. Make sure the user account has the permissions specified in the data source configuration section for Kubernetes API access. |
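As a reference for the Kubernetes Authentication Token field, the following is a minimal sketch of creating a read-only service account and retrieving its token with kubectl. The account name, namespace, and cluster role below are illustrative assumptions; use whatever the data source configuration section prescribes for your environment:

```bash
# Illustrative names only: "bco-integrator" and the "view" cluster role are assumptions.
kubectl create serviceaccount bco-integrator -n kube-system
kubectl create clusterrolebinding bco-integrator-view \
  --clusterrole=view --serviceaccount=kube-system:bco-integrator

# On clusters that auto-generate token secrets (pre-1.24), read the token from the secret:
SECRET=$(kubectl get serviceaccount bco-integrator -n kube-system -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -n kube-system -o jsonpath='{.data.token}' | base64 -d
```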
6. Click the Prometheus Extraction tab, and configure the following properties:
Property Name | Value Type | Required? | Default | Description |
---|---|---|---|---|
Data Resolution | String | Yes | 5 minutes | Resolution of the data pulled from Prometheus into BMC Helix Continuous Optimization. The default is 5 minutes; any value less than 5 minutes is reset to the default. |
Cluster Name | String | No | | If the Kubernetes API is not in use, the cluster name must be specified. |
Default Last Counter | String | Yes | | Default earliest time (in UTC) from which the connector pulls data. Format: YYYY-MM-DDTHH24:MI:SS.SSSZ, for example 2019-01-01T19:00:00.000Z. |
Maximum Hours to Extract | String | No | 120 | Maximum number of hours of data the connector can pull. If left empty, the default of 5 days from the default last counter is used. Note that if no data is found on Prometheus, the integration updates the last counter to the end of the maximum extraction period, starting from the last counter. |
Lag hour to the current time | String | No | | Lag, in hours, relative to the current time. |
Extract PODWorkload | Boolean | Yes | No | Whether to import pod workloads. |
Extract Controller Metrics | Boolean | Yes | Yes | Whether to import controller (Deployment, DaemonSet, ReplicaSet, and so on) metrics. If neither POD Workload nor Controller is selected, the ETL does not create any controller metrics. If Controllers are not selected but POD Workload is selected for import, the ETL creates empty controller entities. |
Do you want to pull cluster, namespace from API? | Boolean | No | Yes | Whether to pull performance metrics for namespaces, controllers, and the cluster directly from the API. The default is Yes. This can be changed to No if extraction from the Prometheus API takes too long. If set to No: 1) cluster performance metrics are aggregated from the nodes' performance metrics; 2) if pod workloads are extracted, namespace and container performance metrics are aggregated from pod workloads; if pod workloads are not extracted, namespace metrics are aggregated from controller metrics, while controller metrics still come directly from the API. |
Import by image metric on all entities? | String | No | Yes | Whether to import BYIMAGE metrics on clusters, namespaces, controllers, and nodes. |
Import podworkload metric highmark counters? | String | No | No | Whether to import highmark metrics for CPU_USED_NUM and MEM_USED on pod workloads for by-container metrics. |
Import only the following tags (semi-colon separated list) | String | No | | Import only the specified tag types (Kubernetes label keys). Specify the keys of the labels, either in the original format that appears in the Kubernetes API or with underscores (_) as delimiters. For example, node_role_kubernetes_io_compute and node-role.kubernetes.io/compute are equivalent and are both imported as node_role_kubernetes_io_compute. |
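To illustrate how the Data Resolution and Default Last Counter values map onto Prometheus range queries, the sketch below runs a range query with a 5-minute step over a window that starts at the last counter. The URL and the query expression are placeholders for illustration, not necessarily the queries the connector issues:

```bash
# Placeholder URL and query; step=300 corresponds to a 5-minute data resolution,
# and start/end use the same UTC format as the Default Last Counter property.
curl -k -G "https://prometheus.example.com/api/v1/query_range" \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)' \
  --data-urlencode 'start=2019-01-01T19:00:00.000Z' \
  --data-urlencode 'end=2019-01-01T21:00:00.000Z' \
  --data-urlencode 'step=300'
```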
The following image shows a Run Configuration example for the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus":
7. (Optional) Enable TLS v1.2 for Kubernetes 1.16 and above
If you are using Kubernetes 1.16, or OpenShift 4 and above, there is an incompatibility between Java and TLS v1.3. A workaround is provided to use TLS v1.2 for the connection. To add the hidden property:
a. At the very bottom of the ETL configuration page, there is a link for manually modifying the ETL properties:
b. On the manual property editing page, add the property "prometheus.use.tlsv12".
c. Set the value to "true", and save the change.
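Before adding the hidden property, you can check from the BMC Helix Continuous Optimization server that the Kubernetes endpoint accepts a TLS v1.2 handshake. This curl sketch is only a connectivity check (the host and port are placeholders), not part of the ETL configuration itself:

```bash
# Force curl to negotiate TLS v1.2 only; a successful handshake confirms the workaround can apply.
curl -k -v --tlsv1.2 --tls-max 1.2 "https://console.ose.bmc.com:8443/version"
```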
8. (Optional) Override the default values of the properties:
(Optional) B. Configuring the advanced properties
You can configure the advanced properties to change the way the ETL works or to collect additional metrics.
To configure the advanced properties:
1. On the Add ETL page, click Advanced.
2. Configure the following properties:
3. Click Save. The ETL tasks page shows the details of the newly configured Prometheus ETL:
After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:
A. Simulation mode: Only validates the connection to the data source; it does not collect data. Use this mode when you run the ETL for the first time or after you make any changes to the ETL configuration.
B. Production mode: Collects data from the data source.
A. Running the ETL in the simulation mode
To run the ETL in the simulation mode:
- In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
- On the ETL tasks page, click the ETL. The ETL details are displayed.
- In the Run configurations table, click Edit to modify the ETL configuration settings.
- On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
- Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
- On the ETL tasks page, check the ETL run status in the Last exit column. OK indicates that the ETL ran without any errors; you are ready to run the ETL in the production mode.
- If the ETL run status is Warning, Error, or Failed:
- On the ETL tasks page, click in the last column of the ETL name row.
- Check the log and reconfigure the ETL if required.
- Run the ETL again.
- Repeat these steps until the ETL run status changes to OK.
B. Running the ETL in the production mode
You can run the ETL manually when required or schedule it to run at a specified time.
Running the ETL manually
- On the ETL tasks page, click the ETL. The ETL details are displayed.
- In the Run configurations table, click Edit to modify the ETL configuration settings. The Edit run configuration page is displayed.
- On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
- To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
When the ETL is run, it collects data from the source and transfers it to the database.
Scheduling the ETL run
By default, the ETL is scheduled to run daily. You can customize this schedule by changing the frequency and period of running the ETL.
To configure the ETL run schedule:
- On the ETL tasks page, click the ETL, and click Edit Task.
- On the Edit task page, do the following, and click Save:
  - Specify a unique name and description for the ETL task.
  - In the Maximum execution time before warning field, specify the duration for which the ETL must run before generating warnings or alerts, if any.
  - Select a predefined or custom frequency for starting the ETL run. The default selection is Predefined.
  - Select the task group and the scheduler to which you want to assign the ETL task.
- Click Schedule. A message confirming the scheduling job submission is displayed.
When the ETL runs as scheduled, it collects data from the source and transfers it to the database.
Verify that the ETL ran successfully and check whether the k8s Prometheus data is refreshed in the Workspace.
To verify whether the ETL ran successfully:
- In the console, click Administration > ETL and System Tasks > ETL tasks.
- In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.
- In the console, click Workspace.
- Expand (Domain name) > Systems > k8s Prometheus > Instances.
- In the left pane, verify that the hierarchy displays the new and updated Prometheus instances.
- Click a k8s Prometheus entity, and click the Metrics tab in the right pane.
- Check if the Last Activity column in the Configuration metrics and Performance metrics tables displays the current date.
k8s Prometheus Workspace Entity | Details |
---|---|
Entities | |
Hierarchy | |
Configuration and Performance metrics mapping | For a detailed mapping between Prometheus API queries and each metric, check here (this excludes Derived metrics and metrics from the Kubernetes API). |
Derived Metrics | |
Lookup Field Considerations | |
Tag Mapping (Optional) | The “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” supports Kubernetes labels as tags in BMC Helix Continuous Optimization. |
k8s Heapster to k8s Prometheus Migration
The “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” supports a seamless transition from entities and metrics imported by the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Heapster”. Follow these steps to migrate between the two integrators:
Pod Optimization - Pod Workloads replace Pods
The “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” introduces a new entity, "pod workload", from v20.02.01. A pod workload is an aggregated entity that groups the pods running on the same controller. The pod workload is a direct child of the controller that the pods run on and uses the same name as the parent controller. Pods themselves are no longer imported.
Common Issues
Error Messages / Behaviors | Cause | Solution |
---|---|---|
Query errors: HTTP 422 Unprocessable Entity | These errors appear occasionally, and their number can vary considerably between runs. They are usually caused by Prometheus rebuilding or restarting; for a couple of days after a Prometheus rebuild or reload you may see this error. | They usually go away on their own as Prometheus becomes more stable. |
Prometheus is running fine but no data is pulled | This is usually caused by the last counter being set too far from today's date. Prometheus has a data retention period with a default value of 15 days, which can be configured. If the ETL is set to extract data that is older than the data retention period, no data is returned. Prometheus's status page shows the retention value in the "storage retention" field. | Modify the default last counter to a more recent date. |
504 Gateway Timeouts | These 504 timeout query errors (the server did not respond in time) are related to the route timeout used on OpenShift. This can be configured on a route-by-route basis; for example, the Prometheus route timeout can be increased to the 2-minute timeout that is also configured on the Prometheus backend. Follow this link to understand what the configured timeout is and how it can be increased. | Increase the timeout period on the OpenShift side. |
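If you suspect the retention issue described above, the Prometheus status flags endpoint also reports the configured retention. A minimal check might look like the following (the URL is a placeholder):

```bash
# The storage.tsdb.retention.time flag holds the retention window (default 15d).
curl -k -s "https://prometheus.example.com/api/v1/status/flags" | grep -o '"storage.tsdb.retention[^,]*'
```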
Data Verification
The following sections provide some indications on how to verify on Prometheus that all the prerequisites are in place before starting to collect data.
Verify Prometheus Build information
Verify the Prometheus "Last successful configuration reload" (from the Prometheus UI, check "Status > Runtime & Build Information").
If the "Last successful configuration reload" reports less than 3 days, ask the customer to evaluate the status of the integration over the next 2-3 days.
Verify the Status of Prometheus Target Services
Verify the status of the Prometheus targets (from the Prometheus UI, check "Status > Targets"):
- Check the status of "node-exporter" (there should be 1 instance running for each node in the cluster)
- Check the status of "kube-state-metrics" (there should be at least 1 instance running)
- Check the status of "kubelet" (there should be at least 1 instance running for each node in the cluster)
Verify data availability in Prometheus Tables
Verify that the following Prometheus tables contain data (from the Prometheus UI):
- "kube_pod_container_info" when Pod Workload, Controller, or Namespace metrics are missing (but also Cluster and Node for the Requests and Limits metrics)
- "kube_node_info" when Node and Cluster metrics are missing