Moviri – K8s (Kubernetes) Prometheus Extractor
“Moviri Integrator for BMC Helix Continuous Optimization – k8s (Kubernetes) Prometheus” is an additional component of the BMC Helix Continuous Optimization product. It extracts data from the Kubernetes cluster management system, a leading solution for managing cloud-native containerized environments. Relevant capacity metrics are loaded into BMC Helix Continuous Optimization, which provides advanced analytics over the extracted data in the form of an interactive dashboard, the Kubernetes View.
The integration supports the extraction of both performance and configuration data across the different components of the Kubernetes system and can be configured via parameters that allow entity filtering and many other settings. Furthermore, the connector replicates relationships and logical dependencies among entities such as clusters, nodes, namespaces, deployments, and pods.
This documentation is targeted at BMC Helix Continuous Optimization administrators who are in charge of configuring and monitoring the integration between BMC Helix Continuous Optimization and Kubernetes.
Step I. Complete the preconfiguration tasks
Step II. Configure the ETL
Step III. Run the ETL
Step IV. Verify the data collection
k8s Heapster to k8s Prometheus Migration
Pod Optimization - Pod workloads replace pods
Step I. Complete the preconfiguration tasks
Step II. Configure the ETL
A. Configuring the basic properties
Some of the basic properties display default values. You can modify these values if required.
To configure the basic properties:
1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
2. On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties. You must configure properties in the following tabs: Run configuration, Entity catalog, Connection, and Prometheus Extraction.
3. On the Run Configuration tab, select Moviri - k8s Prometheus Extractor from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.
4. Click the Entity catalog tab, and select one of the following options:
   - Shared Entity Catalog: From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
   - Private Entity Catalog: Select this option if this is the only ETL that extracts data from the k8s Prometheus resources.
5. Click the Connection tab, and configure the following properties:
6. Click the Prometheus Extraction tab, and configure the following properties:
The following image shows a Run Configuration example for the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus”:
7. (Optional) Enable TLS v1.2 for Kubernetes 1.16 and above
If you are using Kubernetes 1.16, or OpenShift 4 and above, there is an incompatibility between Java and TLS v1.3. As a workaround, force the connection to use TLS v1.2. To add the hidden property:
a. At the very bottom of the ETL configuration page, click the link to manually modify the ETL properties.
b. On the manual property editing page, add the property "prometheus.use.tlsv12".
c. Set the value to "true", and save the change.
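To confirm outside the ETL that the Prometheus endpoint accepts a TLS v1.2 handshake, a minimal Python sketch can pin a client context to that version, mirroring what the "prometheus.use.tlsv12" property makes the connector do. The host name below is a placeholder, not part of the product:

```python
import ssl

# Client context restricted to TLS v1.2 only, sidestepping the
# Java/TLS v1.3 incompatibility described above.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.maximum_version = ssl.TLSVersion.TLSv1_2

# To probe a live endpoint (placeholder host), you could then run:
# import socket
# with socket.create_connection(("prometheus.example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="prometheus.example.com") as tls:
#         print(tls.version())  # a healthy endpoint reports "TLSv1.2"
```

If the handshake fails with this context but succeeds without the version pin, the endpoint does not accept TLS v1.2 and the workaround will not help.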
8. (Optional) Override the default values of the properties:
(Optional) B. Configuring the advanced properties
You can configure the advanced properties to change the way the ETL works or to collect additional metrics.
To configure the advanced properties:
1. On the Add ETL page, click Advanced.
2. Configure the following properties:
3. Click Save. The ETL tasks page shows the details of the newly configured Prometheus ETL:
Step III. Run the ETL
After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:
A. Simulation mode: Validates the connection to the data source but does not collect data. Use this mode when you run the ETL for the first time or after you make any changes to the ETL configuration.
B. Production mode: Collects data from the data source.
A. Running the ETL in the simulation mode
To run the ETL in the simulation mode:
- In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
- On the ETL tasks page, click the ETL. The ETL details are displayed.
- In the Run configurations table, click Edit to modify the ETL configuration settings.
- On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
- Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
- On the ETL tasks page, check the ETL run status in the Last exit column.
  - OK: Indicates that the ETL ran without any error. You are ready to run the ETL in the production mode.
  - If the ETL run status is Warning, Error, or Failed:
    - On the ETL tasks page, click the icon in the last column of the ETL name row to view the log.
    - Check the log and reconfigure the ETL if required.
    - Run the ETL again.
    - Repeat these steps until the ETL run status changes to OK.
B. Running the ETL in the production mode
You can run the ETL manually when required or schedule it to run at a specified time.
Running the ETL manually
- On the ETL tasks page, click the ETL. The ETL details are displayed.
- In the Run configurations table, click Edit to modify the ETL configuration settings. The Edit run configuration page is displayed.
- On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
- To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
When the ETL is run, it collects data from the source and transfers it to the database.
Scheduling the ETL run
By default, the ETL is scheduled to run daily. You can customize this schedule by changing the frequency and period of running the ETL.
To configure the ETL run schedule:
- On the ETL tasks page, click the ETL, and click Edit Task. The ETL details are displayed.
- On the Edit task page, do the following, and click Save:
- Specify a unique name and description for the ETL task.
- In the Maximum execution time before warning field, specify the duration for which the ETL must run before generating warnings or alerts, if any.
- Select a predefined or custom frequency for starting the ETL run. The default selection is Predefined.
- Select the task group and the scheduler to which you want to assign the ETL task.
- Click Schedule. A message confirming the scheduling job submission is displayed.
When the ETL runs as scheduled, it collects data from the source and transfers it to the database.
Step IV. Verify data collection
Verify that the ETL ran successfully and check whether the k8s Prometheus data is refreshed in the Workspace.
To verify whether the ETL ran successfully:
- In the console, click Administration > ETL and System Tasks > ETL tasks.
- In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.
To verify that the k8s Prometheus data is refreshed:
- In the console, click Workspace.
- Expand (Domain name) > Systems > k8s Prometheus > Instances.
- In the left pane, verify that the hierarchy displays the new and updated Prometheus instances.
- Click a k8s Prometheus entity, and click the Metrics tab in the right pane.
- Check if the Last Activity column in the Configuration metrics and Performance metrics tables displays the current date.
k8s Heapster to k8s Prometheus Migration
The “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” supports a seamless transition from entities and metrics imported by the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Heapster”. Follow these steps to migrate between the two integrators:
Pod Optimization - Pod Workloads replace Pods
The “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” introduces a new entity, "pod workload", as of v20.02.01. A pod workload is an aggregated entity that groups the pods running on the same controller. The pod workload is a direct child of the controller on which its pods run, and it uses the same name as the parent controller. Individual pods are dropped at the same time.
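As a toy illustration of the aggregation described above (not the connector's actual code), the sketch below groups hypothetical pods by their owning controller and sums a metric; the pod and controller names and CPU figures are invented:

```python
from collections import defaultdict

# Hypothetical sample: (pod_name, owning_controller, cpu_used_cores).
pods = [
    ("web-7d9f-abc12", "web", 0.20),
    ("web-7d9f-def34", "web", 0.25),
    ("batch-1-xyz99", "batch-1", 0.10),
]

# Each pod workload takes the name of its parent controller and
# aggregates the metrics of the pods it replaces.
workloads = defaultdict(float)
for pod_name, controller, cpu in pods:
    workloads[controller] += cpu

print(dict(workloads))  # e.g. two "web" pods collapse into one "web" workload
```

The individual pod names disappear from the hierarchy; only the controller-named workloads remain, which is why pod-level entities are no longer imported.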
Common Issues
| Error Messages / Behaviors | Cause | Solution |
| --- | --- | --- |
| Query errors: HTTP 422 Unprocessable Entity | These errors appear occasionally, and their number can vary a lot from run to run. They are usually caused by Prometheus rebuilding or restarting: for a couple of days after a rebuild or reload, the errors keep showing up. | They usually go away on their own as Prometheus becomes more stable. |
| Prometheus is running fine but no data is pulled | This is usually caused by the last counter being set too far from today's date. Prometheus has a data retention period with a default value of 15 days, which can be configured. If the ETL is set to extract data older than the retention period, no data is returned. The Prometheus status page shows the retention value in the "storage retention" field. | Modify the default last counter to a more recent date. |
| 504 Gateway Timeouts | These 504 timeout query errors (the server did not respond in time) are related to the route timeout used on OpenShift. The timeout can be configured on a per-route basis; for example, the Prometheus route timeout can be increased to the 2-minute timeout that is also configured on the Prometheus backend. Check the OpenShift documentation to see the configured timeout and how it can be increased. | Increase the timeout period on the OpenShift side. |
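The "no data is pulled" case from the table above can be sanity-checked with a short sketch: given the last counter date and the retention period, decide whether the ETL can still find data. The 15-day default comes from Prometheus; the dates below are illustrative:

```python
from datetime import date, timedelta

def within_retention(last_counter: date, retention_days: int = 15,
                     today: date = None) -> bool:
    """True if the ETL's last counter still falls inside the Prometheus
    data retention window, i.e. the next run can extract data."""
    today = today or date.today()
    return today - last_counter <= timedelta(days=retention_days)

# Illustrative values: a counter 30 days old is past a 15-day retention,
# so the ETL would pull nothing until the counter is moved forward.
ref = date(2021, 3, 31)
print(within_retention(date(2021, 3, 1), today=ref))   # 30 days back: no data
print(within_retention(date(2021, 3, 25), today=ref))  # 6 days back: data available
```

Compare the result against the "storage retention" value shown on the Prometheus status page before adjusting the last counter.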
Data Verification
The following sections provide some indications on how to verify in Prometheus that all the prerequisites are in place before starting data collection.
Verify Prometheus Build information
Verify the Prometheus "Last successful configuration reload" (from Prometheus UI, check "Status > Runtime & Build Information")
If the "Last successful configuration reload" reports less than 3 days, ask the customer to evaluate the status of the integration over the next 2 to 3 days.
Verify the Status of Prometheus Target Services
Verify the status of the Prometheus targets (from the Prometheus UI, check "Status > Targets")
- Check the status of "node-exporter" (there should be 1 instance running for each node in the cluster)
- Check the status of "kube-state-metrics" (there should be at least 1 instance running)
- Check the status of "kubelet" (there should be at least 1 instance running for each node in the cluster)
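The same target checks can be scripted against the Prometheus HTTP API (`GET /api/v1/targets`) rather than the UI. The sketch below parses a trimmed, invented sample of that response and counts healthy targets per scrape job, using the job names listed above:

```python
import json
from collections import Counter

# Trimmed sample of a GET <prometheus>/api/v1/targets response body.
sample = '''{
  "status": "success",
  "data": {"activeTargets": [
    {"labels": {"job": "node-exporter"}, "health": "up"},
    {"labels": {"job": "node-exporter"}, "health": "up"},
    {"labels": {"job": "kube-state-metrics"}, "health": "up"},
    {"labels": {"job": "kubelet"}, "health": "down"}
  ]}
}'''

# Count "up" targets per scrape job: node-exporter and kubelet should
# match the node count, kube-state-metrics needs at least one instance.
up = Counter(
    t["labels"]["job"]
    for t in json.loads(sample)["data"]["activeTargets"]
    if t["health"] == "up"
)
print(up)  # in this sample, kubelet has zero healthy targets: a problem
```

Against a live Prometheus you would fetch the same JSON from the targets endpoint and apply the checks from the bullets above to the resulting counts.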
Verify data availability in Prometheus Tables
Verify that the following Prometheus tables contain data (from the Prometheus UI):
- "kube_pod_container_info": if this table is missing, Pod Workload, Controller, and Namespace metrics are unavailable (and also the Cluster and Node Requests and Limits metrics)
- "kube_node_info": if this table is missing, Node and Cluster metrics are unavailable.
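Data availability in these tables can also be checked programmatically via the Prometheus instant-query API. A sketch, with the base URL as a placeholder and canned responses standing in for a live call:

```python
import json
import urllib.parse

def presence_query_url(base_url: str, metric: str) -> str:
    """Build an instant-query URL that counts series for a metric."""
    params = urllib.parse.urlencode({"query": f"count({metric})"})
    return f"{base_url}/api/v1/query?{params}"

def has_data(response_body: str) -> bool:
    """True if the query succeeded and returned at least one sample."""
    body = json.loads(response_body)
    return body.get("status") == "success" and bool(body["data"]["result"])

# Placeholder host; the metric name is one of the tables listed above.
url = presence_query_url("https://prometheus.example.com",
                         "kube_pod_container_info")
print(url)

# Canned responses: a non-empty result vs. an empty one (metric missing).
print(has_data('{"status":"success","data":{"result":[{"value":[0,"42"]}]}}'))
print(has_data('{"status":"success","data":{"result":[]}}'))
```

An empty result for either metric points at a missing or unhealthy kube-state-metrics target rather than at the ETL configuration.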