Moviri Integrator for BMC Helix Capacity Optimization – k8s (Kubernetes) Prometheus
“Moviri Integrator for BMC Helix Continuous Optimization – k8s (Kubernetes) Prometheus” is an additional component of the BMC Helix Continuous Optimization product. It extracts data from the Kubernetes cluster management system, a leading solution for managing cloud-native containerized environments. Relevant capacity metrics are loaded into BMC Helix Continuous Optimization, which provides advanced analytics over the extracted data in the form of an interactive dashboard, the Kubernetes View.
The integration supports the extraction of both performance and configuration data across the different components of the Kubernetes system and can be configured via parameters that allow entity filtering and many other settings. Furthermore, the connector replicates relationships and logical dependencies among entities such as clusters, nodes, namespaces, deployments, and pods.
This documentation is targeted at BMC Helix Continuous Optimization administrators in charge of configuring and monitoring the integration between BMC Helix Continuous Optimization and Kubernetes.
Step I. Complete the preconfiguration tasks
Steps | Details |
---|---|
Check the required API version is supported | |
Check supported versions of data source configuration | |
Generate access to Prometheus API | The access to Prometheus API depends on the Kubernetes distribution and the Prometheus configuration. The following sections describe the standard procedure for the supported platforms. |
Verify Prometheus API access | |
Generate Access to Kubernetes API (Optional) | Access to the Kubernetes API is optional, but highly recommended. Follow these steps to set up Kubernetes API access on OpenShift and GKE: |
Verify Kubernetes API access | To verify access, execute the curl Linux command from one of the BMC Helix Continuous Optimization servers: |
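The curl command itself is not reproduced in this chunk. As an illustration only, the same kind of access check can be sketched in Python; the host, port, and token below are placeholders, not values from this document:

```python
import urllib.request

def build_k8s_version_request(host, port, token, protocol="https"):
    """Build an authenticated request for the Kubernetes /version
    endpoint; an HTTP 200 response confirms the token grants API
    access, while 401/403 indicates an invalid or unauthorized token."""
    url = f"{protocol}://{host}:{port}/version"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

# Placeholder host and token, for illustration only:
req = build_k8s_version_request("console.ose.bmc.com", 8443, "<token>")
# urllib.request.urlopen(req, timeout=10) would perform the actual check.
```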
A. Configuring the basic properties
Some of the basic properties display default values. You can modify these values if required.
To configure the basic properties:
- In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
- On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties. You must configure properties in the following tabs: Run configuration, Entity catalog, and Connection.
- On the Run Configuration tab, select Moviri - k8s Prometheus Extractor from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.
- Click the Entity catalog tab, and select one of the following options:
Shared Entity Catalog:
- From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
- Private Entity Catalog: Select if this is the only ETL that extracts data from the k8s Prometheus resources.
- Click the Connection tab, and configure the following properties:
Property Name | Value Type | Required? | Default | Description | |
Prometheus – API URL | String | Yes | | Prometheus API URL (http://hostname:port or https://hostname:port). The port can be omitted. |
Prometheus – API Version | String | Yes | v1 | Prometheus API version; this should match the Kubernetes API version, if one is in use. |
Prometheus – API Authentication Method | String | Yes | | Prometheus API authentication method. Three methods are supported: Authentication Token (Bearer), Basic Authentication (username/password), and None (no authentication). |
Prometheus – Username | String | No | | Prometheus API username, if the authentication method is set to Basic Authentication. |
Prometheus – Password | String | No | | Prometheus API password, if the authentication method is set to Basic Authentication. |
Prometheus – API Authentication Token | String | No | | Prometheus API authentication token (Bearer token), if the authentication method is set to Authentication Token. |
Prometheus – Use Proxy Server | Boolean | No | | Whether to use a proxy server; applicable only when the authentication method is Basic Authentication or None. The proxy server supports HTTP, and either Basic Authentication or no authentication. |
Prometheus - Proxy Server Host | String | No | | Proxy server host name. |
Prometheus - Proxy Server Port | Number | No | 8080 | Proxy server port. |
Prometheus - Proxy Username | String | No | | Proxy server username. |
Prometheus - Proxy Password | String | No | | Proxy server password. |
Use Kubernetes API | Boolean | Yes | | Whether to use the Kubernetes API. |
Kubernetes Host | String | Yes | | Kubernetes API server host name. For OpenShift, use the OpenShift console FQDN (e.g., console.ose.bmc.com). |
Kubernetes API Port | Number | Yes | | Kubernetes API server port. For OpenShift, use the same port as the console (typically 8443). |
Kubernetes API Protocol | String | Yes | HTTPS | Kubernetes API protocol, "HTTPS" in most cases. |
Kubernetes Authentication Token | String | Yes | | Token of the integrator service account (see the data source configuration section). |
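Putting the connection properties together, the request that such a connector issues against Prometheus can be sketched as follows. This is a minimal illustration only, assuming a hypothetical Prometheus endpoint; none of the function or host names below are part of the product:

```python
import base64
import urllib.parse
import urllib.request

def build_prometheus_request(api_url, promql, auth_method="None",
                             username=None, password=None,
                             token=None, api_version="v1"):
    """Map the Connection-tab properties onto a Prometheus instant query:
    api_url     -> 'Prometheus - API URL' (http/https://hostname:port)
    api_version -> 'Prometheus - API Version' (default v1)
    auth_method -> 'Prometheus - API Authentication Method'"""
    url = (f"{api_url.rstrip('/')}/api/{api_version}/query?"
           + urllib.parse.urlencode({"query": promql}))
    headers = {}
    if auth_method == "Authentication Token":
        headers["Authorization"] = f"Bearer {token}"
    elif auth_method == "Basic Authentication":
        creds = base64.b64encode(f"{username}:{password}".encode()).decode()
        headers["Authorization"] = f"Basic {creds}"
    # auth_method == "None": no Authorization header is sent
    return urllib.request.Request(url, headers=headers)

# Hypothetical endpoint and credentials, for illustration only:
req = build_prometheus_request("https://prom.example.com:9090", "up",
                               auth_method="Basic Authentication",
                               username="u", password="p")
```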
6. Click the Prometheus Extraction tab, and configure the following properties:
Property Name | Value Type | Required | Default | Description | |
Data Resolution | String | Yes | 5 minutes | Resolution of the data pulled from Prometheus into BMC Helix Continuous Optimization. Any value less than 5 minutes is raised to the default of 5 minutes. |
Cluster Name | String | No | | If the Kubernetes API is not in use, the cluster name must be specified. |
Default Last Counter | String | Yes | | Earliest time (UTC) from which the connector pulls data. Format: YYYY-MM-DDTHH24:MI:SS.SSSZ, for example, 2019-01-01T19:00:00.000Z. |
Maximum Hours to Extract | String | No | 24 | Maximum number of hours of data the connector pulls in one run. If left empty, the default is 24 hours from the default last counter. |
Extract POD metrics | Boolean | No | Yes | Whether to extract POD metrics. The default is Yes, which extracts all POD metrics. |
Select only PODs on the following nodes | String | No | | Extracts information only for the pods that are currently running on the specified nodes. Multiple node names can be separated by semicolons (;). |
Select only PODs on the following namespaces | String | No | | Extracts information only for the pods that are currently running in the specified namespaces. Multiple namespace names can be separated by semicolons (;). |
Select only PODs on the following controllers | String | No | | Extracts information only for the pods that are currently running under the specified namespace and controller tuples. Format each tuple as the namespace, a colon (:), and the controller name; for example, namespace-01:api-deployment selects pods under api-deployment only in namespace-01. If the namespace is omitted, start with the colon followed by the controller name; for example, :api-deployment selects pods under api-deployment in all namespaces of the cluster. Multiple tuples can be separated by semicolons (;). |
Select only PODs with the following tags | String | No | | Extracts information only for the pods that are currently running with the specified namespace and tag type (label key) tuples. Format each tuple as the namespace, a colon (:), and the tag type; for example, namespace-01:app selects pods with tag type app only in namespace-01. If the namespace is omitted, start with the colon followed by the tag type; for example, :app selects pods with tag type app in all namespaces of the cluster. Multiple tuples can be separated by semicolons (;). |
Exclude PODs on the following nodes | String | No | | Does not extract information for the pods that are currently running on the specified nodes. Multiple node names can be separated by semicolons (;). |
Exclude PODs on the following namespaces | String | No | | Does not extract information for the pods that are currently running in the specified namespaces. Multiple namespace names can be separated by semicolons (;). |
Exclude PODs on the following controllers | String | No | | Does not extract information for the pods that are currently running under the specified namespace and controller tuples. Format each tuple as the namespace, a colon (:), and the controller name; for example, namespace-01:api-deployment excludes pods under api-deployment only in namespace-01. If the namespace is omitted, start with the colon followed by the controller name; for example, :api-deployment excludes pods under api-deployment in all namespaces of the cluster. Multiple tuples can be separated by semicolons (;). |
Exclude PODs with the following tags | String | No | | Does not extract information for the pods that are currently running with the specified namespace and tag type (label key) tuples. Format each tuple as the namespace, a colon (:), and the tag type; for example, namespace-01:app excludes pods with tag type app only in namespace-01. If the namespace is omitted, start with the colon followed by the tag type; for example, :app excludes pods with tag type app in all namespaces of the cluster. Multiple tuples can be separated by semicolons (;). |
Import only the following tags (semi-colon separated list) | String | No | | Imports only the specified tag types (Kubernetes label keys). Specify the keys of the labels, either in the original format as they appear in the Kubernetes API or with underscores (_) as delimiters. For example, node_role_kubernetes_io_compute and node-role.kubernetes.io/compute are equivalent; both are imported as node_role_kubernetes_io_compute. |
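The tuple and label-key conventions described above can be summarized in code. The following sketch is illustrative only (the function names are not part of the connector); it shows how the semicolon-separated namespace tuples and the label-key normalization behave:

```python
import re

def parse_tuples(raw):
    """Parse a semicolon-separated list of namespace:controller (or
    namespace:label-key) tuples. An omitted namespace, as in
    ':api-deployment', means 'all namespaces' and is returned as None."""
    result = []
    for item in raw.split(";"):
        item = item.strip()
        if not item:
            continue
        if ":" in item:
            ns, name = item.split(":", 1)
            result.append((ns or None, name))
        else:
            result.append((None, item))
    return result

def normalize_label_key(key):
    """Replace every non-alphanumeric character with '_', so that
    'node-role.kubernetes.io/compute' and
    'node_role_kubernetes_io_compute' are equivalent."""
    return re.sub(r"[^A-Za-z0-9]", "_", key)

parse_tuples("namespace-01:api-deployment;:api-deployment")
# → [('namespace-01', 'api-deployment'), (None, 'api-deployment')]
normalize_label_key("node-role.kubernetes.io/compute")
# → 'node_role_kubernetes_io_compute'
```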
The following image shows a Run Configuration example for the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus":
7. (Optional) Override the default values of the properties:
(Optional) B. Configuring the advanced properties
You can configure the advanced properties to change the way the ETL works or to collect additional metrics.
To configure the advanced properties:
- On the Add ETL page, click Advanced.
- Configure the following properties:
- Click Save. The ETL tasks page shows the details of the newly configured Prometheus ETL.
After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:
A. Simulation mode: Only validates connection to the data source, does not collect data. Use this mode when you want to run the ETL for the first time or after you make any changes to the ETL configuration.
B. Production mode: Collects data from the data source.
A. Running the ETL in the simulation mode
To run the ETL in the simulation mode:
- In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
- On the ETL tasks page, click the ETL. The ETL details are displayed.
- In the Run configurations table, click Edit to modify the ETL configuration settings.
- On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
- Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
- On the ETL tasks page, check the ETL run status in the Last exit column.
- OK: Indicates that the ETL ran without any error. You are ready to run the ETL in the production mode.
- If the ETL run status is Warning, Error, or Failed:
- On the ETL tasks page, click in the last column of the ETL name row.
- Check the log and reconfigure the ETL if required.
- Run the ETL again.
- Repeat these steps until the ETL run status changes to OK.
B. Running the ETL in the production mode
You can run the ETL manually when required or schedule it to run at a specified time.
Running the ETL manually
- On the ETL tasks page, click the ETL. The ETL details are displayed.
- In the Run configurations table, click Edit to modify the ETL configuration settings. The Edit run configuration page is displayed.
- On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
- To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
When the ETL is run, it collects data from the source and transfers it to the database.
Scheduling the ETL run
By default, the ETL is scheduled to run daily. You can customize this schedule by changing the frequency and period of running the ETL.
To configure the ETL run schedule:
- On the ETL tasks page, click the ETL, and click Edit Task.
- On the Edit task page, do the following, and click Save:
- Specify a unique name and description for the ETL task.
- In the Maximum execution time before warning field, specify the duration for which the ETL must run before generating warnings or alerts, if any.
- Select a predefined or custom frequency for starting the ETL run. The default selection is Predefined.
- Select the task group and the scheduler to which you want to assign the ETL task.
- Click Schedule. A message confirming the scheduling job submission is displayed.
When the ETL runs as scheduled, it collects data from the source and transfers it to the database.
Verify that the ETL ran successfully and check whether the k8s Prometheus data is refreshed in the Workspace.
To verify whether the ETL ran successfully:
- In the console, click Administration > ETL and System Tasks > ETL tasks.
- In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.
- In the console, click Workspace.
- Expand (Domain name) > Systems > k8s Prometheus > Instances.
- In the left pane, verify that the hierarchy displays the new and updated Prometheus instances.
- Click a k8s Prometheus entity, and click the Metrics tab in the right pane.
- Check if the Last Activity column in the Configuration metrics and Performance metrics tables displays the current date.
k8s Prometheus Workspace Entity | Details | |
---|---|---|
Entities | ||
Hierarchy | ||
Configuration and Performance metrics mapping | ||
Derived Metrics | ||
Lookup Field Considerations | ||
Tag Mapping (Optional) | The “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” supports Kubernetes Labels as Tags in BMC Helix Continuous Optimization. |
k8s Heapster to k8s Prometheus Migration
The “Moviri Integrator for BMC Helix Continuous Optimization – k8s Prometheus” supports a seamless transition from entities and metrics imported by the “Moviri Integrator for BMC Helix Continuous Optimization – k8s Heapster”. Please follow these steps to migrate between the two integrators: