Moviri - Dynatrace Extractor


"Moviri Integrator for BMC Helix Continuous Optimization – Dynatrace" is an additional component of the BMC Helix Continuous Optimization product. It extracts data from Dynatrace, supporting the extraction of both performance and configuration data. Relevant capacity metrics are loaded into BMC Helix Continuous Optimization, which provides advanced analytics over the extracted data.

The documentation is targeted at BMC Helix Continuous Optimization administrators, who are in charge of configuring and monitoring the integration between BMC Helix Continuous Optimization and Dynatrace.

If you used the Moviri Integrator for BMC Helix Continuous Optimization – Dynatrace before, note that the hierarchy has changed in version 20.02.01; some entities are removed and left under "All systems and business drivers → Unassigned".

This version of the Moviri Integrator for BMC Helix Continuous Optimization – Dynatrace uses the Metrics v2 endpoint for data extraction, which differs from the previous version. Check the "Verify Dynatrace Data" section for detailed metrics.

Collecting data by using the Dynatrace ETL

To collect data by using the Dynatrace ETL, do the following tasks:

I. Complete the Dynatrace prerequisites.

II. Configure the Dynatrace ETL.

III. Run the ETL.

IV. Verify data collection.


Step I. Complete the preconfiguration tasks



Step II. Configure the ETL

A. Configuring the basic properties

Some of the basic properties display default values. You can modify these values if required.

To configure the basic properties:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties. You must configure properties in the following tabs: Run configuration, Entity catalog, and Dynatrace - Connection Parameters.
  3. On the Run Configuration tab, select Moviri - Dynatrace Extractor from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.

  4. Click the Entity catalog tab, and select one of the following options:
    • Shared Entity Catalog:
      • From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
    • Private Entity Catalog: Select if this is the only ETL that extracts data from the Dynatrace resources.
  5. Click the Dynatrace - Connection Parameters tab, and configure the following properties:

  6. Click the Dynatrace - Extraction tab, and configure the following properties:

  7. Click the Dynatrace - Filter tab, and configure the following properties:


  8. (Optional) Override the default values of properties in the following tabs:

  9. Click Save.
    The ETL tasks page shows the details of the newly configured Dynatrace ETL.


(Optional) B. Configuring the advanced properties

You can configure the advanced properties to change the way the ETL works or to collect additional metrics.

To configure the advanced properties:

  1. On the Add ETL page, click Advanced.
  2. Configure the following properties:


    There is an additional property, extract.dynatrace.virtualNode, which enables virtual machines to be identified and labeled as such. These entities also use the virtual node namespace.


  3. Click Save.
    The ETL tasks page shows the details of the newly configured Dynatrace ETL.


Step III. Run the ETL

After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:

A. Simulation mode: Only validates the connection to the data source; it does not collect data. Use this mode when you run the ETL for the first time or after you make any changes to the ETL configuration.

B. Production mode: Collects data from the data source.

A. Running the ETL in the simulation mode

To run the ETL in the simulation mode:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click the ETL.


  3. In the Run configurations table, click Edit to modify the ETL configuration settings.
  4. On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
  5. Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
  6. On the ETL tasks page, check the ETL run status in the Last exit column.
    OK: Indicates that the ETL ran without any error. You are ready to run the ETL in the production mode.
  7.  If the ETL run status is Warning, Error, or Failed:
    1. On the ETL tasks page, click the edit icon in the last column of the ETL name row.
    2. Check the log and reconfigure the ETL if required.
    3. Run the ETL again.
    4. Repeat these steps until the ETL run status changes to OK.

B. Running the ETL in the production mode

You can run the ETL manually when required or schedule it to run at a specified time.

Running the ETL manually

  1. On the ETL tasks page, click the ETL. The ETL details are displayed.
  2. In the Run configurations table, click Edit to modify the ETL configuration settings. The Edit run configuration page is displayed.
  3. On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
  4. To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
    When the ETL is run, it collects data from the source and transfers it to the database.

Scheduling the ETL run

By default, the ETL is scheduled to run daily. You can customize this schedule by changing the frequency and period of running the ETL.

To configure the ETL run schedule:

  1. On the ETL tasks page, click the ETL, and click Edit.

  2. On the Edit task page, do the following, and click Save:
    • Specify a unique name and description for the ETL task.
    • In the Maximum execution time before warning field, specify the duration for which the ETL must run before generating warnings or alerts, if any.
    • Select a predefined or custom frequency for starting the ETL run. The default selection is Predefined.
    • Select the task group and the scheduler to which you want to assign the ETL task.
  3. Click Schedule. A message confirming the scheduling job submission is displayed.
    When the ETL runs as scheduled, it collects data from the source and transfers it to the database.


Step IV. Verify data collection

Verify that the ETL ran successfully and check whether the Dynatrace data is refreshed in the Workspace.

To verify whether the ETL ran successfully:

  1. In the console, click Administration > ETL and System Tasks > ETL tasks.
  2. In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.

To verify that the Dynatrace data is refreshed:

  1. In the console, click Workspace.
  2. Expand (Domain name) > Systems > Dynatrace > Instances.
  3. In the left pane, verify that the hierarchy displays the new and updated Dynatrace instances.
  4. Click a Dynatrace entity, and click the Metrics tab in the right pane.
  5. Check if the Last Activity column in the Configuration metrics and Performance metrics tables displays the current date.


Custom Metrics

The Dynatrace extractor now supports custom metrics on applications, services, and hosts, configured through a JSON file.

The following table shows the JSON object and the property mapping. One JSON object configures one metric for one entity type; the JSON configuration file contains a list of such objects, covering multiple metrics.


tscoEntityType (Required)
    The BMC Helix Continuous Optimization entity type. Currently only applications, services, and hosts are supported. Allowed values: "APPLICATION", "SERVICE", "HOST". You can find the entity type by checking the hierarchy and entity types above.

dynatraceEntityType (Required)
    The Dynatrace entity type. Currently only applications (including applications, custom applications, and mobile applications), services, and hosts are supported. Allowed values: "APPLICATION", "SERVICE", "HOST". You can find the Dynatrace entity type by looking at the UUID of the targeted entity. For example, if an entity has the ID APPLICATION-588BA26653BA3FA8, use "APPLICATION" as the value of this field. If it is CUSTOM_APPLICATION-299F4619269E4F32 or MOBILE_APPLICATION-752C288D59734C79, you can also use "APPLICATION". Make sure this field matches "tscoEntityType"; if it does not, the entity type in "tscoEntityType" is used for this field.

dynatraceMetric (Required)
    The custom Dynatrace metric. This must not be a metric already imported by the ETL; those metrics are listed in the Metrics Mapping section.

dynatraceAggregation (Optional)
    The Dynatrace aggregation types for the metric. The extractor currently processes only auto, min, max, average, and count as multiple statistics. Separate the values with commas (","). If no value is provided, auto is used.

tscoMetric (Required)
    The BMC Helix Continuous Optimization metric that the custom Dynatrace metric maps to. You can find the list of all available metrics at Administration → Data Warehouse → Datasets & metrics.

dataset (Required)
    The dataset of the BMC Helix Continuous Optimization metric. Currently supports "SYS" for SYSDAT (system metrics) and "WKLD" for WKLDAT (workload metrics).

metricType (Required)
    The type of the BMC Helix Continuous Optimization metric: either "Perf" for performance metrics or "Conf" for configuration metrics.

entityIndex (Optional)
    The index of the entity dimension in the dimension map returned by the Dynatrace API. If the metric has only one dimension (in other words, it is a global metric rather than a submetric), the index should always be 0. If left empty, 0 is used. For example, suppose a service has a metric and the service is imported as the entity. If the metric description from the Dynatrace API shows "dimensions":["SERVICE-299F4619269E4F32"], use 0 for entityIndex to import this service.

subMetricIndex (Optional)
    Fill in this field when the values belong to an entity and its child entities, and the metric is imported as a submetric. For example, each CPU of a host has its own temperature, and you want to import the temperature of each CPU on the host as BYCPU_TEMPERATURE. HOST is the entity and the CPU name is the sub-entity. If the metric description from the Dynatrace API shows "dimensions":["HOST-299F4619269E4F32", "CPU-752C288D59734C79"], set entityIndex to 0 for the host and subMetricIndex to 1 for the CPU. The host HOST-299F4619269E4F32 is then imported as the entity, with subresource CPU-752C288D59734C79. By default, leave this field empty if the metric is a global metric.

scaleFactor (Optional)
    The scale factor applied to the value as a multiplication. If left empty, the value is not changed. For example, if the value is 500 and the scaleFactor is 0.01, the value imported in BMC Helix Continuous Optimization is 5, because 500*0.01=5.
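Because a misconfigured JSON object is either skipped or silently falls back to defaults, it can help to check entries before deploying the file. The following Python sketch is not part of the integrator; it is a minimal validator based on the required fields and documented defaults listed above:

```python
# Minimal validator for one custom-metric JSON object (illustrative, not part
# of the integrator). Field names and defaults come from the table above.
REQUIRED_KEYS = {"tscoEntityType", "dynatraceEntityType", "dynatraceMetric",
                 "tscoMetric", "dataset", "metricType"}

def validate_custom_metric(obj):
    """Check required fields and fill in the documented defaults."""
    missing = REQUIRED_KEYS - set(obj)
    if missing:
        raise ValueError("missing required fields: {}".format(sorted(missing)))
    if obj["dataset"] not in ("SYS", "WKLD"):
        raise ValueError("dataset must be SYS or WKLD")
    if obj["metricType"] not in ("Perf", "Conf"):
        raise ValueError("metricType must be Perf or Conf")
    # Documented defaults: aggregation "auto", entityIndex 0, no scaling.
    obj.setdefault("dynatraceAggregation", "auto")
    if not obj.get("entityIndex"):
        obj["entityIndex"] = "0"
    if not obj.get("scaleFactor"):
        obj["scaleFactor"] = "1"
    return obj
```

A rejected object raises ValueError with the offending field, which is easier to act on than a silently empty metric after a full ETL run.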

Here is an example:

Let's say there is a custom metric called web.total.user.exit, defined on some services, with values in gauge format and multiple statistic summaries (auto, avg, min, max, and count).

This metric web.total.user.exit will be mapped to the BMC Helix Continuous Optimization global metric "WEB_TOTAL_USER_EXITS", which is a performance metric in the workload dataset.

First, check whether the custom metric was created correctly by calling the Dynatrace API, using the following format:

https://<FQDN>/api/v2/metrics/query?metricSelector=<custom metric id>:(<aggregation types>)&entitySelector=type(<entity type>)&api-token=<token>&from=<starting timestamp>&to=<ending timestamp>&resolution=<resolution>

In this case, it will be: https://live.dynatrace.com/api/v2/metrics/query?metricSelector=web.total.user.exit:(auto,avg,min,max,count)&entitySelector=type(SERVICE)&api-token=token1234567&from=1620931200000&to=1620936300000&resolution=15m
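As a quick sanity check, the query URL can also be assembled programmatically. The following Python sketch only builds a URL in the format documented above; the FQDN, token, and timestamps are the placeholder values from this example, not real credentials:

```python
def build_metrics_query_url(fqdn, metric, aggregations, entity_type,
                            token, start_ms, end_ms, resolution):
    """Build a Dynatrace Metrics v2 query URL in the format shown above."""
    selector = "{}:({})".format(metric, ",".join(aggregations))
    return (
        "https://{fqdn}/api/v2/metrics/query"
        "?metricSelector={selector}"
        "&entitySelector=type({etype})"
        "&api-token={token}"
        "&from={start}&to={end}&resolution={res}"
    ).format(fqdn=fqdn, selector=selector, etype=entity_type,
             token=token, start=start_ms, end=end_ms, res=resolution)

# Reproduce the example query from this section:
url = build_metrics_query_url(
    "live.dynatrace.com", "web.total.user.exit",
    ["auto", "avg", "min", "max", "count"], "SERVICE",
    "token1234567", 1620931200000, 1620936300000, "15m")
print(url)
```

Pasting the resulting URL into a browser or curl is enough to confirm that the metric exists and returns data for the chosen time range.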

If the query succeeds, the API returns the data points for the metric, confirming that the custom metric is defined correctly.

After verifying the custom metric through the Dynatrace API, create the JSON object for the metric web.total.user.exit:

Example:

{
   "tscoEntityType": "SERVICE",
   "dynatraceMetric": "web.total.user.exit",
   "dynatraceAggregation": "auto,min",
   "submetricIndex": "",
   "dataset": "WKLD",
   "metricType": "Perf",
   "tscoMetric": "WEB_TOTAL_USER_EXITS",
   "dynatraceEntityType": "SERVICE",
   "scaleFactor": "1",
   "entityIndex": "0"
}

Explanation:

 "tscoEntityType": the entity that has this metric is a service
 "dynatraceMetric": the Dynatrace metric ID
 "dynatraceAggregation": the aggregation types
 "submetricIndex": left empty because this is a global metric
 "dataset": the workload dataset
 "metricType": a performance metric
 "tscoMetric": the BMC Helix Continuous Optimization metric name
 "dynatraceEntityType": the Dynatrace entity type that has this metric
 "scaleFactor": left as 1 because no transformation is applied to the value
 "entityIndex": set to 0 or left empty because there is only one entity in the dimension map


[
  {
    "tscoEntityType": "SERVICE",
    "dynatraceMetric": "web.total.user.exit",
    "dynatraceAggregation": "auto,min",
    "submetricIndex": "",
    "dataset": "WKLD",
    "metricType": "Perf",
    "tscoMetric": "WEB_TOTAL_USER_EXITS",
    "dynatraceEntityType": "SERVICE",
    "scaleFactor": "1",
    "entityIndex": "0"
  },
  {
    .................... other metrics
  }
]
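Since the configuration file is a plain JSON array like the one above, it can be loaded and inspected with standard tooling. A minimal Python sketch (the helper name is illustrative):

```python
import json

def load_custom_metrics(text):
    """Parse configuration file content and return the list of metric objects."""
    objects = json.loads(text)
    if not isinstance(objects, list):
        raise ValueError("configuration must be a JSON array of objects")
    return objects

# The single-metric configuration from the example above:
config_text = """
[
  {
    "tscoEntityType": "SERVICE",
    "dynatraceMetric": "web.total.user.exit",
    "dynatraceAggregation": "auto,min",
    "submetricIndex": "",
    "dataset": "WKLD",
    "metricType": "Perf",
    "tscoMetric": "WEB_TOTAL_USER_EXITS",
    "dynatraceEntityType": "SERVICE",
    "scaleFactor": "1",
    "entityIndex": "0"
  }
]
"""
metrics = load_custom_metrics(config_text)
print(len(metrics))  # number of metric objects configured
```

Running a parse like this before deploying the file catches trailing commas and unbalanced brackets, which are the most common reasons a custom-metrics file is ignored.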

After the extraction, the metrics on service SERVICE-9C13B59F60731573 appear in the Workspace.

A sample JSON file is available here.

 
