Moviri - Dynatrace Extractor


"Moviri Integrator for BMC Helix Continuous Optimization – Dynatrace" is an additional component of the BMC Helix Continuous Optimization product. It allows extracting data from Dynatrace, and the integration supports the extraction of both performance and configuration data. Relevant capacity metrics are loaded into BMC Helix Continuous Optimization, which provides advanced analytics over the extracted data.

The documentation is targeted at BMC Helix Continuous Optimization administrators, in charge of configuring and monitoring the integration between BMC Helix Continuous Optimization and Dynatrace.

Information

If you used the Moviri Integrator for BMC Helix Continuous Optimization – Dynatrace before, note that the hierarchy changes in version 20.02.01, and some entities will no longer be assigned and will be left under "All systems and business drivers → Unassigned".

This version of the Moviri Integrator for BMC Helix Continuous Optimization – Dynatrace uses the Metrics v2 endpoint for data extraction, which differs from the previous version. Check the "Verify Dynatrace Data" section for detailed metrics.

Collecting data by using the Dynatrace ETL

To collect data by using the Dynatrace ETL, do the following tasks:

I. Complete the Dynatrace preconfiguration tasks.

II. Configure the Dynatrace ETL.

III. Run the ETL.

IV. Verify data collection.

 

Step I. Complete the preconfiguration tasks

 

Step

Details

Check that the Dynatrace version is supported

  1. Dynatrace SaaS/Managed
  2. API version V1, scope: Access problem and event feed, metrics, and topology
    1. Smartscape API
  3. API version V2, scope: Read metrics, Read entities (Optional)
    1. Metrics API
    2. (Optional) Monitored Entities API

Generate an authorization token to access Dynatrace API.

Generate Token

To generate the token follow these steps:

  1. Log in to the Dynatrace environment
  2. From the left bar select “Manage”
  3. Expand the “Manage” menu
  4. Select “Access Tokens”
  5. Click "Generate access token" button
  6. Search the scopes and select the following:
    1. “Access problem and event feed, metrics, and topology” - API V1
    2. "Read metrics" - API V2
    3. (Optional) "Read entities" - API V2
  7. Give a name to the token and then click “Generate Token”
  8. Copy the generated token and use it in the ETL configuration.

Verify if the token is working

Verify Token

Execute the following command from a Linux console:

  • curl -H "Authorization: Api-Token <token>" https://<dynatrace-fqdn>/api/v1/config/clusterversion


An output similar to the following should be obtained:

{"version": "1.137.79.20180123-105448"}

 

Step II. Configure the ETL

A. Configuring the basic properties

Some of the basic properties display default values. You can modify these values if required.

To configure the basic properties:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties. You must configure properties in the following tabs: Run configuration, Entity catalog, and Dynatrace - Connection Parameters.
  3. On the Run Configuration tab, select Moviri - Dynatrace Extractor from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.

    image2020-8-20_0-42-6.png
  4. Click the Entity catalog tab, and select one of the following options:
    • Shared Entity Catalog:
      • From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
    • Private Entity Catalog: Select if this is the only ETL that extracts data from the Dynatrace resources.
  5. Click the Dynatrace - Connection Parameters tab, and configure the following properties:

    Property

    Description

    API Version

    Choose between v1 (SmartScape) or v2 (Monitored Entities)

    Dynatrace URL prefix 

    Dynatrace URL.
    The URL must be in the form "https://{id}.live.dynatrace.com"

    Dynatrace API Token

    Dynatrace API Token (Generated in the Prerequisite)

    Semi-colon separated list of HTTP Headers (<header>:"<value>")

    Extra HTTP header key-value pairs used for accessing the Dynatrace API (semi-colon separated list for multiple HTTP headers; see the example after this table)

    Use HTTP Proxy to connect to Dynatrace

    Select Yes if an HTTP proxy is required in order to connect to the Dynatrace API

    Use HTTPS

    Whether or not to use HTTPS to connect to the HTTP(S) proxy.

    HTTP Proxy Address

    HTTP Proxy FQDN or IP Address

    HTTP Proxy Port

    HTTP(S) proxy server port

    HTTP Proxy Username

    HTTP(S) proxy server username

    HTTP Proxy Password

    HTTP(S) proxy server password
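
    For example (the header names below are purely illustrative), the extra HTTP headers property accepts a value in this form:

    • X-Custom-Header:"value1";X-Another-Header:"value2"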

  6. Click the Dynatrace - Extraction tab, and configure the following properties:

Property

Description

Default Lastcounter (YYYY-MM-DD HH24:MI:SS)

Initial timestamp from which to extract data.

Max Extraction Period (hours)

Specify the max extraction period in hours (default is 24 hours / 1 day).

Entities batch size

Specify how many entities are extracted at the same time from Dynatrace.

Timeout seconds

Specify the timeout, in seconds, for the Dynatrace API connection.

Waiting period in seconds when hit API limits

Specify the waiting period, in seconds, applied when the Dynatrace API requests-per-minute limit is hit.

Data Resolution

Select at which resolution data will be imported. Possible values are:

  • 5 Minutes
  • 15 Minutes
  • 30 Minutes
  • 1 Hour

Extraction time chunk size (hour), 0 for no limitation

Specify how many hours of data can be extracted at once (default is 0: no limitation, all data since the last counter is extracted)

ETL waits when API calls reaches this number

Specify the safeguard threshold for the 'X-RateLimit-Remaining' header. The ETL waits for the next available time slot (or up to the configured timeout) when the 'X-RateLimit-Remaining' header reaches this value. More information at https://www.dynatrace.com/support/help/dynatrace-api/basics/access-limit/. A sample command for inspecting the rate-limit headers follows this table.

Page Size

Specify the page size of each Dynatrace response; the page size represents how many data entries are returned by the Dynatrace API per request.

Lag hour to now as end timestamp

If the ending timestamp of the extraction period is close to the current timestamp, specify the lag, in hours, from the current timestamp to use as the end timestamp for the Dynatrace Metrics API.

Create 'Default Business Service' domain?

Select if the Dynatrace ETL should be configured to support the Business Service View.

Tag Type for Service Pools?

Specify which Tag Type should be used to import Service Pools, to be used in the Business Service View (default AppTier).

Alternative name for 'Default Business Service' domain

Alternative name for default business service view

Extract Dynatrace Tags?

Select if the Dynatrace ETL should import Dynatrace Tags

Enable Default Tag for Dynatrace tags without a key/value pair?

Select if the Dynatrace ETL should import Dynatrace Tags without a key/value pair. Not all Dynatrace Tags are available in the key:value format (imported as TAGTYPE:TAG). This property enables importing Dynatrace Tags configured with just a value

Default tagtype

TAGTYPE to use for Dynatrace Tags without a key

Extract Application Tags?

Select if the Dynatrace ETL should import Dynatrace Tags defined at the Application level

Allowlist Application Tags (Leave empty to extract ALL tags; semicolon separated)

Allowlist of Dynatrace Tags to be imported at the Application level (semi-colon separated)

Extract Service Tags?

Select if the Dynatrace ETL should import Dynatrace Tags defined at the Service level

Allowlist Service Tags (Leave empty to extract ALL tags; semicolon separated)

Allowlist of Dynatrace Tags to be imported at the Service level (semi-colon separated)

Extract Host Tags?

Select if the Dynatrace ETL should import Dynatrace Tags defined at the Host level

Allowlist Host Tags (Leave empty to extract ALL tags; semicolon separated)

Allowlist of Dynatrace Tags to be imported at the Host level (semi-colon separated)

Host Lookup Information

This property allows defining which value should be used as the strong lookup field:

  • Dynatrace EntityId (DYNAUUID)
  • Hostname (HOSTNAME)
  • Dynatrace EntityId/Hostname (DYNAUUID##HOSTNAME)

Import WEB_ACTIVE_SESSIONS or USERS_CURRENT as TOTAL_EVENTS for Business Service View

Choose whether WEB_ACTIVE_SESSIONS or USERS_CURRENT data is imported as the TOTAL_EVENTS metric for the Business Service View.
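
To inspect the current rate-limit headers referenced by the property above (a quick check only; the header names follow the Dynatrace access-limit documentation linked in that property), you can dump the response headers of any API call, for example:

  • curl -s -D - -o /dev/null -H "Authorization: Api-Token <token>" "https://<dynatrace-fqdn>/api/v2/metrics" | grep -i x-ratelimit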

  7. Click the Dynatrace - Filter tab, and configure the following properties:

    Property

    Description

    Collect infrastructure metrics

    If selected, the connector will collect metrics at the host level (VM, Generic)

    Import indirect services for each applications

    If selected, the indirect services will be imported

    Skip Process Group aggregation

    If selected, the process data won't be aggregated to Services

    Extract service key request metrics

    If selected, the key request metrics (BYSET_* Business Driver Metrics) will be imported

    Extract all hosts

    If selected, all the hosts (even the ones not assigned to any application/service) will be imported. Hosts not assigned to any application will be available in the "Infrastructure" domain

    Extract all services

    If selected, all the services (even the ones not assigned to any application) will be imported. Services not assigned to any application will be available in the "Services" domain

    Extract BYFS_ metrics for hosts?

    If selected, the ETL imports BYFS_ metrics, which results in a very slow execution since the data is not aggregated. Default is "No".

    If selected, the disk's UUID (format DISK-123456789) will be used as the sub-object name. If you want to use the disk name (path) instead, manually add the property "extract.dynatrace.disk.name" and set it to "true". Extracting disk names requires the API V2 Read entities scope; refer to the "Generate Token" section above. Using disk names (paths) also results in a slow execution.

    Extract network interface metrics for hosts?

    If selected, the ETL will extract network interfaces from the API and create a map

    Change entitytype based on Cloud type?

    Change the entity type based on the cloud type, limited to aws, gcp, azure, and oracle. OpenStack or other VM platform types will be set to "generic".

    Custom metrics JSON file location

    The location of the custom metrics configuration JSON file. If you do not want to use custom metrics, leave this field empty.

    Application Allowlist  (; separated)

    Semicolon separated list of Java Regular Expressions that will be used to identify the applications to be imported (multiple Regular Expressions will be considered in an "OR" clause). See the format example after this table.

    Application Denylist (; separated)

    Semicolon separated list of Java Regular Expression that will be used to identify the applications not to be imported (multiple Regular Expressions will be considered in "OR" clause)

    Application Allowlist (; separated), support Application ID

    Semicolon separated list of Application IDs that will be used to identify the applications to be imported (multiple values will be considered in an "OR" clause). This property does not accept Regular Expressions; an exact match of the Application ID will be performed.

    Application Denylist (; separated), support Application ID

    Semicolon separated list of Application IDs that will be used to identify the applications not to be imported (multiple values will be considered in an "OR" clause). This property does not accept Regular Expressions; an exact match of the Application ID will be performed.

    Application Injection Type Allowlist (; separated)

    Semicolon separated list of Injection Types to be imported. This property does not accept regular expressions, an exact match of Injection Types will be performed.

    Application Injection Type Denylist (; separated)

    Semicolon separated list of Injection Types to not be imported. This property does not accept regular expressions, an exact match of Injection Types will be performed.

    Host Allowlist (; separated)

    Semicolon separated list of Java Regular Expression that will be used to identify the hosts to be imported (multiple Regular Expressions will be considered in "OR" clause)

    Host Denylist (; separated)

    Semicolon separated list of Java Regular Expression that will be used to identify the hosts not to be imported (multiple Regular Expressions will be considered in "OR" clause)

    Host Allowlist (; separated), support Host ID

    Semicolon separated list of Host IDs that will be used to identify the hosts to be imported (multiple values will be considered in an "OR" clause). This property does not accept Regular Expressions; an exact match of the Host ID will be performed.

    Host Denylist (; separated), support Host ID

    Semicolon separated list of Host IDs that will be used to identify the hosts not to be imported (multiple values will be considered in an "OR" clause). This property does not accept Regular Expressions; an exact match of the Host ID will be performed.

    Host Monitoring Mode filter Allowlist (; separated)

    Semicolon separated list of Host Monitoring Modes to be imported. This property does not accept regular expressions, an exact match of the Host Monitoring Modes will be performed.

    Host Monitoring Mode filter Denylist (; separated)

    Semicolon separated list of Host Monitoring Modes to not be imported. This property does not accept regular expressions, an exact match of the Host Monitoring Modes will be performed.

    Host Network Zone filter Allowlist (; separated)

    Semicolon separated list of Host Network Zones to be imported. This property does not accept regular expressions, an exact match of the Host Network Zones will be performed.

    Host Network Zone filter Denylist (; separated)

    Semicolon separated list of Host Network Zones to not be imported. This property does not accept regular expressions, an exact match of the Host Network Zones will be performed.

    Host Operating System Type filter Allowlist (; separated)

    Semicolon separated list of Host Operating System Types to be imported. This property does not accept regular expressions, an exact match of the Host Operating System Type will be performed.

    Host Operating System Type filter Denylist (; separated)

    Semicolon separated list of Host Operating System Types to not be imported. This property does not accept regular expressions, an exact match of the Host Operating System Type will be performed.

    Host State filter Allowlist (; separated)

    Semicolon separated list of Host States to be imported. This property does not accept regular expressions, an exact match of the Host States will be performed.

    Host State filter Denylist (; separated)

    Semicolon separated list of Host States to not be imported. This property does not accept regular expressions, an exact match of the Host States will be performed.

    Host Hypervisor Type filter Allowlist (; separated)

    Semicolon separated list of Host Hypervisor Types to be imported. This property does not accept regular expressions, an exact match of the Host Hypervisor Types will be performed.

    Host Hypervisor Type filter Denylist (; separated)

    Semicolon separated list of Host Hypervisor Types to not be imported. This property does not accept regular expressions, an exact match of the Host Hypervisor Types will be performed.

    Service Allowlist (; separated)

    Semicolon separated list of Java Regular Expression that will be used to identify the services to be imported (multiple Regular Expressions will be considered in "OR" clause).

    Service Denylist (; separated)

    Semicolon separated list of Java Regular Expression that will be used to identify the services not to be imported (multiple Regular Expressions will be considered in "OR" clause).

    Service Allowlist (; separated), support Service ID

    Semicolon separated list of Service IDs that will be used to identify the services to be imported (multiple values will be considered in an "OR" clause). This property does not accept Regular Expressions; an exact match of the Service ID will be performed.

    Service Denylist (; separated), support Service ID

    Semicolon separated list of Service IDs that will be used to identify the services not to be imported (multiple values will be considered in an "OR" clause). This property does not accept Regular Expressions; an exact match of the Service ID will be performed.

    Service Software Technology Allowlist (; separated)

    Semicolon separated list of Software Technologies to be imported. This property does not accept regular expression, an exact match of the Software Technologies will be performed.

    Service Software Technology Denylist (; separated)

    Semicolon separated list of Software Technologies to not be imported. This property does not accept regular expression, an exact match of the Software Technologies will be performed.

    Tags for filtering Apps (; separated)

    Semicolon separated list of tags used to filter the applications. The format can be [Context]tag:value, tag:value, or tag. All the tags must apply to the in-scope Applications (multiple tags will be considered in an "AND" clause)

    Tags for filtering Services (; separated)

    Semicolon separated list of tags used to filter the services. The format can be [Context]tag:value, tag:value, or tag. All the tags must apply to the in-scope Services (multiple tags will be considered in an "AND" clause)

    Tags for filtering Hosts (; separated)

    Semicolon separated list of tags used to filter the hosts. The format can be [Context]tag:value, tag:value, or tag. All the tags must apply to the in-scope Hosts (multiple tags will be considered in an "AND" clause)

    A File Filter option is also available.
    An option to add the domain name to the lookup value for the infrastructure folder (available only when using Business Services) is also provided.
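
    For example (the values below are purely illustrative), the regular-expression and tag filter properties accept semicolon-separated values such as the following:

    • Host Allowlist (; separated): prod-.*;web[0-9]+\.example\.com
    • Tags for filtering Hosts (; separated): [AWS]environment:production;owner:team-web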

The following image shows sample configuration values for the basic properties.

image2021-6-9_10-42-10.png

  8. (Optional) Override the default values of properties in the following tabs:

    Run configuration

    Property

    Description

    Module selection

    Select one of the following options:

    • Based on datasource: This is the default selection.
    • Based on Open ETL template: Select only if you want to collect data that is not supported by BMC Helix Continuous Optimization.

    Module description

    A short description of the ETL module.

    Execute in simulation mode

    By default, the ETL execution in simulation mode is selected to validate connectivity with the data source, and to ensure that the ETL does not have any configuration issues. In the simulation mode, the ETL does not load data into the database. This option is useful when you want to test a new ETL task. To run the ETL in the production mode, select No.
    BMC recommends that you run the ETL in the simulation mode after ETL configuration and then run it in the production mode.

    Object relationships

    Property

    Description

    Associate new entities to

    Specify the domain to which you want to add the entities created by the ETL.

    Select one of the following options:

    • Existing domain: This option is selected by default. Select an existing domain from the Domain list. If the selected domain is already used by other hierarchy rules, select one of the following Domain conflict options:
      • Enrich domain tree: Select to create a new independent hierarchy rule for adding a new set of entities, relations, or both that are not defined by other ETLs.
      • ETL Migration: Select if the new ETL uses the same set of entities, relations, or both that are already defined by other ETLs.
    • New domain: Select a parent domain, and specify a name for your new domain.

    By default, a new domain with the same ETL name is created for each ETL. When the ETL is created, a new hierarchy rule with the same name of the ETL task is automatically created in the active state. If you specify a different domain for the ETL, the hierarchy rule is updated automatically.

    ETL task properties

    Property

    Description

    Task group

    Select a task group to classify the ETL.

    Running on scheduler

    Select one of the following schedulers for running the ETL:

    • Primary Scheduler: Runs on the Application Server.
    • Generic Scheduler: Runs on a separate computer.
    • Remote: Runs on remote computers.

    Maximum execution time before warning

    Indicates the number of hours, minutes, or days for which the ETL must run before generating warnings or alerts, if any.

    Frequency

    Select one of the following frequencies to run the ETL:

    • Predefined: This is the default selection. Select a daily, weekly, or monthly frequency, and then select a time to start the ETL run accordingly.
    • Custom: Specify a custom frequency, select an appropriate unit of time, and then specify a day and a time to start the ETL run.
  9. Click Save.
    The ETL tasks page shows the details of the newly configured Dynatrace ETL.

    image2020-8-20_0-44-21.png

(Optional) B. Configuring the advanced properties

You can configure the advanced properties to change the way the ETL works or to collect additional metrics.

To configure the advanced properties:

  1. On the Add ETL page, click Advanced.
  2. Configure the following properties:

    Run configuration

    Property

    Description

    Run configuration name

    Specify the name that you want to assign to this ETL task configuration. The default configuration name is displayed. You can use this name to differentiate between the run configuration settings of ETL tasks.

    Deploy status

    Select the deploy status for the ETL task. For example, you can initially select Test and change it to Production after verifying that the ETL run results are as expected.

    Log level

    Specify the level of details that you want to include in the ETL log file. Select one of the following options:

    • 1 - Light: Select to add the bare minimum activity logs to the log file.
    • 5 - Medium: Select to add the medium-detailed activity logs to the log file.
    • 10 - Verbose: Select to add detailed activity logs to the log file.

    Use log level 5 as a general practice. You can select log level 10 for debugging and troubleshooting purposes.

    Datasets

    Specify the datasets that you want to add to the ETL run configuration. The ETL collects data of metrics that are associated with these datasets.

    1. Click Edit.
    2. Select one (click) or more (shift+click) datasets from the Available datasets list and click >> to move them to the Selected datasets list.
    3. Click Apply.

    The ETL collects data of metrics associated with the datasets that are available in the Selected datasets list.

    Collection level

    Property

    Description

    Metric profile selection

    Select the metric profile that the ETL must use. The ETL collects data for the group of metrics that is defined by the selected metric profile.

    • Use Global metric profile: This is selected by default. All the out-of-the-box ETLs use this profile.
    • Select a custom metric profile: Select the custom profile that you want to use from the Custom metric profile list. This list displays all the custom profiles that you have created.


    Levels up to

    Specify the metric level that defines the number of metrics that can be imported into the database. The load on the database increases or decreases depending on the selected metric level.

    Loader configuration

    Property

    Description

    Empty dataset behavior

    Specify the action for the loader if it encounters an empty dataset:

    • Warn: Generate a warning about loading an empty dataset.
    • Ignore: Ignore the empty dataset and continue parsing.

    ETL log file name

    The name of the file that contains the ETL run log. The default value is: %BASE/log/%AYEAR%AMONTH%ADAY%AHOUR%MINUTE%TASKID

    Maximum number of rows for CSV output

    A numeric value to limit the size of the output files.

    CSV loader output file name

    The name of the file that is generated by the CSV loader. The default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID

    Capacity Optimization loader output file name

    The name of the file that is generated by the BMC Helix Continuous Optimization Capacity Optimization loader. The default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID

    Detail mode

    Specify whether you want to collect raw data in addition to the standard data. Select one of the following options:

    • Standard: Data will be stored in the database in different tables at the following time granularities: Detail (configurable, by default: 5 minutes), Hourly, Daily, and Monthly.
    • Raw also: Data will be stored in the database in different tables at the following time granularities: Raw (as available from the original data source), Detail (configurable, by default: 5 minutes), Hourly, Daily, and Monthly.
    • Raw only: Data will be stored in the database in a table only at Raw granularity (as available from the original data source).

    Remove domain suffix from datasource name (Only for systems) 

    Select True to remove the domain from the data source name. For example, server.domain.com will be saved as server. The default selection is False.

    Leave domain suffix to system name (Only for systems)

    Select True to keep the domain in the system name. For example: server.domain.com will be saved as is. The default selection is False.

    Update grouping object definition (Only for systems)

    Select True if you want the ETL to update the grouping object definition for a metric that is loaded by the ETL. The default selection is False.

    Skip entity creation (Only for ETL tasks sharing lookup with other tasks)

    Select True if you do not want this ETL to create an entity and discard data from its data source for entities not found in Capacity Optimization. It uses one of the other ETLs that share a lookup to create a new entity. The default selection is False.

    Scheduling options

    Property

    Description

    Hour mask

    Specify a value to run the task only during particular hours within a day. For example, 0 – 23 or 1, 3, 5 – 12.

    Day of week mask

    Select the days so that the task can be run only on the selected days of the week. To avoid setting this filter, do not select any option for this field.

    Day of month mask

    Specify a value to run the task only on the selected days of a month. For example, 5, 9, 18, 27 – 31.

    Apply mask validation

    Select False to temporarily turn off the mask validation without removing any values. The default selection is True.

    Execute after time

    Specify a value in the hours:minutes format (for example, 05:00 or 16:00) to wait before the task is run. The task run begins only after the specified time is elapsed.

    Enqueueable

    Specify whether you want to ignore the next run command or run it after the current task. Select one of the following options:

    • False: Ignores the next run command when a particular task is already running. This is the default selection.
    • True: Starts the next run command immediately after the current running task is completed.

    There is an additional property, extract.dynatrace.virtualNode, which enables virtual machines to be identified and labeled as such. They will also use the virtual node namespace.


     

  3. Click Save.
    The ETL tasks page shows the details of the newly configured Dynatrace ETL.

 

Step III. Run the ETL

After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:

A. Simulation mode: Only validates connection to the data source, does not collect data. Use this mode when you want to run the ETL for the first time or after you make any changes to the ETL configuration.

B. Production mode: Collects data from the data source.

A. Running the ETL in the simulation mode

To run the ETL in the simulation mode:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click the ETL.

    aws_api_etl_configured.png
  3. In the Run configurations table, click Edit edit icon.png to modify the ETL configuration settings.
  4. On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
  5. Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
  6. On the ETL tasks page, check the ETL run status in the Last exit column.
    OK: Indicates that the ETL ran without any error. You are ready to run the ETL in the production mode.
  7.  If the ETL run status is Warning, Error, or Failed:
    1. On the ETL tasks page, click edit icon.png in the last column of the ETL name row.
    2. Check the log and reconfigure the ETL if required.
    3. Run the ETL again.
    4. Repeat these steps until the ETL run status changes to OK.

B. Running the ETL in the production mode

You can run the ETL manually when required or schedule it to run at a specified time.

Running the ETL manually

  1. On the ETL tasks page, click the ETL. The ETL details are displayed.
  2. In the Run configurations table, click Edit edit icon.png to modify the ETL configuration settings. The Edit run configuration page is displayed.
  3. On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
  4. To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
    When the ETL is run, it collects data from the source and transfers it to the database.

Scheduling the ETL run

By default, the ETL is scheduled to run daily. You can customize this schedule by changing the frequency and period of running the ETL.

To configure the ETL run schedule:

  1. On the ETL tasks page, click the ETL, and click Edit.

    aws_api_etl_schedule_run.png
  2. On the Edit task page, do the following, and click Save:
    • Specify a unique name and description for the ETL task.
    • In the Maximum execution time before warning field, specify the duration for which the ETL must run before generating warnings or alerts, if any.
    • Select a predefined or custom frequency for starting the ETL run. The default selection is Predefined.
    • Select the task group and the scheduler to which you want to assign the ETL task.
  3. Click Schedule. A message confirming the scheduling job submission is displayed.
    When the ETL runs as scheduled, it collects data from the source and transfers it to the database.

 

Step IV. Verify data collection

Verify that the ETL ran successfully and check whether the Dynatrace data is refreshed in the Workspace.

To verify whether the ETL ran successfully:

  1. In the console, click Administration > ETL and System Tasks > ETL tasks.
  2. In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.

To verify that the Dynatrace data is refreshed:

  1. In the console, click Workspace.
  2. Expand (Domain name) > Systems > Dynatrace > Instances.
  3. In the left pane, verify that the hierarchy displays the new and updated Dynatrace instances.
  4. Click a Dynatrace entity, and click the Metrics tab in the right pane.
  5. Check if the Last Activity column in the Configuration metrics and Performance metrics tables displays the current date.

Dynatrace Workspace Entity

Details

Entities

Shows Entities

BMC Helix Continuous Optimization Entities

Dynatrace Entity

Virtual Machine (VMware, Hyper-V) / Generic

Hosts

Application Server Instance

Services of JBoss, Tomcat and JVM

Web Server Instance

Services of Apache HTTPD and Nginx and ASP.NET

Database Server Instance

Services of MySQL, MSSQL, MongoDB and Postgres Databases

Virtual Application

Other Services

Application

Dynatrace Applications

Business Driver

Applications and Services

Hierarchy

Shows Hierarchy


The detailed hierarchy is presented as follows: each Application has direct children of Hosts (only the hosts that its direct services run on), Services (the parent of all the direct services running on this application), and the application's business drivers. Under Services, there are the services running directly under the application, as well as the business drivers of these services.

The Service domain (at the same hierarchical level as the applications) contains all the services that are not in any application. Each service has its hosts attached.

The Infrastructure domain contains all the hosts collected by the ETL that are not under any application or service.

The priority of the populated hierarchy is:

Application Domain (Applications → Services and Indirect Services underneath the application → the hosts that the services run on) → Service Domain (other services that do not belong to any application → hosts that the services run on) → Infrastructure Domain (other hosts that do not belong to any application or service).

image2020-9-25_9-19-51.pngimage2021-5-11_14-16-25.png

If the Business Service view is used, another layer of hierarchy will be created, named after the Business Service view domain defined in the ETL run configuration. In addition, tags representing the service name will be imported on each host.

image2020-8-20_15-29-15.png

Configuration and Performance metrics mapping

Metrics Mapping


Dynatrace entity

BMC Helix Continuous Optimization entity

PERF/CONF

Dynatrace Metric

TSCO Metric

Conversion factor

Business Driver

KeyRequest

Service Business Driver

Perf

builtin:service.keyRequest.count.total

BYSET_EVENTS

1

Service

Service Business Driver

Perf

builtin:service.keyRequest.response.time

BYSET_RESPONSE_TIME

0.000001

Service

Service Business Driver

Perf

builtin:service.response.time

EVENT_RESPONSE_TIME

0.000001

KeyRequest

Service Business Driver

Perf

SUM(builtin:service.keyRequest.count.total)

EVENT_RATE

1.0 / 60.0

Application

Application Business Driver

Perf

builtin:apps.web.activeUsersEst

USERS_CURRENT

1

Application

Application Business Driver

Perf

builtin:apps.web.activeSessions

WEB_ACTIVE_SESSIONS

1

Application

Application Business Driver

Perf

builtin:apps.web.converted

WEB_CONVERSIONS

1

Application

Application Business Driver

Perf

builtin:apps.web.countOfErrors

TOTAL_ERRORS

1

Application

Application Business Driver

Perf

builtin:apps.web.visuallyComplete.load.browser

EVENT_RESPONSE_TIME


dotnet

Service Business Driver

Perf

SUM(builtin:tech.dotnet.threadpool.workerThreads)

THREAD_COUNT

1

JVM

Service Business Driver

Perf

SUM(builtin:tech.jvm.threads.count)

THREAD_COUNT

1

WebSphere

Service Business Driver

Perf

SUM(builtin:tech.websphere.threadPoolModule.ActiveCount)

THREAD_COUNT

1

Service

Service Business Driver

Perf

SUM(builtin:service.requestCount.total)

TOTAL_EVENTS

1

Hosts (Generic, VMware Virtual Machine, Hyper-v Virtual Machine)

Host

Generic/VM

Perf

builtin:host.cpu.entc

CPU_ENT_UTIL

0.01

Host

Generic/VM

Perf

builtin:host.cpu.usage

CPU_UTIL

0.01

Host

Generic/VM

Perf

builtin:host.cpu.idle

CPU_UTIL_IDLE

0.01

Host

Generic/VM

Perf

builtin:host.cpu.system

CPU_UTIL_SYSTEM

0.01

Host

Generic/VM

Perf

builtin:host.cpu.user

CPU_UTIL_USER

0.01

Host

Generic/VM

Perf

builtin:host.cpu.iowait

CPU_UTIL_WAIO

0.01

Host

Generic/VM

Perf

builtin:host.cpu.idle

CPU_UTIL_OVERHEAD

0.01

Host

Generic/VM

Perf

CPU_UTIL*CPU_NUM

CPU_USED_NUM

1

Host

Generic/VM

Perf

builtin:host.mem.avail.bytes

MEM_FREE

1

Host

Generic/VM

Perf

builtin:host.mem.usage

MEM_UTIL

0.01

Host

Generic/VM

Perf

builtin:host.mem.used

MEM_USED

1

Host

Generic/VM

Perf

SUM(builtin:host.net.bytesRx)

NET_IN_BYTE_RATE

1

Host

Generic/VM

Perf

SUM(builtin:host.net.bytesTx)

NET_OUT_BYTE_RATE

1

Host

Generic/VM

Perf

SUM(builtin:host.net.packets.rxReceived)

NET_IN_PKT_RATE

1

Host

Generic/VM

Perf

SUM(builtin:host.net.packets.rxSent)

NET_OUT_PKT_RATE

1

Host

Generic/VM

Perf

SUM(builtin:host.net.nic.packets.errorsRx)

NET_IN_PKT_ERROR_RATE

1

Host

Generic/VM

Perf

SUM(builtin:host.net.nic.packets.errorsTx)

NET_OUT_PKT_ERROR_RATE

1

Host

Generic/VM

Perf

builtin:host.net.nic.trafficIn

BYIF_IN_BIT_RATE

1

Host

Generic/VM

Perf

builtin:host.net.nic.bytesRx

BYIF_IN_BYTE_RATE

1

Host

Generic/VM

Perf

builtin:host.net.nic.packets.rx

BYIF_IN_PKT_RATE

1

Host

Generic/VM

Perf

builtin:host.net.nic.trafficOut

BYIF_OUT_BIT_RATE

1

Host

Generic/VM

Perf

builtin:host.net.nic.bytesTx

BYIF_OUT_BYTE_RATE

1

Host

Generic/VM

Perf

builtin:host.net.nic.packets.tx

BYIF_OUT_PKT_RATE

1

Host

Generic/VM

Perf

builtin:host.mem.swap.avail

SWAP_SPACE_FREE

1

Host

Generic/VM

Perf

builtin:host.mem.swap.used

SWAP_SPACE_USED

1

Host

Generic/VM

Perf

builtin:host.disk.avail

BYFS_FREE

1

Host

Generic/VM

Perf

builtin:host.disk.bytesRead

BYFS_READ_BYTE_RATE

1

Host

Generic/VM

Perf

builtin:host.disk.bytesWritten

BYFS_WRITE_BYTE_RATE

1

Host

Generic/VM

Perf

builtin:host.disk.usedPct

BYFS_USED_SPACE_PCT

.01

Host

Generic/VM

Perf

builtin:host.disk.queueLength

BYDISK_QUEUE_SIZE

1

Host

Generic/VM

Perf

builtin:host.disk.readOps

BYFS_READ_RATE

1

Host

Generic/VM

Perf

builtin:host.disk.writeOps

BYFS_WRITE_RATE

1

Host

Generic/VM

Perf

builtin:host.disk.used

BYFS_USED

1

Host

Generic/VM

Conf

builtin:host.disk.avail+builtin:host.disk.used

BYFS_SIZE

1

Host

Generic/VM

Perf

builtin:host.disk.inodesTotal

BYFS_TOTAL_INODES

1

Host

Generic/VM

Conf

SUM(builtin:host.disk.avail+builtin:host.disk.used)

TOTAL_FS_SIZE

1

Host

Generic/VM

Conf

SUM(builtin:host.disk.used)

TOTAL_FS_USED

1

Host

Generic/VM

Conf

osType

OS_FAMILY


Host

Generic/VM

Conf

osVersion

OS_TYPE


Host

Generic/VM

Conf

cpuCores

CPU_NUM


Host

Generic/VM

Conf

localCpus

LCPU_NUM


Host

Generic/VM

Conf

logicalCpuCores

CPU_CORES_PER_SOCKET


Host

Generic/VM

Conf

hypervisorType

HW_VENDOR


Host

Generic/VM

Conf

bitness

CPU_FAMILY


Host

Generic/VM

Conf


Management Zone


Services

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Perf

AVG(SUM(builtin:tech.generic.cpu.usage) by HOST)

CPU_UTIL

.01

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Perf

SUM(builtin:tech.generic.mem.pageFaults)

MEM_PAGE_FAULT_RATE

1

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Perf

SUM(builtin:tech.generic.mem.workingSetSize)

MEM_USED

1

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Perf

SUM(builtin:tech.generic.network.throughput)

NET_BYTE_RATE

1

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Perf

SUM(builtin:tech.generic.network.bytesRx)

NET_IN_BYTE_RATE

1

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Perf

SUM(builtin:tech.generic.network.bytesTx)

NET_OUT_BYTE_RATE

1

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Perf

SUM(builtin:tech.generic.processCount)

PROCESS_NUM

1

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Conf

How many hosts this service runs on

SYS_MULTIPLICITY

1

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Perf

builtin:tech.webserver.threads.active

BYWS_BUSY_WORKERS

1

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Perf

builtin:tech.webserver.threads.idle

BYWS_IDLE_WORKERS

1

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Perf

builtin:tech.webserver.threads.max

BYWS_MAX_WORKERS

1

Generic

Application Server Instance, Web Server Instance, Database Server Instance, Virtual Application

Conf


Management Zone


JVM

Application Server Instance

Perf

SUM(builtin:tech.jvm.memory.pool.committed)

NONHEAPMEM_COMMITTED

1

JVM

Application Server Instance

Perf

SUM(builtin:tech.jvm.memory.pool.used)

NONHEAPMEM_USED

1

JVM

Application Server Instance

Perf

SUM(builtin:tech.jvm.memory.gc.activationCount)

GC_EVENTS

1

JVM

Application Server Instance

Perf

AVG(SUM(builtin:tech.jvm.memory.gc.suspensionTime) by Host)

GC_SUSPENSION_TIME

1

JVM

Application Server Instance

Perf

SUM(builtin:tech.jvm.memory.gc.collectionTime)

GC_TIME

1

JVM

Application Server Instance

Perf

SUM(builtin:tech.jvm.memory.runtime.free)

HEAPMEM_FREE

1

JVM

Application Server Instance

Perf

SUM(builtin:tech.jvm.memory.runtime.max)

HEAPMEM_MAX

1

JVM

Application Server Instance

Perf

HEAPMEM_MAX-HEAPMEM_FREE

HEAPMEM_USED

1

JVM

Application Server Instance

Perf

(HEAPMEM_MAX-HEAPMEM_FREE)/HEAPMEM_MAX

HEAPMEM_UTIL

1

WebSphere

Web Server Instance

Perf

SUM(builtin:tech.websphere.connectionPool.connectionPoolModule.FreePoolSize)

CONNECTION_POOL_FREE

1

WebSphere

Web Server Instance

Conf

SUM(builtin:tech.websphere.connectionPool.connectionPoolModule.PoolSize)

CONNECTION_POOL_TOTAL

1

WebSphere

Web Server Instance

Perf

CONNECTION_POOL_TOTAL-CONNECTION_POOL_FREE

CONNECTION_POOL_USED

1

WebSphere

Web Server Instance

Perf

(CONNECTION_POOL_TOTAL-CONNECTION_POOL_FREE)/CONNECTION_POOL_TOTAL

CONNECTION_POOL_UTIL

1

Go

Virtual Application

Perf

SUM(builtin:tech.go.memory.gcCount)

GC_TIME

1

Go

Virtual Application

Perf

SUM(builtin:tech.go.memory.heap.idle)

HEAPMEM_FREE

1

Go

Virtual Application

Perf

SUM(builtin:tech.go.memory.heap.live)

HEAPMEM_USED

1

Go

Virtual Application

Perf

SUM(builtin:tech.go.memory.pool.used)

NONHEAPMEM_USED

1

Go

Virtual Application

Perf

SUM(builtin:tech.go.memory.pool.committed)

NONHEAPMEM_COMMITTED

1

Go

Virtual Application

Perf

SUM(builtin:tech.go.memory.heap.idle+builtin:tech.go.memory.heap.live)

HEAPMEM_MAX

1

Go

Virtual Application

Perf

SUM(builtin:tech.go.memory.heap.live)/SUM(builtin:tech.go.memory.heap.idle+builtin:tech.go.memory.heap.live)

HEAPMEM_UTIL

1

Databases

MySQL

Database Server Instance

Perf

builtin:tech.mysql.innodb_buffer_pool_size

DB_BUFFER_POOL_SIZE

1

MySQL

Database Server Instance

Perf

builtin:tech.mysql.innodb_data_reads

DB_PHYSICAL_READS

1

MySQL

Database Server Instance

Perf

builtin:tech.mysql.innodb_data_writes

DB_PHYSICAL_WRITES

1

MySQL

Database Server Instance

Perf

builtin:tech.mysql.queries

DB_QUERY_RATE

1/60

MySQL

Database Server Instance

Perf

builtin:tech.mysql.com_delete

DB_ROWS_DELETED_RATE

1

MySQL

Database Server Instance

Perf

builtin:tech.mysql.com_insert

DB_ROWS_INSERTED_RATE

1

MySQL

Database Server Instance

Perf

builtin:tech.mysql.com_update

DB_ROWS_UPDATED_RATE

1

MySQL

Database Server Instance

Perf

builtin:tech.mysql.com_select

DB_SELECTE_RATE

1

MySQL

Database Server Instance

Perf

builtin:tech.mysql.slow_queries_rate

DB_SLOW_QUERY_PCT

1

MySQL

Database Server Instance

Perf

builtin:tech.mysql.threads_connected

DB_SESSIONS

1

MySQL

Database Server Instance

Perf

ruxit.python.mysql:threads_running

DB_ACTIVE_SESSIONS

1

MySQL

Database Server Instance

Perf

builtin:tech.mysql.qcache_hits

DB_BUFFER_CACHE_HIT_RATIO

1

Postgres

Database Server Instance

Perf

builtin:tech.postgresql.tup_updated

DB_ROWS_UPDATED_RATE

1

Postgres

Database Server Instance

Perf

builtin:tech.postgresql.tup_inserted

DB_ROWS_INSERTED_RATE

1

Postgres

Database Server Instance

Perf

builtin:tech.postgresql.tup_deleted

DB_ROWS_DELETED_RATE

1

Postgres

Database Server Instance

Perf

builtin:tech.postgresql.xact_commit

DB_COMMIT_RATE

1

MSSQL

Database Server Instance

Perf

builtin:tech.mssql.page_splits_sec

DB_PAGE_SPLITS

1

MSSQL

Database Server Instance

Perf

builtin:tech.mssql.buffer_cache_hit_ratio

DB_BUFFER_CACHE_HIT_RATIO

1

MSSQL

Database Server Instance

Perf

builtin:tech.mssql.transactions

DB_TRANSACTIONS

1

MSSQL

Database Server Instance

Perf

builtin:tech.mssql.lock_waits_sec

DB_LOCK_WAIT_RATE

1

MSSQL

Database Server Instance

Perf

builtin:tech.mssql.connection_memory

DB_MEM_CONNECTION

1

Generic

Database Server Instance

Conf


MANAGEMENT_ZONE

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.active_clients

CLIENT_CONN_CURRENT

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.available_connections

CONNECTION_POOL_FREE

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.command_operations2

DB_TRAN_RATE

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.current_connections

CONNECTION_POOL_USED

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.current_queue

REQ_QUEUED

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.db_data_size

DB_DATA_USED_SIZE

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.db_storage_size

DB_DATA_ALLOCATED_SIZE

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.delete_operations2

DB_ROWS_DELETED_RATE

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.indexes

DB_INDEX_SEARCHES

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.insert_operations2

DB_ROWS_INSERTED_RATE

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.virtual_memory

DB_MEM_VIRTUAL

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.update_operations2

DB_ROWS_UPDATED_RATE

1

MongoDB

Database Server Instance

Perf

builtin:tech.mongodb.query_operations2

DB_QUERY_RATE

1

Application

Application

Application

Conf


MANAGEMENT_ZONE


 
 

Tag Mapping (Optional)

Tags are optional. If the Business Service View is used, each machine will have a tag using the tag type configured in the ETL. The tag value will be the direct parent tier name.

Show Tags

BMC Helix Continuous Optimization Entities

TagType

Tag

Host

Dynatrace App Tier

Names of the services that the host contains

Service

ServiceType

Type of the Dynatrace Service

 Custom Metrics

The Dynatrace extractor now supports custom metrics on Applications, Services, and Hosts, configured through a JSON file.

Custom Metrics

The following table shows the JSON object and the property mapping. One JSON object is configured for one metric on one type of entity. The JSON configuration file contains a list of JSON objects, covering multiple metrics.

JSON Entity

Required

Description

tscoEntityType

Yes

The BMC Helix Continuous Optimization entity type. Currently only applications, services and hosts are supported. Required values ["APPLICATION", "SERVICE", "HOST"]. You can find the entity type by checking out the hierarchy and entity types above.

dynatraceEntityType

Yes

The Dynatrace entity type. Currently only applications (including applications, custom applications, and mobile applications), services, and hosts are supported. Required values ["APPLICATION", "SERVICE", "HOST"]. You can find the Dynatrace entity type by looking at the UUID of the targeted entity. For example, if an entity has id APPLICATION-588BA26653BA3FA8, use "APPLICATION" as the value of this field. If it is CUSTOM_APPLICATION-299F4619269E4F32 or MOBILE_APPLICATION-752C288D59734C79, you can also put "APPLICATION" in this field. Make sure this field matches the "tscoEntityType"; if not, the entity type in "tscoEntityType" will be used for this field.

dynatraceMetric

Yes

The custom Dynatrace metric. This must not be a metric already imported by the ETL; those metrics are listed in the Metrics Mapping section.

dynatraceAggregation

No

The Dynatrace aggregation types for the metric. Currently the extractor will only process auto, min, max, average, and count as multiple statistics. Make sure to separate them with a comma ",". If no value is provided, auto will be used.

tscoMetric

Yes

The BMC Helix Continuous Optimization metric that the custom Dynatrace metric is mapping to. You can find the list of all available metrics at Administration → Data Warehouse → Datasets & metrics

dataset

Yes

The dataset of the BMC Helix Continuous Optimization metric. Currently supports "SYS" for sysdat (system metrics), WKLD for wkldat (workload metrics).

metricType

Yes

The type of the BMC Helix Continuous Optimization metric, either "Perf" for performance metrics or "Conf" for configuration metrics

entityIndex

No

The index of the entity dimension in the dimensions map returned by the Dynatrace API. If this metric has only one dimension (in other words, the metric is a global metric rather than a submetric), the index should always be 0. If left empty, 0 will be used. For example, a service has a metric and the service will be imported as the entity. Looking at the metric description from the Dynatrace API, the dimensions object looks like "dimensions":["SERVICE-299F4619269E4F32"], so we use 0 for entityIndex to import this service.

subMetricIndex

No

This field should be filled in if the values are for an entity and its child entities and submetrics are used. For example, each CPU of a host has its own temperature, and you want to import the temperature of each CPU on the host as BYCPU_TEMPERATURE. HOST is the entity and the CPU name is the sub-entity. Looking at the metric description from the Dynatrace API, the dimensions object looks like "dimensions":["HOST-299F4619269E4F32", "CPU-752C288D59734C79"], so we set entityIndex to 0 for the host and subMetricIndex to 1 for the CPU. The host HOST-299F4619269E4F32 will then be imported as the entity, with subresource CPU-752C288D59734C79. The default is to leave this field empty if the metric is a global metric.

scaleFactor

No

The scale factor applied to the value as a multiplication. If left empty, the value is not changed. For example, if the value is 500 and the scaleFactor is 0.01, the value imported into BMC Helix Continuous Optimization will be 5, as 500*0.01=5.

Here is an example:

Let's say there is a custom metric called web.total.user.exit, defined on some services, with values in gauge format and multiple statistic summaries (auto, avg, min, max, and count).

This metric web.total.user.exit will be mapped to the BMC Helix Continuous Optimization global metric "WEB_TOTAL_USER_EXITS", which is a performance metric in the workload dataset.

First, let's check whether this custom metric was created correctly by calling the Dynatrace API, using the following format:

https://<FQDN>/api/v2/metrics/query?metricSelector=<custom metric id>:(<aggregation types>)&entitySelector=type(<entity type>)&api-token=<token>&from=<starting timestamp>&to=<ending timestamp>&resolution=<resolution>

In this case, it will be: https://live.dynatrace.com/api/v2/metrics/query?metricSelector=web.total.user.exit:(auto,avg,min,max,count)&entitySelector=type(SERVICE)&api-token=token1234567&from=1620931200000&to=1620936300000&resolution=15m

The result looks like this:

image2021-6-10_16-3-16.png
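
The same check can also be run from a console with curl, passing the token in the Authorization header instead of the api-token query parameter (the metric name, timestamps, and resolution below are taken from the example above):

  • curl -H "Authorization: Api-Token <token>" "https://live.dynatrace.com/api/v2/metrics/query?metricSelector=web.total.user.exit:(auto,avg,min,max,count)&entitySelector=type(SERVICE)&from=1620931200000&to=1620936300000&resolution=15m"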

Having verified the custom metric from the Dynatrace API, it's time to create the JSON object for the metric web.total.user.exit.

example

explanation

{
  "tscoEntityType": "SERVICE",
  "dynatraceMetric": "web.total.user.exit",
  "dynatraceAggregation": "auto,min",
  "submetricIndex": "",
  "dataset": "WKLD",
  "metricType": "Perf",
  "tscoMetric": "WEB_TOTAL_USER_EXITS",
  "dynatraceEntityType": "SERVICE",
  "scaleFactor": "1",
  "entityIndex": "0"
}

 

{
 "tscoEntityType": the entity that has this metric is a service
 "dynatraceMetric": dynatrace metric id
 "dynatraceAggregation": aggregation type
 "submetricIndex": leave this field empty because this is a global metrics
 "dataset": workload dataset
 "metricType": performance metric
 "tscoMetric": BMC Helix Continuous Optimization metric name
 "dynatraceEntityType": BMC Helix Continuous Optimization entity type that will have this metrics
 "scaleFactor": leave it as 1 since there's no transformation on how
 "entityIndex": set to 0 or leave it to empty because of there's only one kind of entity in dimension map
}

 

[

{

 "tscoEntityType": "SERVICE",
  "dynatraceMetric": "web.total.user.exit",
  "dynatraceAggregation": "auto,min",
  "submetricIndex": "",
  "dataset": "WKLD",
  "metricType": "Perf",
  "tscoMetric": "WEB_TOTAL_USER_EXITS",
  "dynatraceEntityType": "SERVICE",
  "scaleFactor": "1",
  "entityIndex": "0"

     },

{

.................... other metrics

}

]

After the extraction, the metrics on service SERVICE-9C13B59F60731573 appear in the Workspace as follows:

image2021-6-10_17-17-46.png

A sample JSON file is linked here.

 

