Moviri Integrator for BMC Helix Continuous Optimization - Dynatrace

"Moviri Integrator for BMC Helix Continuous Optimization – Dynatrace" is an additional component of BMC Helix Continuous Optimization product. It allows extracting data from Dynatrace.  Relevant capacity metrics are loaded into BMC Helix Continuous Optimization which provides advanced analytics over the extracted data.

The integration supports the extraction of both performance and configuration data.

The documentation is targeted at BMC Helix Continuous Optimization administrators who are in charge of configuring and monitoring the integration between BMC Helix Continuous Optimization and Dynatrace.

Collecting data by using the Dynatrace ETL

To collect data by using the Dynatrace ETL, do the following tasks:

I. Complete the preconfiguration tasks.

II. Configure the ETL.

III. Run the ETL.

IV. Verify data collection.


Step I. Complete the preconfiguration tasks


Step | Details

Check that the Dynatrace version is supported

  • Dynatrace SaaS/Managed
  • API version: v1

Generate an authorization token to access the Dynatrace API.

To generate the token:

  1. Log in to the Dynatrace environment.

  2. From the left bar, select Settings.

  3. Expand the Integration menu and select Dynatrace API.

    The only required permission is “Access problem and event feed, metrics, and topology”

  4. Enter a name for the token and then click Generate Token.

  5. Copy the generated token and use it in the ETL configuration.

Verify that the token is working

Execute the following command from a Linux console:

  • curl -H "Authorization: Api-Token <token>" https://<dynatrace-fqdn>/api/v1/config/clusterversion


An output similar to the following should be obtained:

{"version": "1.137.79.20180123-105448"}


Step II. Configure the ETL

A. Configuring the basic properties

Some of the basic properties display default values. You can modify these values if required.

To configure the basic properties:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click Add > Add ETL. The Add ETL page displays the configuration properties. You must configure properties in the following tabs: Run configuration, Entity catalog, and Dynatrace - Connection Parameters.

  3. On the Run Configuration tab, select Moviri - Dynatrace Extractor from the ETL Module list. The name of the ETL is displayed in the ETL task name field. You can edit this field to customize the name.



  4. Click the Entity catalog tab, and select one of the following options:
    • Shared Entity Catalog:

      • From the Sharing with Entity Catalog list, select the entity catalog name that is shared between ETLs.
    • Private Entity Catalog: Select if this is the only ETL that extracts data from the Dynatrace resources.
  5. Click the Dynatrace - Connection Parameters tab, and configure the following properties:

    Property | Description
    Dynatrace URL prefix | The Dynatrace URL. The URL must be in the form "https://{id}.live.dynatrace.com".
    Dynatrace API Token | The Dynatrace API token (generated in the prerequisites).
    Use HTTP Proxy to connect to Dynatrace | Select Yes if an HTTP proxy is required to connect to the Dynatrace API.
    Use HTTPS | Whether or not to use HTTPS to connect to the HTTP(S) proxy.
    HTTP Proxy Address | HTTP(S) proxy FQDN or IP address.
    HTTP Proxy Port | HTTP(S) proxy server port.
    HTTP Proxy Username | HTTP(S) proxy server username.
    HTTP Proxy Password | HTTP(S) proxy server password.

    A connectivity sketch that uses these parameters is shown after these configuration steps.

  6. Click the Dynatrace - Extraction tab, and configure the following properties:

    Property | Description
    Default Lastcounter (YYYY-MM-DD HH24:MI:SS) | Initial timestamp from which to extract data.
    Max Extraction Period (hours) | Maximum extraction period, in hours, for a single run (default is 96 hours / 4 days). The interaction with the lastcounter is illustrated in a sketch after these configuration steps.
    Batch Size | Number of entities that are extracted at the same time from Dynatrace.
    Data Resolution | Resolution at which data will be imported into TSCO. Possible values are:
    • 5 Minutes
    • 15 Minutes
    • 30 Minutes
    • 1 Hour
    Enable Support for Business Service View | Select if the Dynatrace ETL should be configured to support the Business Service View.
    Tag Type for Business Service View | Tag type used to import Service Pools for the Business Service View (default: AppTier).

  7. Click the Dynatrace - Filter tab, and configure the following properties:

    Property | Description
    Collect infrastructure metrics | If selected, the connector collects metrics at the host level (VM, Generic).
    Application Whitelist | Semicolon-separated list of Java regular expressions; only applications that match one of the expressions are extracted (see the filtering sketch after these configuration steps).
    Application Blacklist | Semicolon-separated list of Java regular expressions; applications that match one of the expressions are excluded from extraction.

The following image shows sample configuration values for the basic properties.

  8. (Optional) Override the default values of properties in the following tabs:

    Property | Description
    Module selection | Select one of the following options:
    • Based on datasource: This is the default selection.
    • Based on Open ETL template: Select if you want to use an open ETL template.
    Module description | A short description of the ETL module.
    Execute in simulation mode | By default, the ETL execution in simulation mode is selected to validate connectivity with the data source, and to ensure that the ETL does not have any configuration issues. In the simulation mode, the ETL does not load data into the database. This option is useful when you want to test a new ETL task. To run the ETL in the production mode, select No. BMC recommends that you run the ETL in the simulation mode after ETL configuration and then run it in the production mode.
    Property | Description
    Associate new entities to | Specify the domain to which you want to add the entities created by the ETL. Select one of the following options:
    • Existing domain: This option is selected by default. Select an existing domain from the Domain list. If the selected domain is already used by other hierarchy rules, select one of the following Domain conflict options:
      • Enrich domain tree: Select to create a new independent hierarchy rule for adding a new set of entities, relations, or both that are not defined by other ETLs.
      • ETL Migration: Select if the new ETL uses the same set of entities, relations, or both that are already defined by other ETLs.
    • New domain: Select a parent domain, and specify a name for your new domain.

    By default, a new domain with the same name as the ETL is created for each ETL. When the ETL is created, a new hierarchy rule with the same name as the ETL task is automatically created in the active state. If you specify a different domain for the ETL, the hierarchy rule is updated automatically.

    Property | Description
    Task group | Select a task group to classify the ETL.
    Running on scheduler | Select one of the following schedulers for running the ETL:
    • Primary Scheduler: Runs on the Application Server.
    • Generic Scheduler: Runs on a separate computer.
    • Remote: Runs on remote computers.
    Maximum execution time before warning | Indicates the number of hours, minutes, or days for which the ETL must run before generating warnings or alerts, if any.
    Frequency | Select one of the following frequencies to run the ETL:
    • Predefined: This is the default selection. Select a daily, weekly, or monthly frequency, and then select a time to start the ETL run accordingly.
    • Custom: Specify a custom frequency, select an appropriate unit of time, and then specify a day and a time to start the ETL run.

  9. Click Save.
    The ETL tasks page shows the details of the newly configured Dynatrace ETL.
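
The following sketches illustrate, outside the product, what some of the parameters above describe. They are minimal Python examples with invented values; the connector's own logic is internal to the ETL and may differ.

Connection parameters. Assuming the requests library is available, a proxied call to the Dynatrace API built from the URL prefix, API token, and HTTP(S) proxy settings would look roughly like this (all values are placeholders):

    import requests  # third-party HTTP client, used here only for illustration

    DYNATRACE_URL = "https://<id>.live.dynatrace.com"   # Dynatrace URL prefix
    API_TOKEN = "<token>"                               # Dynatrace API Token

    # HTTP(S) proxy built from the proxy address, port, username, and password.
    proxies = {"https": "http://<proxy-user>:<proxy-password>@<proxy-address>:<proxy-port>"}

    resp = requests.get(
        f"{DYNATRACE_URL}/api/v1/config/clusterversion",
        headers={"Authorization": f"Api-Token {API_TOKEN}"},
        proxies=proxies,
        timeout=30,
    )
    print(resp.status_code, resp.json())

Extraction window. Each run extracts data starting from the last counter (initially, Default Lastcounter) and covers at most Max Extraction Period hours in a single pass, so a large backlog is presumably caught up over successive runs. Illustration only:

    from datetime import datetime, timedelta

    lastcounter = datetime(2024, 1, 1, 0, 0, 0)   # example "Default Lastcounter"
    max_extraction_hours = 96                     # "Max Extraction Period (hours)", default

    window_start = lastcounter
    window_end = min(datetime.utcnow(), window_start + timedelta(hours=max_extraction_hours))
    print("extracting from", window_start, "to", window_end)

Application filters. The whitelist and blacklist are semicolon-separated lists of Java regular expressions. The sketch below uses Python's re module purely to show the filtering idea; the application names are invented, and whether the connector performs a full or partial match is not specified here:

    import re

    whitelist = "prod-.*;.*-critical"     # semicolon-separated patterns, as configured
    patterns = [re.compile(p) for p in whitelist.split(";") if p]

    applications = ["prod-web", "test-web", "billing-critical"]   # invented names
    selected = [a for a in applications if any(p.fullmatch(a) for p in patterns)]
    print(selected)   # ['prod-web', 'billing-critical']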

(Optional) B. Configuring the advanced properties

You can configure the advanced properties to change the way the ETL works or to collect additional metrics.

To configure the advanced properties:

  1. On the Add ETL page, click Advanced.
  2. Configure the following properties:

    Property | Description
    Run configuration name | Specify the name that you want to assign to this ETL task configuration. The default configuration name is displayed. You can use this name to differentiate between the run configuration settings of ETL tasks.
    Deploy status | Select the deploy status for the ETL task. For example, you can initially select Test and change it to Production after verifying that the ETL run results are as expected.
    Log level | Specify the level of details that you want to include in the ETL log file. Select one of the following options:
    • 1 - Light: Select to add the bare minimum activity logs to the log file.
    • 5 - Medium: Select to add the medium-detailed activity logs to the log file.
    • 10 - Verbose: Select to add detailed activity logs to the log file.

    Use log level 5 as a general practice. You can select log level 10 for debugging and troubleshooting purposes.

    Datasets | Specify the datasets that you want to add to the ETL run configuration. The ETL collects data of metrics that are associated with these datasets.
    1. Click Edit.
    2. Select one (click) or more (shift+click) datasets from the Available datasets list and click >> to move them to the Selected datasets list.
    3. Click Apply.

    The ETL collects data of metrics associated with the datasets that are available in the Selected datasets list.

    Property | Description
    Metric profile selection | Select the metric profile that the ETL must use. The ETL collects data for the group of metrics that is defined by the selected metric profile.
    • Use Global metric profile: This is selected by default. All the out-of-the-box ETLs use this profile.
    • Select a custom metric profile: Select the custom profile that you want to use from the Custom metric profile list. This list displays all the custom profiles that you have created.

    For more information about metric profiles, see Adding and managing metric profiles.

    Levels up to | Specify the metric level that defines the number of metrics that can be imported into the database. The load on the database increases or decreases depending on the selected metric level.

    Property | Description
    Empty dataset behavior | Specify the action for the loader if it encounters an empty dataset:
    • Warn: Generate a warning about loading an empty dataset.
    • Ignore: Ignore the empty dataset and continue parsing.
    ETL log file name | The name of the file that contains the ETL run log. The default value is: %BASE/log/%AYEAR%AMONTH%ADAY%AHOUR%MINUTE%TASKID
    Maximum number of rows for CSV output | A numeric value to limit the size of the output files.
    CSV loader output file name | The name of the file that is generated by the CSV loader. The default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID
    Capacity Optimization loader output file name | The name of the file that is generated by the BMC Helix Continuous Optimization loader. The default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID
    Detail mode | Specify whether you want to collect raw data in addition to the standard data. Select one of the following options:
    • Standard: Data will be stored in the database in different tables at the following time granularities: Detail (configurable, by default 5 minutes), Hourly, Daily, and Monthly.
    • Raw also: Data will be stored in the database in different tables at the following time granularities: Raw (as available from the original data source), Detail (configurable, by default 5 minutes), Hourly, Daily, and Monthly.
    • Raw only: Data will be stored in the database in a table only at Raw granularity (as available from the original data source).

    For more information, see Accessing data using public views.

    Remove domain suffix from datasource name (Only for systems) | Select True to remove the domain from the data source name. For example, server.domain.com will be saved as server. The default selection is False.
    Leave domain suffix to system name (Only for systems) | Select True to keep the domain in the system name. For example, server.domain.com will be saved as is. The default selection is False.
    Update grouping object definition (Only for systems) | Select True if you want the ETL to update the grouping object definition for a metric that is loaded by the ETL. The default selection is False.
    Skip entity creation (Only for ETL tasks sharing lookup with other tasks) | Select True if you do not want this ETL to create an entity and discard data from its data source for entities not found in Capacity Optimization. It uses one of the other ETLs that share a lookup to create a new entity. The default selection is False.
    Property | Description
    Hour mask | Specify a value to run the task only during particular hours within a day. For example, 0 – 23 or 1, 3, 5 – 12 (see the mask sketch after these steps).
    Day of week mask | Select the days so that the task can be run only on the selected days of the week. To avoid setting this filter, do not select any option for this field.
    Day of month mask | Specify a value to run the task only on the selected days of a month. For example, 5, 9, 18, 27 – 31.
    Apply mask validation | Select False to temporarily turn off the mask validation without removing any values. The default selection is True.
    Execute after time | Specify a value in the hours:minutes format (for example, 05:00 or 16:00) to wait before the task is run. The task run begins only after the specified time is elapsed.
    Enqueueable | Specify whether you want to ignore the next run command or run it after the current task. Select one of the following options:
    • False: Ignores the next run command when a particular task is already running. This is the default selection.
    • True: Starts the next run command immediately after the current running task is completed.

  3. Click Save.
    The ETL tasks page shows the details of the newly configured Dynatrace ETL.
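
For reference, the hour and day-of-month masks accept comma-separated values and ranges, as in the examples above (0 – 23 or 1, 3, 5 – 12). The following sketch shows how such a mask expands into individual values; it is illustrative only and is not the product's parsing code:

    def expand_mask(mask: str) -> list:
        """Expand a mask such as "1, 3, 5 - 12" into the list of covered values."""
        values = set()
        for part in mask.replace("–", "-").split(","):
            part = part.strip()
            if "-" in part:
                low, high = (int(x) for x in part.split("-"))
                values.update(range(low, high + 1))
            elif part:
                values.add(int(part))
        return sorted(values)

    print(expand_mask("1, 3, 5 - 12"))   # [1, 3, 5, 6, 7, 8, 9, 10, 11, 12]
    print(expand_mask("0 - 23"))         # every hour of the day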


Step III. Run the ETL

After you configure the ETL, you can run it to collect data. You can run the ETL in the following modes:

A. Simulation mode: Validates the connection to the data source but does not collect data. Use this mode when you run the ETL for the first time or after you make any changes to the ETL configuration.

B. Production mode: Collects data from the data source.

A. Running the ETL in the simulation mode

To run the ETL in the simulation mode:

  1. In the console, navigate to Administration > ETL & System Tasks, and select ETL tasks.
  2. On the ETL tasks page, click the ETL. The ETL details are displayed.



  3. In the Run configurations table, click Edit  to modify the ETL configuration settings.
  4. On the Run configuration tab, ensure that the Execute in simulation mode option is set to Yes, and click Save.
  5. Click Run active configuration. A confirmation message about the ETL run job submission is displayed.
  6. On the ETL tasks page, check the ETL run status in the Last exit column.
    OK: Indicates that the ETL ran without any error. You are ready to run the ETL in the production mode.
  7.  If the ETL run status is Warning, Error, or Failed:
    1. On the ETL tasks page, click  in the last column of the ETL name row.
    2. Check the log and reconfigure the ETL if required.
    3. Run the ETL again.
    4. Repeat these steps until the ETL run status changes to OK.

B. Running the ETL in the production mode

You can run the ETL manually when required or schedule it to run at a specified time.

Running the ETL manually

  1. On the ETL tasks page, click the ETL. The ETL details are displayed.
  2. In the Run configurations table, click Edit  to modify the ETL configuration settings. The Edit run configuration page is displayed.
  3. On the Run configuration tab, select No for the Execute in simulation mode option, and click Save.
  4. To run the ETL immediately, click Run active configuration. A confirmation message about the ETL run job submission is displayed.
    When the ETL is run, it collects data from the source and transfers it to the database.

Scheduling the ETL run

By default, the ETL is scheduled to run daily. You can customize this schedule by changing the frequency and period of running the ETL.

To configure the ETL run schedule:

  1. On the ETL tasks page, click the ETL, and click Edit.

  2. On the Edit task page, do the following, and click Save:

    • Specify a unique name and description for the ETL task.
    • In the Maximum execution time before warning field, specify the duration for which the ETL must run before generating warnings or alerts, if any.
    • Select a predefined or custom frequency for starting the ETL run. The default selection is Predefined.
    • Select the task group and the scheduler to which you want to assign the ETL task.
  3. Click Schedule. A message confirming the scheduling job submission is displayed.
    When the ETL runs as scheduled, it collects data from the source and transfers it to the database.


Step IV. Verify data collection

Verify that the ETL ran successfully and check whether the Dynatrace data is refreshed in the Workspace.

To verify whether the ETL ran successfully:

  1. In the console, click Administration > ETL and System Tasks > ETL tasks.
  2. In the Last exec time column corresponding to the ETL name, verify that the current date and time are displayed.
To verify that the Dynatrace data is refreshed:

  1. In the console, click Workspace.
  2. Expand (Domain name) > Systems > Dynatrace > Instances.
  3. In the left pane, verify that the hierarchy displays the new and updated Dynatrace instances.
  4. Click a Dynatrace entity, and click the Metrics tab in the right pane.
  5. Check if the Last Activity column in the Configuration metrics and Performance metrics tables displays the current date.


Dynatrace Workspace Entity Details

Entities

TSCO Entities | Dynatrace Entity
Virtual Machine (VMware, Hyper-V) / Generic | Hosts
Application Server Instance | Process Groups of JBoss, Tomcat and JVM
Web Server Instance | Process Groups of Apache HTTPD and Nginx
Database Server Instance | Process Groups of MySQL and Postgres Databases
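
Restated as a simple lookup (the names are taken verbatim from the table above; the literal Dynatrace technology identifiers used by the connector may differ):

    # Dynatrace process-group technology -> TSCO entity type, per the table above
    PROCESS_GROUP_TO_TSCO_ENTITY = {
        "JBoss": "Application Server Instance",
        "Tomcat": "Application Server Instance",
        "JVM": "Application Server Instance",
        "Apache HTTPD": "Web Server Instance",
        "Nginx": "Web Server Instance",
        "MySQL": "Database Server Instance",
        "Postgres": "Database Server Instance",
    }
    # Dynatrace Hosts are imported as Virtual Machine (VMware, Hyper-V) or Generic systems.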

Hierarchy

The connector replicates relationships and logical dependencies among these entities. In particular, all the available applications are imported with their services and hosts. Additional hosts and services that are not part of a specific application are imported in a separate domain tree.

Configuration and Performance metrics mapping


Dynatrace entity | TSCO entity | PERF/CONF | Dynatrace Metric | TSCO Metric | Conversion factor

Business Driver

Key Request | Business Driver | Perf | com.dynatrace.builtin:servicemethod.responsetime | EVENT_RESPONSE_TIME | 0.000001
Key Request | Business Driver | Perf | com.dynatrace.builtin:servicemethod.requestspermin | EVENT_RATE | 1.0 / 60.0
Hosts (Generic, VMware Virtual Machine, Hyper-V Virtual Machine)

Host | Generic/VM | Perf | com.dynatrace.builtin:host.cpu.system + com.dynatrace.builtin:host.cpu.other + com.dynatrace.builtin:host.cpu.user | CPU_UTIL | 0.001
Host | Generic/VM | Perf | com.dynatrace.builtin:host.cpu.idle | CPU_UTIL_IDLE | 0.001
Host | Generic/VM | Perf | com.dynatrace.builtin:host.cpu.system + com.dynatrace.builtin:host.cpu.other | CPU_UTIL_SYSTEM | 0.001
Host | Generic/VM | Perf | com.dynatrace.builtin:host.cpu.user | CPU_UTIL_USER | 0.001
Host | Generic/VM | Perf | com.dynatrace.builtin:host.cpu.iowait | CPU_UTIL_WAIO | 0.001
Host | Generic/VM | Perf | com.dynatrace.builtin:host.cpu.steal | CPU_UTIL_OVERHEAD | 0.001
Host | Generic/VM | Perf | (com.dynatrace.builtin:host.cpu.system + com.dynatrace.builtin:host.cpu.other + com.dynatrace.builtin:host.cpu.user) * cpuCores | CPU_USED_NUM
Host | Generic/VM | Perf | com.dynatrace.builtin:host.mem.available | MEM_FREE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.mem.availablepercentage | MEM_UTIL
Host | Generic/VM | Perf | com.dynatrace.builtin:host.mem.used | MEM_USED
Host | Generic/VM | Perf | SUM(com.dynatrace.builtin:host.nic.bytesreceived) | NET_IN_BYTE_RATE
Host | Generic/VM | Perf | SUM(com.dynatrace.builtin:host.nic.bytessent) | NET_OUT_BYTE_RATE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.nic.bytesreceived | BYIF_IN_BYTE_RATE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.nic.bytessent | BYIF_OUT_BYTE_RATE
Host | Generic/VM | Perf | SUM(com.dynatrace.builtin:host.nic.packetsreceived) | NET_IN_PKT_RATE
Host | Generic/VM | Perf | SUM(com.dynatrace.builtin:host.nic.packetssent) | NET_OUT_PKT_RATE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.nic.packetsreceived | BYIF_IN_PKT_RATE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.nic.packetsreceivederrors + com.dynatrace.builtin:host.nic.packetsreceiveddropped | BYIF_IN_PKT_ERROR_RATE
Host | Generic/VM | Perf | SUM(com.dynatrace.builtin:host.nic.packetsreceivederrors + com.dynatrace.builtin:host.nic.packetsreceiveddropped) | NET_IN_PKT_ERROR_RATE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.nic.packetssent | BYIF_OUT_PKT_RATE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.nic.packetssenterrors + com.dynatrace.builtin:host.nic.packetssentdropped | BYIF_OUT_PKT_ERROR_RATE
Host | Generic/VM | Perf | SUM(com.dynatrace.builtin:host.nic.packetssenterrors + com.dynatrace.builtin:host.nic.packetssentdropped) | NET_OUT_PKT_ERROR_RATE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.disk.availablespace | BYFS_FREE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.disk.bytesread | BYFS_READ_BYTE_RATE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.disk.byteswritten | BYFS_WRITE_BYTE_RATE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.disk.freespacepercentage | BYFS_USED_SPACE_PCT
Host | Generic/VM | Perf | com.dynatrace.builtin:host.disk.queuelength | BYDISK_QUEUE_SIZE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.disk.readoperations | BYFS_READ_RATE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.disk.writeoperations | BYFS_WRITE_RATE
Host | Generic/VM | Perf | com.dynatrace.builtin:host.disk.usedspace | BYFS_USED
Host | Generic/VM | Conf | com.dynatrace.builtin:host.disk.availablespace + com.dynatrace.builtin:host.disk.usedspace | BYFS_SIZE
Host | Generic/VM | Conf | SUM(com.dynatrace.builtin:host.disk.availablespace + com.dynatrace.builtin:host.disk.usedspace) | TOTAL_FS_SIZE
Host | Generic/VM | Conf | SUM(com.dynatrace.builtin:host.disk.usedspace) | TOTAL_FS_USED
Host | Generic/VM | Conf | com.dynatrace.builtin:host.mem.available + com.dynatrace.builtin:host.mem.used | TOTAL_REAL_MEM
Host | Generic/VM | Conf | osType | OS_FAMILY
Host | Generic/VM | Conf | osVersion | OS_TYPE
Host | Generic/VM | Conf | ipAddresses | NET_IP_ADDRESSES
Host | Generic/VM | Conf | cpuCores | CPU_NUM
Host | Generic/VM | Conf | logicalCpuCores | CPU_CORES_PER_SOCKET
Host | Generic/VM | Conf | hypervisorType | HW_VENDOR
Host | Generic/VM | Conf | bitness | CPU_FAMILY
Services

Tomcat / JBoss | Application Server Instance | Perf | com.dynatrace.builtin:service.requestspermin | BYENDPOINT_TRANSACTION_RATE | 1 / 60
Tomcat / JBoss | Application Server Instance | Perf | com.dynatrace.builtin:service.responsetime | BYENDPOINT_RESPONSE_TIME | 1 / 60
Httpd / Nginx | Web Server Instance | Perf | com.dynatrace.builtin:service.requestspermin | BYENDPOINT_TRANSACTION_RATE | 1 / 60
Httpd / Nginx | Web Server Instance | Perf | com.dynatrace.builtin:service.responsetime | BYENDPOINT_RESPONSE_TIME | 1 / 60
Process Groups

Tomcat / JBoss | Application Server Instance | Perf | com.dynatrace.builtin:pgi.cpu.usage | CPU_UTIL | 0.001
Httpd / Nginx | Web Server Instance | Perf | com.dynatrace.builtin:pgi.cpu.usage | CPU_UTIL | 0.001
Postgres / MySQL | Database Server Instance | Perf | com.dynatrace.builtin:pgi.cpu.usage | CPU_UTIL | 0.001
Tomcat / JBoss | Application Server Instance | Perf | com.dynatrace.builtin:pgi.mem.usage | MEM_USED
Httpd / Nginx | Web Server Instance | Perf | com.dynatrace.builtin:pgi.mem.usage | MEM_USED
Postgres / MySQL | Database Server Instance | Perf | com.dynatrace.builtin:pgi.mem.usage | MEM_USED
Tomcat / JBoss | Application Server Instance | Perf | com.dynatrace.builtin:pgi.nic.bytesreceived | NET_IN_BYTE_RATE
Httpd / Nginx | Web Server Instance | Perf | com.dynatrace.builtin:pgi.nic.bytesreceived | NET_IN_BYTE_RATE
Postgres / MySQL | Database Server Instance | Perf | com.dynatrace.builtin:pgi.nic.bytesreceived | NET_IN_BYTE_RATE
Tomcat / JBoss | Application Server Instance | Perf | com.dynatrace.builtin:pgi.nic.bytessent | NET_OUT_BYTE_RATE
Httpd / Nginx | Web Server Instance | Perf | com.dynatrace.builtin:pgi.nic.bytessent | NET_OUT_BYTE_RATE
Postgres / MySQL | Database Server Instance | Perf | com.dynatrace.builtin:pgi.nic.bytessent | NET_OUT_BYTE_RATE
Tomcat / JBoss | Application Server Instance | Conf | technologyType.type + technologyType.version | VERSION
Httpd / Nginx | Web Server Instance | Conf | technologyType.type + technologyType.version | VERSION
Postgres / MySQL | Database Server Instance | Conf | technologyType.type + technologyType.version | VERSION

Databases
MySQL | Database Server Instance | Perf | ruxit.python.mysql:innodb_buffer_pool_size | DB_BUFFER_POOL_SIZE
MySQL | Database Server Instance | Perf | ruxit.python.mysql:innodb_buffer_pool_size - ((ruxit.python.mysql:innodb_buffer_pool_size / ruxit.python.mysql:innodb_buffer_pool_pages_total) * ruxit.python.mysql:innodb_buffer_pool_pages_free) | DB_BUFFER_POOL_USED
MySQL | Database Server Instance | Perf | ((ruxit.python.mysql:innodb_buffer_pool_size / ruxit.python.mysql:innodb_buffer_pool_pages_total) * ruxit.python.mysql:innodb_buffer_pool_pages_free) / ruxit.python.mysql:innodb_buffer_pool_size | DB_BUFFER_POOL_UTIL
MySQL | Database Server Instance | Perf | ruxit.python.mysql:innodb_data_reads | DB_PHYSICAL_READS
MySQL | Database Server Instance | Perf | ruxit.python.mysql:innodb_data_writes | DB_PHYSICAL_WRITES
MySQL | Database Server Instance | Perf | ruxit.python.mysql:queries | DB_QUERY_RATE
MySQL | Database Server Instance | Perf | ruxit.python.mysql:slow_queries_rate | DB_SLOW_QUERY_PCT
MySQL | Database Server Instance | Perf | ruxit.python.mysql:threads_connected | DB_SESSIONS
MySQL | Database Server Instance | Perf | ruxit.python.mysql:threads_running | DB_ACTIVE_SESSIONS
Postgres | Database Server Instance | Perf | ruxit.python.postgresql:cache_hit_ratio | DB_BUFFER_CACHE_HIT_RATIO
Postgres | Database Server Instance | Perf | ruxit.python.postgresql:idx_tup_fetch + ruxit.python.postgresql:seq_tup_read | DB_SELECT_RATE
Postgres | Database Server Instance | Perf | ruxit.python.postgresql:tup_inserted | DB_ROWS_UPDATED_RATE
Postgres | Database Server Instance | Perf | ruxit.python.postgresql:tup_updated | DB_ROWS_INSERTED_RATE
Postgres | Database Server Instance | Perf | ruxit.python.postgresql:tup_deleted | DB_ROWS_DELETED_RATE
Postgres | Database Server Instance | Perf | ruxit.python.postgresql:xact_commit | DB_COMMIT_RATE
MySQL / Postgres | Database Server Instance | Conf | softwareTechnologies.type | DB_PRODUCT_NAME
MySQL / Postgres | Database Server Instance | Conf | softwareTechnologies.version | DB_PRODUCT_VERSION
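
To read the Conversion factor column: the TSCO metric is obtained by evaluating the Dynatrace metric (or expression) and multiplying it by the factor, when one is given. A small worked example using two rows from the table above (the raw values are invented):

    # Host CPU utilization: sum of the three CPU components, scaled by the 0.001 factor.
    cpu_system, cpu_other, cpu_user = 1200.0, 300.0, 2500.0   # invented raw values
    CPU_UTIL = (cpu_system + cpu_other + cpu_user) * 0.001     # -> 4.0

    # Key request rate: requests per minute converted to a per-second rate (factor 1.0 / 60.0).
    servicemethod_requestspermin = 120.0                       # invented raw value
    EVENT_RATE = servicemethod_requestspermin * (1.0 / 60.0)   # -> 2.0

    print(CPU_UTIL, EVENT_RATE)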
 
