
TrueSight Capacity Optimization supports integration with Amazon Web Services (AWS) to discover and import information for capacity planning through the Amazon Web Services - AWS API Extractor.

The extractor makes API calls for the following Amazon services: EC2, Auto Scaling, and CloudWatch.

The integration uses the AWS Java SDK version 1.9.39. 

Additional resource

  Release Notes: AWS SDK for Java 1.9.39

Prerequisites

  1. Create an IAM user account. You need the access key details of this account during ETL configuration. The access keys include a key ID and a secret key. The AWS SDK requires these keys to automatically sign the requests that the ETL sends to AWS. 
    For more information about managing access keys for IAM users, see Managing Access Keys for IAM Users.
  2. Assign the required permissions to the newly created IAM user account. For a sample permissions JSON file, see Required permissions.

Integration steps

To integrate TrueSight Capacity Optimization with the Amazon Web Services - AWS API Extractor, perform the following steps:

  1. Navigate to Administration > ETL & SYSTEM TASKS > ETL tasks.
  2. On the ETL tasks page, under the Last run tab, click Add > Add ETL.

  3. In the Add ETL page, set values for the following properties under each expandable tab.


    Note

    Basic properties are displayed by default in the Add ETL page. These are the most common properties that you can set for an ETL, and it is acceptable to leave the default selections for each as is.

    Basic properties

    Property Description
    Run configuration
    ETL task name By default, this field is populated based on the selected ETL module. You can specify a different name for the ETL Task. Duplicate names are allowed.
    Run configuration name Default name is already filled out for you. This field is used to differentiate different configurations you can specify for the ETL task. You can then run the ETL task based on it.
    Deploy status
    You can select Production or Test to mark the ETL task. For example, you can start by marking the task as Test and change it to Production after you have verified the results.
    Description (Optional) Enter a brief description for this ETL.
    Log level Select how detailed you want the ETL log to be. The log includes Error, Warning and Info type of log information.
    • 1 - Light: Add bare minimum activity logs to the log file.
    • 5 - Medium: Add medium-detailed activity logs to the log file.
    • 10 - Verbose: Add detailed activity logs to the log file.

    Note: Log levels 5 and 10 are typically used for debugging or troubleshooting ETL issues. Using log level 5 is general practice; however, you can choose level 10 for a higher level of detail while troubleshooting.

    Execute in simulation mode Select Yes if you want to validate the connectivity between the ETL engine and the target, and to ensure that the ETL does not have any other configuration issues.
    When set to Yes, the ETL will not store actual data into the data warehouse. This option is useful while testing a new ETL task.
    ETL module Select Amazon Web Services - AWS API Extractor.
    Module description A link in the user interface that points you to this technical document for the ETL.
    Entity catalog
    Sharing status Select any one:
    • Shared entity catalog: Select this option if more than one ETL extracts data from the given set of resources (for example: CMDB).
      • Sharing with Entity Catalog: Select an entity catalog from the drop-down list.
    • Private entity catalog: Select this option if this is the only ETL that extracts data from the given set of resources.
    Object relationships
    Associate new entities to

    Specify the domain where you want to add the entities created by the ETL. You can select an existing domain or create a new one.

    By default, a new domain is created for each ETL, with the same name of the extractor module. As the ETL is created, a new hierarchy rule with the same name of the ETL task is automatically created, with status "active"; if you update the ETL specifying a different domain, the hierarchy rule will be updated automatically.

    Select any one of the following options:

    • New domain: Create a new domain. Specify the following properties:
      • Parent: Select a parent domain for your new domain from the domain selector control.
      • Name: Specify a name for your new domain.
    • Existing domain: Select an existing domain. Make a selection for the following property:
      • Domain: Select an existing domain from the domain selector control.
      • Domain conflict: If the selected domain is already used by other hierarchy rules, you will be asked to select one of the following options to resolve the conflict:
        • Enrich domain tree: Create a new, independent hierarchy rule to add a new set of entities and/or relations that are not defined by other ETLs. For example, this ETL is managing storage, while others are managing servers.
        • ETL Migration: BMC recommends this configuration if the new ETL manages the same set of entities and/or relations (already defined in current domain tree). Typical use case is the migration from one or more ETLs to a new ETL instance. It will stop all relations imported by ETL instances and restore only valid relations after first run. This configuration reuses existing hierarchy rules to correctly manage relation updates.
    Amazon Web Services Connection
    Access Key ID Type the access key ID for the request made to AWS. For example, a typical Access Key ID might look like this: AMAZONACSKEYID007EXAMPLE.
    Secret Access Key Type the secret access key associated with the Access Key ID. For example, a typical Secret Access Key might look like this: wSecRetAcsKeYY712/K9POTUS/BCZthIZIzprvtEXAMPLEKEY .
    Use Proxy

    If you have configured a proxy server to route the internet traffic to and from your AWS environment, you can configure the ETL to connect with your environment via the proxy server. To provide proxy details, select Yes. By default, No is selected.

    If you selected Yes, provide the following information in the respective field:

    • Proxy server host
    • Proxy server port
    • Proxy scheme: Select Https or Http. By default, Https is selected.
    • Is authentication required?: If the proxy server requires username and password for authentication, select Yes. By default, Yes is selected.
      If the proxy server does not require authentication, select No, and skip the following sub-bullets.
      If you selected Yes, specify the following information:
      • Proxy server username
      • Proxy server password
    ETL task properties
    Task group Select a task group to classify this ETL under. Assigning the ETL to a task group is optional.
    Running on scheduler Select the scheduler over which you want to run the ETL. The type of schedulers available are:
    • Primary Scheduler: Runs on the Application Server (AS)
    • Generic Scheduler: Runs on a separate machine
    • Remote: Runs on different remote machines.
    Maximum execution time before warning The number of hours, minutes, or days for which the ETL can run before warnings or alerts, if any, are generated.
    Frequency Select the frequency for ETL execution. Available options are:
    • Predefined: Select a Predefined frequency from Each Day, Each Week, or Each Month.
    • Custom: Enter a custom frequency (time interval) as the number of minutes, hours, days, or weeks to run the ETL in. This selection adds the Custom start timestamp property to the ETL configuration (see below).
    Start timestamp: hour\minute (Applies to Predefined frequency) The HH:MM start timestamp to add to the ETL execution running on a Predefined frequency.
    Custom start timestamp Select a yyyy-mm-dd hh:mm timestamp to add to the ETL execution running on a Custom frequency. (Applicable if you selected Custom under Frequency).

    Note

    To view or configure Advanced properties, click Advanced. You do not need to set or modify these properties unless you want to change the way the ETL works. These properties are for advanced users and scenarios only.

    Advanced properties

    Property Description
    Run configuration
    Datasets

    Enables you to select or deselect the metric groups (datasets) for which data will be populated. The Amazon Web Services - AWS API Extractor allows you to select only from the given list of datasets; you cannot add other datasets to the run configuration of the ETL.

    1. Click Edit.
    2. Select one (click) or more (Shift+click) datasets from Available datasets and click >> to move them to Selected datasets.
    3. Click Apply.
    Collection level
    Metric profile selection

    Select any one:

    • Use Global metric profile: Select this option to use the out-of-the-box global profile that is available on the Metric profiles page. By default, all ETL modules use this profile.
    • Select a custom metric profile: Select any metric profile that you added on the Add metric profile page (Administration > DATAWAREHOUSE > Metric profiles).

    For more information, see Metric profiles.

    Levels up to

    The metric level defines the number of metrics imported into the data warehouse. Increasing the level adds load to the data warehouse, while decreasing it reduces the number of imported metrics.

    Choose the metric level to apply on selected metrics:

    • [1] Essential
    • [2] Basic
    • [3] Standard
    • [4] Extended

    For more information, see Aging Class mapping.

    Amazon Web Services Connection
    Instance type definition JSON file path The path where you saved the JSON file that contains the instance type configuration metrics. For more information, see the section about collecting data for additional instance type configuration metrics in Collecting additional AWS data for custom metrics and instance types.
    Additional CloudWatch metrics JSON file path The path where you saved the JSON files that contain the additional metrics. For more information, see the section about collecting additional CloudWatch metrics in Collecting additional AWS data for custom metrics and instance types.
    Additional properties
    List of properties

    Additional properties can be specified for this ETL that act as user inputs during execution. You can specify values for these properties either at this time, or from the "You can manually edit ETL properties from this page" link that is displayed for the ETL in view mode.

    1. Click Add.
    2. Add an additional property in the etl.additional.prop.n box.
    3. Click Apply.
      Repeat this task to add more properties.
    Loader configuration
    Empty dataset behavior Choose one of the following actions if the loader encounters an empty dataset:
    • Warn: Warn about loading an empty dataset.
    • Ignore: Ignore the empty dataset and continue parsing.
    ETL log file name Name of the file that contains the ETL execution log; the default value is: %BASE/log/%AYEAR%AMONTH%ADAY%AHOUR%MINUTE%TASKID
    Maximum number of rows for CSV output A number which limits the size of the output files.
    CSV loader output file name Name of the file generated by the CSV loader; the default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID
    Capacity Optimization loader output file name Name of the file generated by the Capacity Optimization loader; the default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID
    Detail mode Select the level of detail:
    • Standard: Data will be stored on the database in different tables at the following time granularities: Detail (configurable, by default: 5 minutes), Hourly, Daily, Monthly.
    • Raw also: Data will be stored on the database in different tables at the following time granularities: Raw (as available from the original data source), Detail (configurable, by default: 5 minutes), Hourly, Daily, Monthly.
    • Raw only: Data will be stored on the database in a table only at Raw granularity (as available from the original data source).

    For more information, see Accessing data using public views and Sizing and scalability considerations.

    Reduce priority

    Select either Normal or High.

    Remove domain suffix from datasource name (Only for systems)  If set to True, the domain name is removed from the data source name. For example, server.domain.com will be saved as server.
    Leave domain suffix to system name (Only for systems) If set to True, the domain name is maintained in the system name. For example, server.domain.com will be saved as such.
    Update grouping object definition (Only for systems) If set to True, the ETL will be allowed to update the grouping object definition for a metric loaded by an ETL.
    Skip entity creation (Only for ETL tasks sharing lookup with other tasks) If set to True, this ETL does not create an entity, and discards data from its data source for entities not found in Capacity Optimization. It uses one of the other ETLs that share lookup to create the new entity.
    Scheduling options
    Hour mask Specify a value to execute the task only during particular hours within the day. For example, 0 – 23 or 1,3,5 – 12.
    Day of week mask Select the days so that the task can be executed only during the selected days of the week. To avoid setting this filter, do not select any option for this field.
    Day of month mask Specify a value to execute the task only during particular days within a month. For example, 5, 9, 18, 27 – 31.
    Apply mask validation By default this property is set to True. Set it to False if you want to disable the preceding Scheduling options that you specified. Setting it to False is useful if you want to temporarily turn off the mask validation without removing any values.
    Execute after time Specify a value in the hours:minutes format (for example, 05:00 or 16:00) to wait before the task must be executed. This means that once the task is scheduled, the task execution starts only after the specified time passes.
    Enqueueable Select one of the following options:
    • False (Default): If a new execution command arrives while the task is already running, the new command is ignored.
    • True: If a new execution command arrives while the task is already running, the new command is placed in a queue and is executed as soon as the current execution ends.
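The Hour mask and Day of month mask properties above share a simple syntax: comma-separated single values and dash ranges (for example, 0 – 23 or 1,3,5 – 12). As an illustration only (this is not the product's actual parser, whose behavior may differ), expanding such a mask can be sketched in Python:

```python
def expand_mask(mask):
    """Expand a scheduling mask such as "1,3,5-12" into a sorted list of
    integers. Illustrative sketch of the documented syntax only."""
    values = set()
    # Normalize whitespace and the en dashes that appear in the examples.
    for part in mask.replace(" ", "").replace("\u2013", "-").split(","):
        if "-" in part:
            start, end = part.split("-")
            values.update(range(int(start), int(end) + 1))
        else:
            values.add(int(part))
    return sorted(values)
```

For example, `expand_mask("1,3,5 - 12")` yields 1, 3, and every hour from 5 through 12.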
  4. Click Save.
    You return to the Last run tab under the ETL tasks page.
  5. Validate the results in simulation mode: In the ETL tasks table under ETL tasks > Last run, locate your ETL (ETL task name) and click the run icon to run the ETL.
    After you run the ETL, the Last exit column in the ETL tasks table will display one of the following values:
    • OK: The ETL executed without any error in simulation mode.
    • WARNING: The ETL execution returned some warnings in simulation mode. Check the ETL log.
    • ERROR: The ETL execution returned errors and was unsuccessful. Edit the active Run configuration and try again.
  6. Switch the ETL to production mode: To do this, perform the following task:
    1. In the ETL tasks table under ETL tasks > Last run, click the ETL under the Name column.
    2. In the Run configurations table in the ETL details page, click the edit icon to edit the active run configuration.
    3. In the Edit run configuration page, navigate to the Run configuration expandable tab and set Execute in simulation mode to No.
    4. Click Save.
  7. Locate the ETL in the ETL tasks table and click the run icon to run it, or schedule an ETL run.
    After you run the ETL, or schedule the ETL for a run, it extracts the data from the source and transfers it to the Capacity Optimization database.

Required permissions

For the ETL to be able to extract Amazon Web Services data, ensure that the access key ID and secret access key that you specify in the ETL integration steps belong to a user account with appropriate permissions in Amazon Web Services. To grant the required permissions, use a policy JSON file based on the following example:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1484736991000",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:GetMetricData",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:ListMetrics",
                "ec2:DescribeVolumes",
                "ec2:DescribeHosts",
                "ec2:DescribeRegions",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInstances",
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeSnapshots",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribePolicies",
                "autoscaling:DescribeLaunchConfigurations"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
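Before attaching the policy, it can be useful to sanity-check the file. The sketch below is illustrative only (the required-action list mirrors a subset of the sample above, not an authoritative minimum); it parses the policy with Python's standard json module, which also catches syntax errors such as trailing commas:

```python
import json

# Subset of the actions granted by the sample policy above (illustrative).
REQUIRED_ACTIONS = {
    "cloudwatch:GetMetricData",
    "cloudwatch:GetMetricStatistics",
    "cloudwatch:ListMetrics",
    "ec2:DescribeVolumes",
    "ec2:DescribeInstances",
    "autoscaling:DescribeAutoScalingGroups",
}

def missing_actions(policy_text):
    """Return the required actions not granted by any Allow statement."""
    policy = json.loads(policy_text)  # raises on invalid JSON
    granted = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") == "Allow":
            actions = stmt.get("Action", [])
            if isinstance(actions, str):  # "Action" may be a single string
                actions = [actions]
            granted.update(actions)
    return sorted(REQUIRED_ACTIONS - granted)
```

An empty return value means every checked action is granted; otherwise the result lists what is missing.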

For more information on required permissions for the three monitored AWS services, see:

  • Amazon EC2:  http://docs.aws.amazon.com/IAM/latest/UserGuide/list_ec2.html
  • Amazon CloudWatch:  http://docs.aws.amazon.com/IAM/latest/UserGuide/list_cloudwatch.html
  • Amazon Auto Scaling:  http://docs.aws.amazon.com/IAM/latest/UserGuide/list_autoscaling.html

For information about IAM policies, see  http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html

Entity list

With the extractor, you can obtain the following AWS entities:

  • Cloud
  • Auto Scale Group
  • Virtual Machine
  • Storage Volume

Entity relationship

Parent entity | Child entity | Relationship type | Description
Cloud - AWS (cl:aws) | Virtual Machine - AWS (gm:aws) | CL_CONTAINS_GM |
Cloud - AWS (cl:aws) | Auto Scale Group - AWS (asg:aws) | CL_CONTAINS_ASG |
Virtual Machine - AWS (gm:aws) | Storage Volume - AWS (volume:aws) | GM_USES_VOLUME |
Cloud - AWS (cl:aws) | Storage Volume - AWS (volume:aws) | CL_CONTAINS_VOLUME | See the note below.
Auto Scale Group - AWS (asg:aws) | Virtual Machine - AWS (gm:aws) | ASG_CONTAINS_GM | Only for Auto Scale Groups with a manual scaling policy (not dynamic).

Note: The ETL imports a volume (CL_CONTAINS_VOLUME) when any one of these conditions is true:

  1. The volume is standalone.
  2. The volume is not flagged to be destroyed when its instance is deleted.
  3. The volume is related to an instance that is not involved in an Auto Scaling group.
  4. The volume is related to an instance that is part of a static Auto Scaling group.

Lookup information

Entity type | Strong lookup fields | Weak lookup fields
Storage Volume - AWS | AWS_ID | Not applicable
Virtual Machine - AWS | AWS_ID | Not applicable
Cloud - AWS | AWS_NAME | Not applicable
Auto Scale Group - AWS | AWS_ARN | Not applicable


Metric mapping

Volume metrics

TrueSight Capacity Optimization metric | Amazon Web Services metric | Formula
ST_VOLUME_TRANSFER_READ_BYTE_RATE | VolumeReadBytes | Sum[VolumeReadBytes]/duration
ST_VOLUME_TRANSFER_WRITE_BYTE_RATE | VolumeWriteBytes | Sum[VolumeWriteBytes]/duration
ST_VOLUME_TRANSFER_BYTE_RATE | Derived | (Sum[VolumeReadBytes] + Sum[VolumeWriteBytes])/duration
ST_VOLUME_IO_READ_RATE | VolumeReadOps | Sum[VolumeReadOps]/duration
ST_VOLUME_IO_WRITE_RATE | VolumeWriteOps | Sum[VolumeWriteOps]/duration
ST_VOLUME_IO_RATE | Derived | (Sum[VolumeReadOps] + Sum[VolumeWriteOps])/duration
ST_VOLUME_READ_TIME | VolumeTotalReadTime | Average[VolumeTotalReadTime]
ST_VOLUME_WRITE_TIME | VolumeTotalWriteTime | Average[VolumeTotalWriteTime]
ST_VOLUME_QUEUE_LENGTH | VolumeQueueLength | Average[VolumeQueueLength]
ST_VOLUME_IDLE_PCT | VolumeIdleTime | Sum[VolumeIdleTime]/duration
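Most of the volume rate metrics follow the Sum[metric]/duration pattern: the CloudWatch samples within a collection interval are summed and divided by the interval length in seconds. A minimal sketch with hypothetical sample values (not part of the product):

```python
def rate_from_samples(samples, duration_seconds):
    """Sum[metric]/duration, e.g. ST_VOLUME_TRANSFER_READ_BYTE_RATE
    from VolumeReadBytes samples over the collection interval."""
    return sum(samples) / duration_seconds

def derived_rate(read_samples, write_samples, duration_seconds):
    """Derived metrics such as ST_VOLUME_TRANSFER_BYTE_RATE combine
    the read and write sums over the same interval."""
    return (sum(read_samples) + sum(write_samples)) / duration_seconds
```

For example, two 300-second CloudWatch datapoints of 3000 bytes each over a 600-second interval yield a read rate of 10 bytes per second.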

Instance and Autoscale metrics

TrueSight Capacity Optimization metric | Amazon Web Services metric | Formula
CPU_UTIL | CPUUtilization | Average[CPUUtilization]/100
NET_IN_BYTE_RATE | NetworkIn | Sum[NetworkIn]/duration
NET_OUT_BYTE_RATE | NetworkOut | Sum[NetworkOut]/duration
NET_BYTE_RATE | Derived | (Sum[NetworkOut] + Sum[NetworkIn])/duration
DISK_READ_RATE | DiskReadBytes | Sum[DiskReadBytes]/duration
DISK_WRITE_RATE | DiskWriteBytes | Sum[DiskWriteBytes]/duration
DISK_TRANSFER_RATE | Derived | (Sum[DiskReadBytes] + Sum[DiskWriteBytes])/duration
DISK_IO_READ_RATE | DiskReadOps | Sum[DiskReadOps]/duration
DISK_IO_WRITE_RATE | DiskWriteOps | Sum[DiskWriteOps]/duration
DISK_IO_RATE | Derived | (Sum[DiskReadOps] + Sum[DiskWriteOps])/duration
GM_ON_NUM (Only for Autoscale) | GroupInServiceInstances | Maximum[GroupInServiceInstances]
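Two aggregations above differ from the sum-over-duration pattern: CPU_UTIL averages the CloudWatch percentage samples and rescales them to a 0-1 fraction, and GM_ON_NUM takes the maximum in-service instance count. A sketch with hypothetical sample values:

```python
def cpu_util(cpu_percent_samples):
    """CPU_UTIL = Average[CPUUtilization]/100: CloudWatch reports a
    percentage; the result is stored as a 0-1 utilization fraction."""
    return (sum(cpu_percent_samples) / len(cpu_percent_samples)) / 100.0

def gm_on_num(in_service_samples):
    """GM_ON_NUM = Maximum[GroupInServiceInstances] (Auto Scale Groups
    only): the peak number of in-service instances in the interval."""
    return max(in_service_samples)
```

For example, CPUUtilization samples of 50% and 70% map to a CPU_UTIL of 0.6.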

Metrics collected for additional Amazon Web Services monitoring scripts

These metrics will be available only when the monitoring scripts for Amazon EC2 instances are executed. The mapping to TrueSight Capacity Optimization metrics will work only if the unit of measure for the value is not modified. Refer to the table below for details.

TrueSight Capacity Optimization metric | Amazon Web Services metric | Unit of measure | Formula | System
MEM_REAL_UTIL | MemoryUtilization | Percentage | (MemoryUtilization)/100 | Windows / Linux
MEM_REAL_USED | MemoryUsed | Megabyte | MemoryUsed * 1024 * 1024 | Windows / Linux
SWAP_SPACE_UTIL | SwapUtilization | Percentage | (SwapUtilization)/100 | Linux
SWAP_SPACE_USED | SwapUsed | Megabyte | SwapUsed * 1024 * 1024 | Linux
BYFS_USED_SPACE_PCT | DiskSpaceUtilization | Percentage | (DiskSpaceUtilization)/100 | Linux
BYFS_USED | DiskSpaceUsed | Gigabyte | DiskSpaceUsed * 1024 * 1024 * 1024 | Linux
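The formulas in this table are plain unit conversions: percentages become 0-1 fractions, and sizes reported by the monitoring scripts in megabytes or gigabytes become bytes. For example:

```python
def to_fraction(percent):
    """MemoryUtilization, SwapUtilization, DiskSpaceUtilization:
    percent -> 0-1 fraction."""
    return percent / 100.0

def megabytes_to_bytes(mb):
    """MemoryUsed and SwapUsed are reported in megabytes."""
    return mb * 1024 * 1024

def gigabytes_to_bytes(gb):
    """DiskSpaceUsed is reported in gigabytes."""
    return gb * 1024 * 1024 * 1024
```

A MemoryUsed value of 2 MB is stored as 2097152 bytes, matching the MemoryUsed * 1024 * 1024 formula above.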

To collect additional data, see Collecting additional AWS data for custom metrics and instance types.