
BMC TrueSight Capacity Optimization 10.3 supports integration with Amazon Web Services (AWS) through the Amazon Web Services - AWS API Extractor, which is used to discover and import information that is useful for capacity planning into BMC TrueSight Capacity Optimization.


Integration steps

To integrate BMC TrueSight Capacity Optimization with Amazon Web Services using the Amazon Web Services - AWS API Extractor, complete the following steps:

  1. Navigate to Administration >  ETL & SYSTEM TASKS > ETL tasks.
  2. In the ETL tasks page, click Add > Add ETL under the Last run tab.

  3. In the Add ETL page, set values for the following properties under each expandable tab.

    Note

    Basic properties are displayed by default in the Add ETL page. These are the most common properties that you can set for an ETL, and you can retain the default selections for each of them.

    Basic properties

    Property Description
    Run configuration
    ETL task name By default, this field is populated based on the selected ETL module. You can specify a different name for the ETL task. Duplicate names are allowed.
    Run configuration name A default name is prefilled for you. Use this field to differentiate between the run configurations that you specify for the ETL task; you can then run the ETL task with a specific configuration.
    Environment Select Production or Test to mark the ETL task. For example, you can start by marking the task as Test and change it to Production after you have verified that it returns the data you expect.
    Description (Optional) Enter a brief description for this ETL.
    Log level Select how detailed you want the ETL log to be. The log includes Error, Warning, and Info messages.
    • 1 - Light: Add bare minimum activity logs to the log file.
    • 5 - Medium: Add medium-detailed activity logs to the log file.
    • 10 - Verbose: Add detailed activity logs to the log file.

    Note: Log levels 5 and 10 are typically used for debugging or troubleshooting ETL issues. Using log level 5 is general practice; however, you can choose level 10 for the highest level of detail while troubleshooting.

    Execute in simulation mode Select Yes to validate connectivity between the ETL engine and the target, and to ensure that the ETL does not have any other configuration issues.
    When set to Yes, the ETL does not store actual data in the data warehouse. This option is useful while testing a new ETL task.
    ETL module Select Amazon Web Services - AWS API Extractor.
    Module description A link that points to this technical documentation for the ETL module.
    Entity catalog
    Sharing status Select any one:
    • Shared entity catalog: Select this option if more than one ETL extracts data from the given set of resources (for example: CMDB).
      • Sharing with Entity Catalog: Select an entity catalog from the drop-down list.
    • Private entity catalog: Select this option if this is the only ETL that extracts data from the given set of resources.
    Object relationships
    Associate new entities to

    Specify the domain where you want to add the entities created by the ETL. You can select an existing domain or create a new one.

    By default, a new domain with the same name as the extractor module is created for each ETL. When the ETL is created, a new hierarchy rule with the same name as the ETL task is automatically created with the status "active". If you later update the ETL to use a different domain, the hierarchy rule is updated automatically.

    Select any one of the following options:

    • New domain: Create a new domain. Specify the following properties:
      • Parent: Select a parent domain for your new domain from the domain selector control.
      • Name: Specify a name for your new domain.
    • Existing domain: Select an existing domain. Make a selection for the following property:
      • Domain: Select an existing domain from the domain selector control.
      • Domain conflict: If the selected domain is already used by other hierarchy rules, you will be asked to select one of the following options to resolve the conflict:
        • Enrich domain tree: Create a new, independent hierarchy rule to add a new set of entities and/or relations that are not defined by other ETLs. For example, this ETL is managing storage, while others are managing servers.
        • ETL Migration: BMC recommends this configuration if the new ETL manages the same set of entities and/or relations that are already defined in the current domain tree. A typical use case is migrating from one or more existing ETLs to a new ETL instance. This option stops all relations imported by the previous ETL instances and restores only the valid relations after the first run. It reuses existing hierarchy rules to correctly manage relation updates.
    Amazon Web Services Connection
    Access Keys — Access keys consist of an Access Key ID and a Secret Access Key. They are used to sign programmatic requests made to AWS, whether you use an AWS SDK or the REST or Query APIs. The AWS SDKs use access keys to sign requests on your behalf so that you do not have to handle the signing process yourself. If an AWS SDK is not available, you can sign requests manually. (A minimal connectivity check using these keys is sketched after this properties table.)
    Access Key ID Type the Access Key ID for the requests made to AWS. For example, a typical Access Key ID might look like this: AMAZONACSKEYID007EXAMPLE.
    Secret Access Key Type the Secret Access Key associated with the Access Key ID you entered above. For example, a typical Secret Access Key might look like this: wSecRetAcsKeYY712/K9POTUS/BCZthIZIzprvtEXAMPLEKEY.
    ETL task properties
    Task group Select a task group to classify this ETL into. Grouping the ETL into a task group is optional.
    Running on scheduler Select the scheduler over which you want to run the ETL. The type of schedulers available are:
    • Primary Scheduler: Runs on the AS
    • Generic Scheduler: Runs on a separate machine
    • Remote: Runs on different remote machines.
    Maximum execution time before warning The number of hours, minutes, or days for which the ETL can run before warnings or alerts, if any, are generated.
    Frequency Select the frequency for ETL execution. Available options are:
    • Predefined: Select a Predefined frequency from Each Day, Each Week, or Each Month.
    • Custom: Enter a custom frequency (time interval) as the number of minutes, hours, days, or weeks at which to run the ETL. This selection adds the Custom start timestamp property to the ETL configuration (see below).
    Start timestamp: hour\minute (Applies to Predefined frequency) The HH:MM start timestamp to add to the ETL execution running on a Predefined frequency.
    Custom start timestamp Select a yyyy-mm-dd hh:mm timestamp to add to the ETL execution running on a Custom frequency. (Applicable if you selected Custom under Frequency).
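
    A quick way to check that the Access Key ID and Secret Access Key you plan to enter are valid and can reach the AWS APIs is to run a small script from any machine with the AWS SDK for Python (boto3) installed. The sketch below is illustrative only and is not part of the ETL; the region name and the describe_regions call are simply a convenient low-impact request, and the key values are placeholders.

    Verifying AWS credentials (illustrative)
    # Minimal connectivity check for the AWS access keys used by the ETL.
    # Assumes boto3 is installed; the credential values below are placeholders.
    import boto3
    from botocore.exceptions import ClientError

    ACCESS_KEY_ID = "AMAZONACSKEYID007EXAMPLE"        # replace with your Access Key ID
    SECRET_ACCESS_KEY = "wSecRetAcsKeY...EXAMPLEKEY"  # replace with your Secret Access Key

    def check_aws_connectivity(region_name="us-east-1"):
        """Return True if the credentials can successfully call a read-only EC2 API."""
        ec2 = boto3.client(
            "ec2",
            aws_access_key_id=ACCESS_KEY_ID,
            aws_secret_access_key=SECRET_ACCESS_KEY,
            region_name=region_name,
        )
        try:
            regions = ec2.describe_regions()["Regions"]
            print("Credentials OK; %d regions visible." % len(regions))
            return True
        except ClientError as error:
            print("AWS rejected the request: %s" % error)
            return False

    if __name__ == "__main__":
        check_aws_connectivity()

    A successful call only confirms that the keys are active; the account will also need read access to the EC2 and CloudWatch APIs that the extractor queries.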

    Note

    To view or configure Advanced properties, click Advanced. You do not need to set or modify these properties unless you want to change the way the ETL works. These properties are for advanced users and scenarios only.

    Advanced properties

    Property Description
    Run configuration
    Datasets

    Enables you to select or deselect the metric groups (datasets) for which data is populated, from the Available datasets list. The Amazon Web Services - AWS API Extractor allows you to select only from the given list of datasets; you cannot add datasets to the run configuration of the ETL.

    1. Click Edit.
    2. Select one (click) or more (Shift+click) datasets that you want to move out of the Available datasets list and click >> to move them to the Selected datasets list.
    3. Click Apply.
    Collection level
    Metric profile selection

    Select any one:

    • Use Global metric profile: Select this option to use the out-of-the-box global profile that is available on the Metric profiles page. By default, all ETL modules use this profile.
    • Select a custom metric profile: Select one of the metric profiles that you added in the Add metric profile page (Administration > DATAWAREHOUSE > Metric profiles).

    For more information, see Metric profiles.

    Levels up to

    The metric level defines the number of metrics imported into the data warehouse. Increasing the level adds load to the data warehouse, while decreasing it reduces the number of imported metrics.

    Choose the metric level to apply on selected metrics:

    • [1] Essential
    • [2] Basic
    • [3] Standard
    • [4] Extended

    For more information, see Aging Class mapping.

    Amazon Web Services Connection
    Instance type definition JSON file path The path where you saved the JSON file that contains the instance type configuration metrics. For more information, see Collecting data for additional instance type configuration metrics.
    Additional CloudWatch metrics JSON file path The path where you saved the JSON file that contains the additional metrics. For more information, see Collecting data for additional CloudWatch metrics.
    Additional properties
    List of properties

    Additional properties can be specified for this ETL that act as user inputs during execution. You can specify values for these properties either at this time, or from the "You can manually edit ETL properties from this page" link that is displayed for the ETL in view mode.

    1. Click Add.
    2. Add an additional property in the etl.additional.prop.n box.
    3. Click Apply.
      Repeat this task to add more properties.
    Loader configuration
    Empty dataset behavior Choose one of the following actions if the loader encounters an empty dataset:
    • Warn: Warn about loading an empty dataset.
    • Ignore: Ignore the empty dataset and continue parsing.
    ETL log file name Name of the file that contains the ETL execution log; the default value is: %BASE/log/%AYEAR%AMONTH%ADAY%AHOUR%MINUTE%TASKID
    Maximum number of rows for CSV output A number which limits the size of the output files.
    CSV loader output file name Name of the file generated by the CSV loader; the default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID
    Capacity Optimization loader output file name Name of the file generated by the BMC Capacity Optimization loader; the default value is: %BASE/output/%DSNAME%AYEAR%AMONTH%ADAY%AHOUR%ZPROG%DSID%TASKID
    Detail mode Select the level of detail:
    • Standard: Data will be stored on the database in different tables at the following time granularities: Detail (configurable, by default: 5 minutes), Hourly, Daily, Monthly.
    • Raw also: Data will be stored on the database in different tables at the following time granularities: Raw (as available from the original data source), Detail (configurable, by default: 5 minutes), Hourly, Daily, Monthly.
    • Raw only: Data will be stored on the database in a table only at Raw granularity (as available from the original data source).

    For more information, see Accessing data using public views and Sizing and scalability considerations.

    Reduce priority

    Select either Normal or High.

    Remove domain suffix from datasource name (Only for systems)  If set to True, the domain name is removed from the data source name. For example, server.domain.com will be saved as server.
    Leave domain suffix to system name (Only for systems) If set to True, the domain name is retained in the system name. For example, server.domain.com is saved as such.
    Update grouping object definition (Only for systems) If set to True, the ETL will be allowed to update the grouping object definition for a metric loaded by an ETL.
    Skip entity creation (Only for ETL tasks sharing lookup with other tasks) If set to True, this ETL does not create an entity, and discards data from its data source for entities not found in Capacity Optimization. It uses one of the other ETLs that share lookup to create the new entity.
    Scheduling options
    Hour mask Specify a value to execute the task only during particular hours within the day. For example, 0 – 23 or 1,3,5 – 12.
    Day of week mask Select the days so that the task can be executed only during the selected days of the week. To avoid setting this filter, do not select any option for this field.
    Day of month mask Specify a value to execute the task only during particular days within a month. For example, 5, 9, 18, 27 – 31.
    Apply mask validation By default this property is set to True. Set it to False if you want to disable the preceding Scheduling options that you specified. Setting it to False is useful if you want to temporarily turn off the mask validation without removing any values.
    Execute after time Specify a time in the hours:minutes format (for example, 05:00 or 16:00). Once the task is scheduled, its execution starts only after the specified time has passed.
    Enqueueable Select one of the following options:
    • False (Default): If the task is already running when the next execution command arrives, the command is ignored.
    • True: If the task is already running when the next execution command arrives, the command is placed in a queue and is executed as soon as the current execution ends.
  4. Click Save.
    You return to the Last run tab under the ETL tasks page.
  5. Validate the results in simulation mode: In the ETL tasks table under ETL tasks > Last run, locate your ETL (ETL task name) and click the run icon to run the ETL.
    After you run the ETL, the Last exit column in the ETL tasks table will display one of the following values:
    • OK: The ETL executed without any error in simulation mode.
    • WARNING: The ETL execution returned some warnings in simulation mode. Check the ETL log.
    • ERROR: The ETL execution returned errors and was unsuccessful. Edit the active Run configuration and try again.
  6. Switch the ETL to production mode: To do this, perform the following task:
    1. In the ETL tasks table under ETL tasks > Last run, click the ETL under the Name column.
    2. In the Run configurations table in the ETL details page, click the edit icon to edit the active run configuration.
    3. In the Edit run configuration page, navigate to the Run configuration expandable tab and set Execute in simulation mode to No.
    4. Click Save.
  7. Locate the ETL in the ETL tasks table and click the run icon to run it, or schedule an ETL run.
    After you run the ETL, or schedule the ETL for a run, it extracts the data from the source and transfers it to the BMC Capacity Optimization database.

Metric mapping

Time resolution = 3600 seconds.

Volume metrics

BMC TrueSight Capacity Optimization metric | Amazon Web Services metric | Formula
ST_VOLUME_TRANSFER_READ_BYTE_RATE | VolumeReadBytes | Sum[VolumeReadBytes]/duration
ST_VOLUME_TRANSFER_WRITE_BYTE_RATE | VolumeWriteBytes | Sum[VolumeWriteBytes]/duration
ST_VOLUME_TRANSFER_BYTE_RATE | Derived | (Sum[VolumeReadBytes] + Sum[VolumeWriteBytes])/duration
ST_VOLUME_IO_READ_RATE | VolumeReadOps | Sum[VolumeReadOps]/duration
ST_VOLUME_IO_WRITE_RATE | VolumeWriteOps | Sum[VolumeWriteOps]/duration
ST_VOLUME_IO_RATE | Derived | (Sum[VolumeReadOps] + Sum[VolumeWriteOps])/duration
ST_VOLUME_READ_TIME | VolumeTotalReadTime | Average[VolumeTotalReadTime]
ST_VOLUME_WRITE_TIME | VolumeTotalWriteTime | Average[VolumeTotalWriteTime]
ST_VOLUME_QUEUE_LENGTH | VolumeQueueLength | Average[VolumeQueueLength]
ST_VOLUME_IDLE_PCT | VolumeIdleTime | Sum[VolumeIdleTime]/duration
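
The Sum[...] and Average[...] terms above are CloudWatch statistics, and duration corresponds to the 3600-second time resolution noted at the start of this section. Purely as an illustration (the extractor performs this collection internally), the sketch below shows how one such meter, VolumeReadBytes for a single EBS volume, could be retrieved with the AWS SDK for Python and turned into a byte rate; the region, volume ID, and time window are placeholders.

Fetching a CloudWatch meter (illustrative)
# Illustrative only: retrieve the hourly Sum of VolumeReadBytes for one EBS volume,
# matching the 3600-second time resolution used by the metric mapping above.
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

end = datetime.utcnow()
start = end - timedelta(hours=6)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeReadBytes",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder volume
    StartTime=start,
    EndTime=end,
    Period=3600,               # one datapoint per hour
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    # ST_VOLUME_TRANSFER_READ_BYTE_RATE = Sum[VolumeReadBytes]/duration
    rate = point["Sum"] / 3600.0
    print(point["Timestamp"], "read byte rate (bytes/s):", rate)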

Instance and Autoscale metrics

BMC TrueSight Capacity Optimization metric | Amazon Web Services metric | Formula
CPU_UTIL | CPUUtilization | Average[CPUUtilization]/100
NET_IN_BYTE_RATE | NetworkIn | Sum[NetworkIn]/duration
NET_OUT_BYTE_RATE | NetworkOut | Sum[NetworkOut]/duration
NET_BYTE_RATE | Derived | (Sum[NetworkOut] + Sum[NetworkIn])/duration
DISK_READ_RATE | DiskReadBytes | Sum[DiskReadBytes]/duration
DISK_WRITE_RATE | DiskWriteBytes | Sum[DiskWriteBytes]/duration
DISK_TRANSFER_RATE | Derived | (Sum[DiskReadBytes] + Sum[DiskWriteBytes])/duration
DISK_IO_READ_RATE | DiskReadOps | Sum[DiskReadOps]/duration
DISK_IO_WRITE_RATE | DiskWriteOps | Sum[DiskWriteOps]/duration
DISK_IO_RATE | Derived | (Sum[DiskReadOps] + Sum[DiskWriteOps])/duration
GM_ON_NUM (Only for Autoscale) | GroupInServiceInstances | Maximum[GroupInServiceInstances]
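
To make the formulas concrete, the sketch below applies a few of them to the statistics for a single interval; it is illustrative only (not the extractor's code) and assumes duration equals the 3600-second time resolution.

Applying the metric formulas (illustrative)
# Illustrative mapping of hourly CloudWatch statistics to BMC TrueSight Capacity
# Optimization metrics, following the formulas in the table above.
DURATION = 3600.0  # time resolution in seconds

def instance_metrics(cpu_avg, net_in_sum, net_out_sum):
    """cpu_avg is Average[CPUUtilization] in percent; the sums are hourly byte totals."""
    return {
        "CPU_UTIL": cpu_avg / 100.0,                             # Average[CPUUtilization]/100
        "NET_IN_BYTE_RATE": net_in_sum / DURATION,               # Sum[NetworkIn]/duration
        "NET_OUT_BYTE_RATE": net_out_sum / DURATION,             # Sum[NetworkOut]/duration
        "NET_BYTE_RATE": (net_out_sum + net_in_sum) / DURATION,  # derived
    }

# Example: 12% average CPU, 1.8 GB received and 0.9 GB sent in the hour.
print(instance_metrics(cpu_avg=12.0, net_in_sum=1.8e9, net_out_sum=0.9e9))
# {'CPU_UTIL': 0.12, 'NET_IN_BYTE_RATE': 500000.0, 'NET_OUT_BYTE_RATE': 250000.0, 'NET_BYTE_RATE': 750000.0}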

Metrics collected for additional Amazon Web Services monitoring scripts

These metrics are available only when the monitoring scripts for Amazon EC2 instances are executed. The mapping to BMC TrueSight Capacity Optimization metrics works only if the unit of measure for the value is not modified. Refer to the table below for details.

BMC TrueSight Capacity Optimization metric | Amazon Web Services metric | Unit of measure | Formula | System
MEM_REAL_UTIL | MemoryUtilization | Percentage | (MemoryUtilization)/100 | Windows / Linux
MEM_REAL_USED | MemoryUsed | Megabyte | MemoryUsed * 1024 * 1024 | Windows / Linux
SWAP_SPACE_UTIL | SwapUtilization | Percentage | (SwapUtilization)/100 | Linux
SWAP_SPACE_USED | SwapUsed | Megabyte | SwapUsed * 1024 * 1024 | Linux
BYFS_USED_SPACE_PCT | DiskSpaceUtilization | Percentage | (DiskSpaceUtilization)/100 | Linux
BYFS_USED | DiskSpaceUsed | Gigabyte | DiskSpaceUsed * 1024 * 1024 * 1024 | Linux
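
The formulas in this table normalize the monitoring-script values from megabytes, gigabytes, and percent into the bytes and 0-1 fractions stored by BMC TrueSight Capacity Optimization. A minimal sketch of those conversions, with hypothetical function names, is shown below.

Normalizing monitoring-script values (illustrative)
# Illustrative unit conversions matching the table above; function names are hypothetical.
def mem_real_used(memory_used_mb):
    # MemoryUsed is reported in megabytes; store bytes.
    return memory_used_mb * 1024 * 1024

def byfs_used(disk_space_used_gb):
    # DiskSpaceUsed is reported in gigabytes; store bytes.
    return disk_space_used_gb * 1024 * 1024 * 1024

def mem_real_util(memory_utilization_pct):
    # MemoryUtilization is a percentage; store a 0-1 fraction.
    return memory_utilization_pct / 100.0

print(mem_real_used(2048))   # 2147483648 bytes
print(byfs_used(10))         # 10737418240 bytes
print(mem_real_util(37.5))   # 0.375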

Collecting data for additional instance type configuration metrics

An out-of-the-box JSON file contains the mapping for configuration metrics collected by the Amazon Web Services - AWS API Extractor. This file is stored on the ETL Engine server. This ETL provides a feature that allows you to configure the collection of data for additional instance type configuration metrics or modify the existing collection configuration.

Follow the procedure given below to upload a JSON file for collecting additional instance type configuration metrics:

  1. Download the attached file.
  2. Update the file for existing and additional metrics.
  3. Use the version parameter to add the current date (in yyyymmdd format) to the JSON file.

    Info

    This parameter is used to determine the most recently updated file, which is then used by the AWS API Extractor.

  4. Save the JSON file to your local machine. For example, aws-metric-conf.json.
  5. Upload the file to a folder on the ETL engine server.
  6. Click Advanced and navigate to the Amazon Web Services Connection tab.
  7. Type the file path in Instance type definition JSON file path.
  8. Click Save.

Tip

  • If you need to collect additional metrics (in addition to the existing configuration), navigate to the path specified in Step 7 and download the latest JSON file. You must update this file, and upload the consolidated version to the same location. Update the version parameter with the current date.
  • Use the JSON format code block below to specify additional instance type configuration metrics for which you want to collect additional configuration information.


Collecting configuration information
{
	"version": "20150130",
	"instanceTypeConfiguration" :{
		"m3.medium" :{
			"CPU_NUM"		: "1",
			"TOTAL_REAL_MEM": "4026531840",
			"CPU_MHZ"		: "2500",
			"CPU_MODEL"		: "Intel Xeon E5-2670 v2",
			"DISK_SIZE" 	: "4000000000"
		},   
		"AWS_INSTANCE_TYPE_NAME" :{                  
			"CO_METRIC_NAME_1" : "VALUE",
			"CO_METRIC_NAME_2" :{
				"CO_METRIC_SUBRESOURCE_2" : "VALUE"
			}
		}
	}
}
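
The Info note above states that the version parameter (a yyyymmdd date) is used to determine the most recently updated file. As a small illustration of that idea, and nothing more, the snippet below compares the version fields of candidate JSON files and reports the newest; the file names are placeholders.

Comparing version fields (illustrative)
# Illustrative only: pick the instance type definition file with the newest
# "version" value (a yyyymmdd date string).
import json

def read_version(path):
    with open(path) as handle:
        return json.load(handle).get("version", "00000000")

candidates = ["aws-metric-conf.json", "aws-metric-conf-updated.json"]  # placeholder names
newest = max(candidates, key=read_version)
print("Newest definition file:", newest, "version", read_version(newest))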

Collecting data for additional CloudWatch metrics

An out-of-the-box JSON file contains the mapping for CloudWatch metrics collected by the Amazon Web Services - AWS API Extractor. This file is stored on the ETL Engine server. This ETL provides a feature that allows you to configure the collection of data for additional CloudWatch metrics.

To collect data for additional CloudWatch metrics, you must create a performance JSON file. Follow the procedure given below to create and upload a JSON file for collecting additional CloudWatch metrics:

  1. Create and save a JSON file using the sample code provided below. For example, aws-metric-perf.json.
  2. Upload the file to a folder on the ETL engine server.
  3. Click Advanced and navigate to the Amazon Web Services Connection tab.
  4. Type the file path in Additional CloudWatch metrics JSON file path.
  5. Click Save.

Tip

Use the JSON code below to specify additional Amazon CloudWatch metrics for which you want to collect performance information.

Collecting performance information
{
    "systemTypeMap" :
    //Amazon Web Services to BMC TrueSight Capacity Optimization
    //entity type mapping
    {
        "gm:aws"     : "EC2 Instance",
        "asg:aws"    : "Auto Scaling group",
        "volume:aws" : "EBS Volume"
    },
    "typeConfiguration": [
        {
            "entypenm" : "gm:aws|asg:aws",
            //Metrics for "Instance" or "Autoscale group"
            "mapping"  : [
            //Mapping definition
                {
                    "targetMetric" : "NET_IN_BYTE_RATE",
                    //BMC TrueSight Capacity Optimization metric
                    "formula"      : "(NetworkIn)/period",
                    //Formula to compute the above metric using the CloudWatch metric
                    "sourceMeters" : [{
                    //Method used for collecting the value from CloudWatch
                        "awsMetric" : "NetworkIn",
                        //Amazon Web Services CloudWatch metric
                        "statistic" : "Sum"
                        //This can be Average, Sum, SampleCount, Minimum, or Maximum
                    }]
                },
                {
                //Additional metric mappings
                }
            ]
        },
        {
            "entypenm" : "volume:aws",
            //Metrics for "EBS Volume"
            "mapping"  : [
                {
                    "targetMetric" : "ST_VOLUME_TRANSFER_READ_BYTE_RATE",
                    "formula"      : "(VolumeReadBytes)/duration",
                    "sourceMeters" : [{
                        "awsMetric" : "VolumeReadBytes",
                        "statistic" : "Sum"
                    }]
                },
                {
                    "targetMetric" : "BYFS_USED_SPACE_PCT",
                    "formula"      : "(DiskSpaceUtilization)/100",
                    "sourceMeters" : [{
                        "awsMetric"            : "DiskSpaceUtilization",
                        "statistic"            : "Average",
                        "dimesionForSubObject" : "Filesystem",
                        "namespace"            : "System/Linux"
                    }]
                },
                {
                //Additional metric mappings
                }
            ]
        }
    ]
}
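
Before pointing the ETL at a new performance JSON file, it can help to sanity-check its structure. The sketch below is illustrative only: it verifies that each mapping entry carries the keys shown in the sample above and that each statistic is one of the listed values (Average, Sum, SampleCount, Minimum, Maximum). The sample includes // comments for explanation; a strict JSON parser such as Python's json module does not accept them, so remove them before validating a real file this way.

Checking a performance JSON file (illustrative)
# Illustrative structural check for an additional CloudWatch metrics file.
import json

ALLOWED_STATISTICS = {"Average", "Sum", "SampleCount", "Minimum", "Maximum"}

def check_performance_file(path):
    with open(path) as handle:
        data = json.load(handle)
    problems = []
    for type_conf in data.get("typeConfiguration", []):
        for mapping in type_conf.get("mapping", []):
            if not mapping:
                continue  # empty placeholder entry
            for key in ("targetMetric", "formula", "sourceMeters"):
                if key not in mapping:
                    problems.append("missing %s in %s" % (key, mapping))
            for meter in mapping.get("sourceMeters", []):
                if meter.get("statistic") not in ALLOWED_STATISTICS:
                    problems.append("unexpected statistic in %s" % meter)
    return problems

print(check_performance_file("aws-metric-perf.json") or "No structural problems found.")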

Entity relationship

ID | Name | Parent | Child | Description
80 | CL_CONTAINS_GM | cl:aws | gm:aws | -
81 | CL_CONTAINS_ASG | cl:aws | asg:aws | -
82 | GM_USES_VOLUME | gm:aws | volume:aws | -
83 | CL_CONTAINS_VOLUME | cl:aws | volume:aws | The ETL imports a volume when one of these conditions is true: (1) the volume is standalone; (2) the volume is not flagged to be destroyed when its instance is deleted; (3) the volume is attached to an instance that is not part of an Auto Scaling group; (4) the volume is attached to an instance that is part of a static Auto Scaling group. (These conditions are restated as a sketch after this table.)
84 | ASG_CONTAINS_GM | asg:aws | gm:aws | Only for an Auto Scaling group with a manual (not dynamic) scaling policy
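
The four CL_CONTAINS_VOLUME conditions in row 83 amount to a simple decision rule. The sketch below restates them as code for clarity only; the attribute names (attached_instance, delete_on_termination, autoscaling_group, is_static) are hypothetical and do not correspond to actual extractor fields.

Volume import conditions (illustrative)
# Illustrative restatement of the CL_CONTAINS_VOLUME import conditions.
# All attribute names are hypothetical, for explanation only.
def should_import_volume(volume):
    if volume.attached_instance is None:
        return True   # 1. the volume is standalone
    if not volume.delete_on_termination:
        return True   # 2. not flagged to be destroyed when its instance is deleted
    instance = volume.attached_instance
    if instance.autoscaling_group is None:
        return True   # 3. attached to an instance outside any Auto Scaling group
    if instance.autoscaling_group.is_static:
        return True   # 4. attached to an instance in a static Auto Scaling group
    return False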

Lookup information

Entity type | Strong lookup fields | Weak lookup fields
volume:aws | AWS_ID | -
gm:aws | AWS_ID | -
cl:aws | AWS_NAME | -
asg:aws | AWS_ARN | -