BMC TrueSight Capacity Optimization 10.3 supports integration with Amazon Web Services (AWS) through the Amazon Web Services - AWS API Extractor, which discovers and imports information that is useful for capacity planning into BMC TrueSight Capacity Optimization.
For more information, see:
To integrate BMC TrueSight Capacity Optimization with the Amazon Web Services - AWS API Extractor, perform the following task:
In the ETL tasks page, click Add > Add ETL under the Last run tab.
In the Add ETL page, set values for the following properties under each expandable tab.
Basic properties are displayed by default in the Add ETL page. These are the most common properties that you can set for an ETL, and you can leave the default selections for each as is.
|ETL task name||By default, this field is populated based on the selected ETL module. You can specify a different name for the ETL Task. Duplicate names are allowed.|
|Run configuration name||The default name is filled in for you. This field differentiates the configurations that you can specify for the ETL task. You can then run the ETL task based on a specific configuration.|
|Environment||Select Production or Test to mark the ETL task. For example, you can start by marking the task as Test and change it to Production after verifying that it returns the results you expect.|
|Description||(Optional) Enter a brief description for this ETL.|
|Log level||Select how detailed you want the ETL log to be. The log includes Error, Warning, and Info messages.
Note: Log levels 5 and 10 are typically used for debugging or troubleshooting ETL issues. Using log level 5 is general practice; however, you can choose level 10 to get a higher level of detail while troubleshooting.
|Execute in simulation mode||Select Yes if you want to validate the connectivity between the ETL engine and the target, and to ensure that the ETL does not have any other configuration issues.
When set to Yes, the ETL does not store actual data in the data warehouse. This option is useful while testing a new ETL task.
|ETL module||Select Amazon Web Services - AWS API Extractor.|
|Module description||A link in the user interface that points you to this technical document for the ETL.|
|Sharing status||Select any one:
|Associate new entities to||
Specify the domain where you want to add the entities created by the ETL. You can select an existing domain or create a new one.
By default, a new domain with the same name as the extractor module is created for each ETL. When the ETL is created, a new hierarchy rule with the same name as the ETL task is automatically created with the status "active". If you update the ETL and specify a different domain, the hierarchy rule is updated automatically.
Select any one of the following options:
Amazon Web Services Connection
|Access Keys — Access keys consist of an Access Key ID and a Secret Access Key. They are used to sign programmatic requests made to AWS, whether you use the AWS SDK, REST, or Query APIs. The AWS SDKs use access keys to sign requests on your behalf so that you do not have to handle the signing process yourself. If an AWS SDK is not available, you can sign requests manually.|
|Access Key ID||Type the Access Key ID to use for requests made to AWS.|
|Secret Access Key||Type the Secret Access Key associated with the Access Key ID that you entered above.|
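As background on what "signing requests" means, the sketch below shows the AWS Signature Version 4 signing-key derivation step that the SDKs perform on your behalf with the Secret Access Key. The credential and date values are placeholders, not real keys, and this is illustrative only.

```python
# Illustrative only: AWS Signature Version 4 signing-key derivation, the step
# the AWS SDKs perform on your behalf when signing requests with your
# Secret Access Key. Values below are placeholders, not real credentials.
import hashlib
import hmac


def sigv4_signing_key(secret_key: str, date_stamp: str,
                      region: str, service: str) -> bytes:
    """Derive the SigV4 signing key from a Secret Access Key."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date_stamp.encode(),
                      hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()


# "monitoring" is the service name used when signing CloudWatch requests.
key = sigv4_signing_key("EXAMPLE_SECRET_KEY", "20230101", "us-east-1", "monitoring")
```

In practice you only supply the Access Key ID and Secret Access Key in the connection properties; the extractor and SDK handle the derivation and signing.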
ETL task properties
|Task group||Select a task group to classify this ETL into. Classifying the ETL into a task group is optional.|
|Running on scheduler||Select the scheduler on which you want to run the ETL. The available scheduler types are:
|Maximum execution time before warning||The number of hours, minutes, or days for which the ETL can run before warnings or alerts, if any, are generated.|
|Frequency||Select the frequency for ETL execution. Available options are:
|Start timestamp: hour\minute (Applies to Predefined frequency)||The HH:MM start timestamp to add to the ETL execution running on a Predefined frequency.|
|Custom start timestamp||Select a yyyy-mm-dd hh:mm timestamp to add to the ETL execution running on a Custom frequency. (Applicable if you selected Custom under Frequency).|
To view or configure Advanced properties, click Advanced. You do not need to set or modify these properties unless you want to change the way the ETL works. These properties are for advanced users and scenarios only.
|Available datasets||Enables you to select or deselect the metric groups for which data is populated. The Amazon Web Services - AWS API Extractor allows you to select only from the given list of datasets; you cannot add datasets to the run configuration of the ETL.|
|Metric profile selection||
Select any one:
For more information, see Metric profiles.
|Levels up to||
The metric level defines the number of metrics imported into the data warehouse. Increasing the level adds load to the data warehouse, while decreasing it reduces the number of imported metrics.
Choose the metric level to apply on selected metrics:
For more information, see Aging Class mapping.
Amazon Web Services Connection
|Instance type definition JSON file path||The path where you saved the JSON file that contains the instance type configuration metrics. For more information, see Collecting data for additional instance type configuration metrics.|
|Additional CloudWatch metrics JSON file path||The path where you saved the JSON files that contain the additional metrics. For more information, see Collecting data for additional CloudWatch metrics.|
|List of properties||
Additional properties can be specified for this ETL that act as user inputs during execution. You can specify values for these properties either now, or later from the "You can manually edit ETL properties from this page" link that is displayed for the ETL in view mode.
|Empty dataset behavior||Choose one of the following actions if the loader encounters an empty dataset:
|ETL log file name||Name of the file that contains the ETL execution log; the default value is:
|Maximum number of rows for CSV output||A number which limits the size of the output files.|
|CSV loader output file name||Name of the file generated by the CSV loader; the default value is:
|Capacity Optimization loader output file name||Name of the file generated by the BMC Capacity Optimization loader; the default value is:
|Detail mode||Select the level of detail: Normal or High.
|Remove domain suffix from datasource name (Only for systems)||If set to True, the domain name is removed from the data source name.|
|Leave domain suffix to system name (Only for systems)||If set to True, the domain name is retained in the system name.|
|Update grouping object definition (Only for systems)||If set to True, the ETL will be allowed to update the grouping object definition for a metric loaded by an ETL.|
|Skip entity creation (Only for ETL tasks sharing lookup with other tasks)||If set to True, this ETL does not create an entity, and discards data from its data source for entities not found in Capacity Optimization. It uses one of the other ETLs that share lookup to create the new entity.|
|Hour mask||Specify a value to execute the task only during particular hours within the day. For example, 0 – 23 or 1,3,5 – 12.|
|Day of week mask||Select the days so that the task can be executed only during the selected days of the week. To avoid setting this filter, do not select any option for this field.|
|Day of month mask||Specify a value to execute the task only during particular days within a month. For example, 5, 9, 18, 27 – 31.|
|Apply mask validation||By default this property is set to True. Set it to False if you want to disable the preceding Scheduling options that you specified. Setting it to False is useful if you want to temporarily turn off the mask validation without removing any values.|
|Execute after time||Specify a value in the hours:minutes format (for example, 05:00 or 16:00) to wait before the task must be executed. This means that once the task is scheduled, the task execution starts only after the specified time passes.|
|Enqueueable||Select one of the following options:
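As a rough illustration of the mask syntax shown above (for example, an Hour mask of 1,3,5 – 12 or a Day of month mask of 5, 9, 18, 27 – 31), the sketch below expands such an expression into the set of values it allows. The product's actual parser is not documented here; this only illustrates the example syntax.

```python
# Hypothetical sketch: expand a mask expression such as "1,3,5 - 12" into the
# set of hours/days it allows. The product's real parser is not shown in this
# document; this only illustrates the syntax of the examples above.
def expand_mask(mask: str) -> set:
    values = set()
    for part in mask.replace("\u2013", "-").split(","):  # normalize en dashes
        part = part.strip()
        if "-" in part:
            lo, hi = (int(p) for p in part.split("-"))
            values.update(range(lo, hi + 1))
        else:
            values.add(int(part))
    return values


print(sorted(expand_mask("1,3,5 - 12")))  # [1, 3, 5, 6, 7, 8, 9, 10, 11, 12]
```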
Time resolution = 3600 seconds.
|BMC TrueSight Capacity Optimization metric||Amazon Web Services metric||Formula|
(Sum[VolumeReadBytes] + Sum[VolumeWriteBytes])/duration
(Sum[VolumeReadOps] + Sum[VolumeWriteOps])/duration
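The two formulas above divide per-interval CloudWatch Sum statistics by the interval duration to obtain per-second rates. A minimal sketch, assuming the 3600-second time resolution noted above:

```python
# Sketch of the derived-metric formulas above: CloudWatch "Sum" statistics for
# an interval are divided by the interval duration to get per-second rates.
# Assumes the 3600-second time resolution stated in this document.
PERIOD_SECONDS = 3600


def volume_byte_rate(sum_read_bytes, sum_write_bytes, duration=PERIOD_SECONDS):
    """(Sum[VolumeReadBytes] + Sum[VolumeWriteBytes]) / duration -> bytes/second."""
    return (sum_read_bytes + sum_write_bytes) / duration


def volume_ops_rate(sum_read_ops, sum_write_ops, duration=PERIOD_SECONDS):
    """(Sum[VolumeReadOps] + Sum[VolumeWriteOps]) / duration -> operations/second."""
    return (sum_read_ops + sum_write_ops) / duration


print(volume_byte_rate(1_800_000, 1_800_000))  # 1000.0 bytes/second over one hour
```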
|BMC TrueSight Capacity Optimization|
|Amazon Web Services metric||Formula|
These metrics will be available only when the monitoring scripts for Amazon EC2 instances are executed. The mapping to BMC TrueSight Capacity Optimization metrics will work only if the unit of measure for the value is not modified. Refer to the table below for details.
|BMC TrueSight Capacity Optimization metric||Amazon Web Services metric||Unit of measure||Formula||System|
|MEM_REAL_UTIL||MemoryUtilization||Percentage||(MemoryUtilization)/100||Windows / Linux|
|MEM_REAL_USED||MemoryUsed||Megabyte||MemoryUsed * 1024 * 1024||Windows / Linux|
|SWAP_SPACE_USED||SwapUsed||Megabyte||SwapUsed * 1024 * 1024||Linux|
|BYFS_USED||DiskSpaceUsed||Gigabyte||DiskSpaceUsed * 1024 * 1024 * 1024||Linux|
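The unit-of-measure column matters because the formulas rescale the script-reported values (utilization to a 0-1 fraction, sizes to bytes). A sketch of the conversions in the table above:

```python
# Sketch of the conversion formulas in the mapping table above. The monitoring
# scripts report percentages, megabytes, or gigabytes; the formulas rescale
# them before loading.
CONVERSIONS = {
    "MEM_REAL_UTIL":   lambda v: v / 100,                 # percentage -> fraction
    "MEM_REAL_USED":   lambda v: v * 1024 * 1024,         # megabytes -> bytes
    "SWAP_SPACE_USED": lambda v: v * 1024 * 1024,         # megabytes -> bytes
    "BYFS_USED":       lambda v: v * 1024 * 1024 * 1024,  # gigabytes -> bytes
}

print(CONVERSIONS["MEM_REAL_USED"](512))  # 536870912 (512 MB in bytes)
```

If the monitoring scripts are modified to report in different units, this mapping no longer holds, which is why the unit of measure must not be changed.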
An out-of-the-box JSON file contains the mapping for configuration metrics collected by the Amazon Web Services - AWS API Extractor. This file is stored on the ETL Engine server. This ETL provides a feature that allows you to configure the collection of data for additional instance type configuration metrics or modify the existing collection configuration.
Follow the procedure given below to upload a JSON file for collecting additional instance type configuration metrics:
Use the version parameter to add the current date (in yyyymmdd format) to the JSON file.
This parameter is used to determine the most recently updated file, which is then used by the AWS API Extractor.
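Assuming the version parameter is a top-level JSON key named version (the key name and overall file layout are assumptions; the actual schema is defined by the out-of-the-box file), stamping the current date could look like this sketch:

```python
# Hypothetical sketch: stamp the current date (yyyymmdd) into the "version"
# parameter of a metrics JSON file so the AWS API Extractor can identify the
# most recently updated file. The key name and file layout are assumptions.
import json
from datetime import date


def stamp_version(path):
    with open(path) as fh:
        data = json.load(fh)
    data["version"] = date.today().strftime("%Y%m%d")  # current date, yyyymmdd
    with open(path, "w") as fh:
        json.dump(data, fh, indent=2)
    return data["version"]
```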
An out-of-the-box JSON file contains the mapping for CloudWatch metrics collected by the Amazon Web Services - AWS API Extractor. This file is stored on the ETL Engine server. This ETL provides a feature that allows you to configure the collection of data for additional CloudWatch metrics.
To collect data for additional CloudWatch metrics, you must create a performance JSON file. Follow the procedure given below to create and upload a JSON file for collecting additional CloudWatch metrics:
Use the JSON code below to specify additional Amazon CloudWatch metrics for which you want to collect performance information.
The ETL imports volumes when one of these conditions is true:
Only for Autoscale Group with a manual scaling policy (not dynamic)
Strong lookup fields
Weak lookup fields