Moviri Integrator for BMC Helix Capacity Optimization - ElasticSearch

 

"Moviri Integrator for BMC Helix Capacity Optimization – ElasticSearch" enables the setup of a continuous flow of capacity-relevant metrics extracted from Elasticsearch into BMC Helix Capacity Optimization. 

The integration comprises two connectors, targeted at different data transfer scenarios:

  • Elasticsearch Generic: imports almost any kind of KPI, covering both business metrics and infrastructure utilization, that can be retrieved from Elasticsearch via a search query or an aggregation. 

  • Elasticsearch Unix and Windows: imports performance counters for Unix and Windows systems that Elasticsearch receives from Metricbeat, the Elastic Stack component responsible for shipping system performance data.   

Moviri Integrator for BMC Helix Capacity Optimization - Elasticsearch is compatible with BMC Helix Capacity Optimization 19.11 and onward.


Requirements

Supported versions of data source software

Elasticsearch 6.8 and above. 

Supported configurations of data source software

The "Moviri Integrator for BMC Helix Capacity Optimization – Elasticsearch" extractors use the REST API to access the Elasticsearch server, which is usually exposed on port 9200. Besides access to the Elasticsearch REST API, each connector has some additional requirements:

The "Moviri Integrator for BMC Helix Capacity Optimization – Elasticsearch (Unix and Windows)" connector requires:

  • The Unix, Linux, or Windows systems whose data the connector needs to extract must be monitored by Metricbeat (an Elastic Beat). Metricbeat must be installed and configured on the systems to be monitored, with its "System" module enabled. Check the Metricbeat documentation on how to configure it. 

  • Once Metricbeat is configured, its data is exposed in Elasticsearch as indexes named "metricbeat-*". The user configured on the ETL Engine needs access to the metricbeat-* indexes. 
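For reference, a minimal configuration of the Metricbeat "System" module looks like the excerpt below. This is an illustrative fragment only; the metricsets and collection period should follow your monitoring requirements and the official Metricbeat documentation.

```yaml
# modules.d/system.yml (excerpt) - enable with: metricbeat modules enable system
- module: system
  period: 10s
  metricsets:
    - cpu
    - memory
    - network
    - filesystem
    - uptime
```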

The "Moviri Integrator for BMC Helix Capacity Optimization – Elasticsearch (Generic)" connector requires:

  • The Generic module runs queries or aggregations against any index; therefore, read access to the queried indexes is necessary.

Installation

Downloading the additional package

ETL Modules are made available in the form of additional components, which you may download from the BMC electronic distribution site (EPD) or retrieve from your content media.

Installing the additional package

 To install the connector in the form of a BMC Helix Capacity Optimization additional package, refer to the Performing system maintenance tasks instructions.

Datasource Check and Configuration

All the connectors included in "Moviri Integrator for BMC Helix Capacity Optimization – Elasticsearch" use the Elasticsearch REST API to communicate with Elasticsearch. The API is always enabled and requires no additional configuration. The connector uses an Elasticsearch local user to access the REST API. 

The connector requires a user with a role that has privileges over the following entities:

  • Cluster: read_ccr

  • Indexes: read

For instance, in the Kibana role management page, create a role with the cluster read_ccr privilege and read privilege on the index kibana_sample_data_logs, then assign this role to the user. For more information on how to configure an access role and assign it to a user, or create a role mapping, check the Elasticsearch documentation.
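With Elasticsearch security enabled, an equivalent role can also be created through the role API (PUT /_security/role/&lt;role-name&gt;). The sketch below only builds the request body; the role privileges match the list above, while the index name is an example:

```python
import json

# Sketch: role definition granting the privileges listed above.
# Send this body to PUT /_security/role/<role-name> on the Elasticsearch server.
role_body = {
    "cluster": ["read_ccr"],
    "indices": [
        {
            "names": ["kibana_sample_data_logs"],  # example index the ETL will query
            "privileges": ["read"],
        }
    ],
}
print(json.dumps(role_body, indent=2))
```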

Verify Access:

Unix, Linux and Windows: 

Metricbeat needs to be installed on all the host machines, and the "System" module needs to be enabled.

The user needs read rights to the Metricbeat index. If you want to query across all indexes matching a regular expression, the user needs rights to every Metricbeat index that matches it. By default, Metricbeat uses metricbeat-* as the index name pattern; if you have modified the Metricbeat index name, you need to specify the new pattern in the Elasticsearch extractor's configuration. 

Run the following curl command on the ETL engine machine to check whether you can query the index:

curl -X POST -k -v -H "Content-Type: application/json" -d '{"_source":["event.module"], "query": {"bool": {"filter": [{"term":{"event.module":"system"}}]}}}' "<http|https>://<FQDN>/<metricbeat-index>/_search?scroll=10s&size=5" -u <user-name>:<password>


If the query succeeds, the output looks like this:

{
    "_scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAACGKEWSG1wTnZ4Ni1UdC1OVTZxUjVVMG5xQQ==",
    "took": 1,
    "timed_out": false,
    "_shards": {
        "total": 1,
        "successful": 1,
        "skipped": 0,
        "failed": 0
    },
    "hits": {
        "total": {
            "value": 14074,
            "relation": "eq"
        },
        "max_score": 1.0,
        "hits": [
            {
                "_index": "kibana_sample_data_logs",
                "_type": "_doc",


Generic Extractor:

Use the following curl command on the index you want to query:


curl -k -v "<http|https>://<FQDN>/<index>/_search?scroll=10s&size=5" -u <user-name>:<password>


Connectors configuration

Common settings for all connectors

The following common settings are valid for all connectors of "Moviri Integrator for BMC Helix Capacity Optimization - Elasticsearch"; they are presented in the "Elasticsearch - Setting" configuration tab.

Property Name | Value Type | Required? | Default | Description
------------- | ---------- | --------- | ------- | -----------
Elasticsearch Url | String | Yes | | The web address where the Elasticsearch instance can be reached, in the format http(s)://<fqdn><:port>
Authentication Method | String | Yes | | The authentication method: either None, or Basic, which uses a username and password
Username | String | No | | Username, when the authentication method is set to Basic
Password | String | No | | Password, when the authentication method is set to Basic
Default last counter | Date | Yes | | Date and time to start the extraction from, on the first execution
Max Hour to extract | Integer | Yes | 24 | Maximum number of hours' worth of data to extract in a single execution
Data granularity | String | Yes | 1h | The granularity of the extracted data
How many seconds should a scroll id live (default is 60) | String | No | 60 | How long a scroll id should live


See further specific instructions for each extractor:

 Configuring Elasticsearch Generic Extractor

The "Moviri– Elasticsearch Generic Extractor" connector aims at importing almost any Business Driver or System metric contained in your Elasticsearch instance that are not specifically mapped by the other Elasticsearch connectors provided by Moviri.

It works in a similar fashion to the built-in 'Generic' connectors available in BMC Helix Capacity Optimization and its main use case is the analysis from a capacity management perspective of custom metrics, for example:

  • Baselining

  • Historical analysis, seasonality identification

  • Trending and forecasting

  • Correlation with other metrics (especially infrastructure utilization) already present in BMC Helix Capacity Optimization (or in their turn imported from Elasticsearch) to enable capacity modeling and what-if scenarios


In order to do so it must be provided with:

  • An Elasticsearch query for retrieving results, or an Elasticsearch aggregation that provides values aggregated over a time frame

  • A description of how the results of the query map into the BMC Helix Capacity Optimization data model


At each execution, the connector:

  • Executes the provided search query or aggregation

  • Constrains it with the extraction time frame and data granularity

  • Retrieves the result set of the query

  • Transforms the result set according to the specified mapping, producing BMC Helix Capacity Optimization datasets

  • Loads the datasets into BMC Helix Capacity Optimization

  • Stores the most recent timestamp, to be used as the lower time boundary in the next execution

How to specify the search query for retrieving results

  1. Two building blocks can be used for retrieving results: a search query and an aggregation.

    1. Query: the Elasticsearch extractor supports Query DSL. The extractor adds a time query to ensure the results fit the extraction time frame at the selected time interval. Input the query starting from the query object, that is, the part after "query": (including the surrounding braces).

    2. Aggregation: the Elasticsearch extractor supports metric aggregations and bucket aggregations. The extractor nests the aggregation under a date histogram, built from the data extraction period and time interval. Input the aggregation starting from its name.

Here are some examples of how to set up the query text and aggregations in the ETL configuration:

Query:

  • The Elasticsearch query is the "query" section: "query":{<query object>}. In TSCO, set up the query text starting from the query object bracket: {<query object>} is what needs to be entered in the query text field. 
  • Make sure the query text input starts with "{" and ends with "}"
  • Example 1: get entries whose "message" field contains "restart"
    The following query object, entered as query text, requires the message field to be present and to contain "restart":

    {
    	"bool": {
                "filter": [{
                    "exists": {
                        "field": "message"
                    }
                },
                {
                	"regexp": {
                		"message":".*restart.*"
                	}
                }
                ]
            }
        }

    The ETL will automatically add a time range filter on the timestamp to this query. Note that the time range is added only when a compound query is used. Unless specified in the query, leaf queries and full-text queries will download all the data, and timestamps are then filtered based on the last counter.

    ,{
                	"range":{
                		"@timestamp":{
                			  "gte": "2020-04-24T00:00:00.000+0000",
                            "lte": "2020-04-27T00:15:00.000+0000"
                		}
                	}
                }

    And it will look like this:

    {
      "query":{
    	"bool": {
                "filter": [{
                    "exists": {
                        "field": "message"
                    }
                },
                {
                	"regexp": {
                		"message":".*restart.*"
                	}
                },{
                	"range":{
                		"@timestamp":{
                			  "gte": "2020-04-24T00:00:00.000+0000",
                              "lte": "2020-04-27T00:00:00.000+0000"
                		}
                	}
                }
                ]
            }
        }
    }
  • Example 2: list all data entries that are from "apache" event module:

    {
    	 "term": {
                "event.module": "apache"
            }
    }

     Aggregations: 

  • Aggregations are defined in the "aggs" section of an Elasticsearch request. In the ETL configuration, the aggregation text should always start with "{<aggregation name>" and end with "}"
  • The ETL will automatically add a date histogram to split the aggregation into time buckets. 

Example 1: the average of the value of the field "system.memory.used.pct"; "avg_value" is the aggregation name you define.

{
	"avg_value": {
		"avg": {
			"field": "system.memory.used.pct"
		}
	}
}

The ETL will add the time frame based on the last counter, the maximum extraction period, and the data granularity:

{
"aggs": {
		"default_agg": {
			"date_histogram": {
				"field": "@timestamp",
				"fixed_interval" (or "interval" if version is pre 6.8): "15m"
			},
			"aggs": {
				"avg_value": {
					"avg": {
						"field": "system.memory.used.pct"
					}
				}
			}
		}
	}
}


  • Example 2: average system memory used pct per host:

    {
    	"entity_name": {
    		"terms": {
    			"field": "host.name"
    		},
    		"aggs": {
    			"avg_value": {
    				"avg": {
    					"field": "system.memory.used.pct"
    				}
    			}
    		}
    	}
    }

    After adding the date histogram it will look like this: 

    {
    	"aggs": {
    		"default_agg": {
    			"date_histogram": {
    				"field": "@timestamp",
    				"fixed_interval" (or "interval" if version is pre 6.8): "15m"
    			},
    			"aggs": {
    				"entity_name": {
    					"terms": {
    						"field": "host.name"
    					},
    					"aggs": {
    						"count_value": {
    							"value_count": {
    								"field": "system.memory.used.pct"
    							}
    						}
    					}
    				}
    			}
    		}
    	}
    }
  • Example 3: count how many restarts per day come from the "apache" event module:

In this case, a query and an aggregation can be used together; alternatively, a filter aggregation can be used.

Query text:
{"bool": {
            "filter": [{
                "exists": {
                    "field": "message"
                }
            },
            {
            	"regexp": {
            		"message":".*restart.*"
            	}
            }
            ]
        }
    }


Aggregation Text:
{"entity_name": {"terms": {"field": "event.module" }}}

After the ETL adds the timestamp constraints to both sections, the request looks like this:


{
	"query": {
		"bool": {
			"filter": [{
					"exists": {
						"field": "message"
					}
				},
				{
					"regexp": {
					"message": ".*restart.*"
					}
				}, {
					"range": {
						"@timestamp": {
							"gte": "2020-04-24T00:00:00.000+0000",
							"lte": "2020-04-27T00:15:00.000+0000"
						}
					}
				}
			]
		}
	},
	"aggs": {
		"default_agg": {
			"date_histogram": {
				"field": "@timestamp",
				"fixed_interval" (or "interval" if version is pre 6.8): "1d"
			},
			"aggs": {
				"term_count": {
					"terms": {
						"field": "event.module"
					}
				}
			}
		}
	}
}
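The time-range and date-histogram wrapping shown in the examples above can be sketched as follows. This is a hypothetical helper, not the connector's actual code; it assumes a compound bool query, and versions before 6.8 would use "interval" instead of "fixed_interval".

```python
import json

def wrap_request(query_obj=None, agg_obj=None, gte=None, lte=None, interval="15m"):
    """Build the request body sent to Elasticsearch.

    query_obj: the query object entered in the Query Text field.
    agg_obj:   the named aggregation entered in the Aggregation Text field.
    """
    body = {}
    if query_obj is not None:
        query = json.loads(json.dumps(query_obj))  # deep copy, leave input intact
        # The range filter is appended only to compound (bool) queries.
        if "bool" in query:
            time_range = {"range": {"@timestamp": {"gte": gte, "lte": lte}}}
            query["bool"].setdefault("filter", []).append(time_range)
        body["query"] = query
    if agg_obj is not None:
        # The user aggregation is nested under a date histogram ("default_agg").
        body["aggs"] = {
            "default_agg": {
                "date_histogram": {"field": "@timestamp", "fixed_interval": interval},
                "aggs": agg_obj,
            }
        }
    return body
```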


In both cases search results must meet the following requirements:

  • At least one value field must be present containing the data series to be transferred

  • A timestamp field must be present containing the timestamp

  • The path at which each field is found in the results JSON, expressed in dot notation. For example, given a hit

    {"value":"123", "timestamp":"2020-02-01T00:00:00.000Z", "host":{"name":"hostname1"}}

    if "123" is the metric value, then the path of the value is "value"; if "hostname1" is needed, then the path is "host.name". 
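Dot-notation resolution against a single result can be illustrated with a small helper (illustrative only; `get_by_path` is not part of the connector):

```python
def get_by_path(doc, path):
    """Walk a nested dict with a dot-notation path, e.g. "host.name"."""
    current = doc
    for key in path.split("."):
        current = current[key]
    return current

# A single hit, shaped like the example above:
hit = {
    "value": "123",
    "timestamp": "2020-02-01T00:00:00.000Z",
    "host": {"name": "hostname1"},
}
```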

How to retrieve the value data:

Because the value can come from either the query or the aggregations, the extractor asks you to select which component it comes from: "_Source.Hit" means it comes from the query, while "Aggregations" means it comes from the aggregation.

How to select the appropriate dataset

When creating an ETL task that uses the 'Moviri – Elasticsearch Generic Extractor', the first configuration step is associating one or more datasets to the ETL task.

  • Select 'WKLDAT' if the task is going to import 'Business Drivers'

  • Select 'SYSDAT' if the task is going to import 'System' data

  • Select 'APPDAT' if the task is going to import Application configuration data. This case is not covered in this document; please refer to support for more information if required

 


How to map search query results to BMC Helix Capacity Optimization data model

The most important configuration of the connector is the definition of how to map each time series extracted from Elasticsearch to:

  • Entities (either Business Drivers or Systems);

  • Metrics (also referred to as resources);

  • Metric Subobjects (also referred to as subresources).

  • Weight (optional)

These mappings are very similar to the ones required by the built-in Generic - Database extractor:

  • Entities:

    • For Business Drivers (WKLDAT dataset) this is equivalent to DS_WKLDNM column, representing "Business driver lookup identifier"

    • For Systems (SYSDAT dataset) this is the equivalent of the DS_SYSNM column, representing the "System lookup identifier"

  • Metrics (also referred to as resources)

    • Equivalent of the OBJNM column

  • Metric Subobjects (also referred to as subresources)

    • Equivalent of the SUBOBJNM column

    • required when the metric is not of type 'GLOBAL', for example for every metric that has a sub-category dimension

  • Weight

    • Equivalent of the WEIGHT column

    • Required when the metric needs a weighting factor for calculating averages in TSCO

For any time series the connector allows the following mapping options for the three dimensions mentioned above:

  • Entities

    • Use the name of the Elasticsearch search query column containing the series values

    • Input a fixed string

    • Use values from another Elasticsearch search query column to identify entity name

  • Metric (or resource)

    • Select among some proposed generic metrics (applicable to Business Drivers entities)

    • Input a specific metric (Advanced)

  • Metric Subobject (or subresource), when metric is not of type GLOBAL

    • Input a fixed string

    • Use values from another Elasticsearch search query column to identify subobject

Some examples are reported in the remainder of this paragraph to facilitate the application of the above-mentioned principles: each example includes the Elasticsearch search query result set, the Elasticsearch-to-BMC Helix Capacity Optimization mappings, and the resulting BMC Helix Capacity Optimization series, as configured in the "Elasticsearch – Query and Mapping" ETL task configuration tab.

EXAMPLE 1: mem_util on two hosts

{
	"query":{
        "bool": {
            "filter": [{
                "exists": {
                    "field": "system.memory.used.pct"
                }
            },{
             "regexp":{"host.os.platform":"win.*|ubuntu.*|centos.*"}
            },
            {
             "range": {
                    "@timestamp": {
                        "gte": "2020-03-01T00:00:00.000+0000",
                        "lte": "2020-03-01T00:15:00.000+0000"
                    }
                }
            }
            ]
        }
    }
}


@timestamp | ip-0-0-0-9 | EC2AMAZ-GCPC9NK
---------- | ---------- | ---------------
2019-06-05T00:00:00.000+0200 | 85% | 65%
2019-06-06T00:00:00.000+0200 | 86% | 87%


In this example, the search query on Elasticsearch returns several entries. Each entry is a date bucket with a different host.name, carrying MEM_UTIL. 

  • Select 'SYSDAT' as dataset, as this data is related to system

  • As 'Entity', we specify to use the name of the field "host.name" (so that systems 'ip-0-0-0-9' and 'EC2AMAZ-GCPC9NK' will be created)

  • As 'Metric', the memory utilization percentage maps to MEM_UTIL through the "Advanced" option

  • As 'Metric subresource', the selected metric is global and does not require a 'subobject', so we are not required to specify it


This is the final mapping:

Query Column Alias | Value Field Position | Value Field | Mapping | Resulting Series (Entity / Metric / Subresource)
------------------ | -------------------- | ----------- | ------- | ------------------------------------------------
memutil | _source.hit | system.memory.used.pct | Entity: use the name of the query column (host.name); Metric: "Advanced" -> MEM_UTIL; Subresource: no need to specify, as the metric is global | host.name / MEM_UTIL / GLOBAL



EXAMPLE 2: number of restarts

{
	"query": {
		"bool": {
			"filter": [{
					"exists": {
						"field": "message"
					}
				},
				{
					"regexp": {
						"message": ".*restart.*"
					}
				}, {
					"range": {
						"@timestamp": {
							"gte": "2020-04-24T00:00:00.000+0000",
							"lte": "2020-04-27T00:00:00.000+0000"
						}
					}
				}
			]
		}
	},
	"aggs": {
		"default_agg": {
			"date_histogram": {
				"field": "@timestamp",
				"fixed_interval" (or "interval" if version is pre 6.8): "1d"
			},
			"aggs": {
				"entity_name": {
					"terms": {
						"field": "event.module"
					}
				}
			}
		}
	}
}



@timestamp | entity_name.buckets.key | restart
---------- | ----------------------- | -------
2020-04-25T00:00:00.000+0000 | apache | 1
2020-04-26T00:00:00.000+0000 | apache | 1
2020-04-27T00:00:00.000+0000 | apache | 1

In this example, the query on Elasticsearch returns the number of daily restarts of service "apache" (for this or any other service name, a corresponding business driver will be created). The restart count and the service name are both found under "Aggregations": the service name "apache" appears as "entity_name.buckets.key" and the count value as "entity_name.buckets.doc_count". 

  • Select 'WKLDAT' as dataset, as this data is related to Business Drivers

  • As 'Entity', we specify to use the values in the entity name column (so that one business driver per service will be created)

  • As 'Metric', the number of daily restarts maps to "a count of events over time" and thus fits the description of the metric

  • As 'Metric subresource', the selected metric is global and does not require a 'subobject', so we are not required to specify it

This is the final mapping:

Value Alias | Value Field | Value Position | Mapping | Resulting Series (Entity / Metric / Subresource)
----------- | ----------- | -------------- | ------- | ------------------------------------------------
count | entity_name.buckets.doc_count | Aggregations | Entity: use the field containing the service name, in this case entity_name.buckets.key; Metric: select "a count of events/items/operations over time"; Subresource: no need to specify, as the metric is global | apache / TOTAL_EVENTS / GLOBAL

 

 

 

EXAMPLE 3: daily number of restarts, split by event.module. 

In this example, we want to use the service "apache" as a subresource, and we give a fixed business driver name, "srvA".

 

@timestamp | Fixed service name | entity_name.buckets.key | restart
---------- | ------------------ | ----------------------- | -------
2020-04-25T00:00:00.000+0000 | srvA | apache | 1
2020-04-26T00:00:00.000+0000 | srvA | apache | 1
2020-04-27T00:00:00.000+0000 | srvA | apache | 1

This is the mapping:

 

  • Select 'WKLDAT' as dataset, as this data is related to Business Drivers

  • As 'Entity', we specify a fixed name ("srvA"), so that a single business driver will be created

  • As 'Metric', the number of daily restarts split by event module maps to "a number of concurrent/standing/open items split by a sub-category"

  • As 'Metric subresource', we use the values of the event module column, so each event module becomes a subresource

This is the final mapping:

Value Alias | Value Field | Value Position | Mapping | Resulting Series (Entity / Metric / Subresource)
----------- | ----------- | -------------- | ------- | ------------------------------------------------
count | entity_name.buckets.doc_count | Aggregations | Entity: use the fixed service name "srvA"; Metric: "a number of concurrent/standing/open items split by a sub-category"; Subresource: use the field containing the service name, in this case entity_name.buckets.key | srvA / BYSET_EVENTS_CURRENT / apache


Note that in this case, as new event modules appear in the query results, new subresources will be attached to the existing BMC Helix Capacity Optimization entity.

 

 How to transform the value:

The extractor asks whether to parse the data; if so, it prompts for a regex used to parse the value. Leave it blank if not using a regex.

The extractor then asks whether the value is a JSON object; in that case, input the path of the value within the JSON object, using dot notation.

Finally, the extractor asks whether to apply a multiplication factor to the results. For example, if the result is 80%, you can enter a factor of 0.01 to turn it into 0.8, the accepted format for percentages in TSCO. 
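The three optional transformation steps can be sketched as a single pipeline. This is a hypothetical helper mirroring the prompts above (regex parse, then JSON path extraction, then factor multiplication), not the connector's actual code:

```python
import json
import re

def transform_value(raw, regex=None, json_path=None, factor=None):
    """Apply the optional steps in prompt order: regex parse,
    JSON-path extraction, factor multiplication."""
    value = raw
    if regex:
        match = re.search(regex, str(value))
        value = match.group(0) if match else None
    if json_path is not None and value is not None:
        obj = json.loads(value) if isinstance(value, str) else value
        for key in json_path.split("."):
            obj = obj[key]
        value = obj
    if factor is not None and value is not None:
        value = float(value) * factor
    return value
```

For example, the value "80%" parsed with the regex `\d+` and the factor 0.01 yields 0.8.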

Full list of configuration properties

The following specific settings are valid for the connector "Moviri – Elasticsearch Generic Extractor"; they are presented in the "Elasticsearch – Query and Mapping" configuration panel.

Property Name | Condition | Type | Required? | Default | Description
------------- | --------- | ---- | --------- | ------- | -----------
Index Regular Expression | | String | Yes | | The index name regular expression
Use Query | | Selection | No | | Whether to use an Elasticsearch DSL query, starting from {the query object}
Use Aggregations | | Selection | No | | Whether to use Elasticsearch aggregations
Query Text | Use Query = "Yes" | String | No | | The query section of the Elasticsearch request, starting from the query object
Aggregation Text | Use Aggregations = "Yes" | String | No | | The aggregation section of the Elasticsearch request, starting from the aggregation object
Value Alias to import | | String | Yes | | An alias representing the values to import, no more than 10 characters
Timestamp column | | String | Yes | @timestamp | The name of the result set column containing the records' timestamps

-- The following properties are repeated for each value column specified in "Value Columns to import" --

Is the value from Source.Hit or Aggregations? | | Selection | Yes | Source.hit | Position of the results, either in the _source.hit or in the Aggregations section
The path of the value field (dot notation), don't add aggregations or _source.hit | | String | Yes | | The value field path, in dot notation
Use <<columnX>> as BCO Entity Name? | | Selection | Yes | Yes | If set to Yes, tells the connector to use the value column name as the BMC Helix Capacity Optimization entity name
BCO Entity Name | Use <<columnX>> as Entity Name? = "No" | Selection | Yes | | Whether to use a "Fixed" string or the values of another result set column ("Based on Query Column") as the BMC Helix Capacity Optimization entity name
Entity Name Value = | Entity Name = "Fixed" | String (max length 28) | Yes | | The BMC Helix Capacity Optimization entity name
Query Column for Entity Name = | Entity Name = "Based on Query Column" | String (max length 28) | Yes | | The query column from which to read the BMC Helix Capacity Optimization entity name
BCO Metric: <<columnX>> represents | | Selection | Yes | | The BMC Helix Capacity Optimization metric used to map the data series. A textual description is provided for commonly used Business Driver metrics (see Table 1); an option is also present to manually input the metric name
BCO Metric = | Metric: <<columnX>> represents = "Specify Metric (Advanced)" | String | Yes | | A valid BMC Helix Capacity Optimization metric that represents the data series to be imported
Subobject (sub-category) Name | "Metric: <<columnX>> represents" contains a sub-category or is equal to "Specify Metric (Advanced)" | Selection | Yes | | Whether to use a "Fixed" string or the values of another result set column ("Based on Query Column") as the subresource name
Subobject Name Value = | Subobject Name = "Fixed" | String | Yes | | The subobject (subresource) name
Query Column for Subobject Name = | Subobject Name = "Based on Query Column" | String (max length 28) | Yes | | The query column from which to read the subobject (subresource) name
Number of events/operations (weight of the response time) | "Metric: <<columnX>> represents" refers to a response time or is equal to "Specify Metric (Advanced)" | String | Yes | | Metrics referring to response times (or custom metrics) need a weight in order to compute averages correctly. This property specifies where to read the weight: "Not specified", "Fixed", or "Based on Query Column"
Weight Value = | Number of events/operations (weight of the response time) = "Fixed" | Integer | Yes | | The value of the weight
Query Column for Weight = | Number of events/operations (weight of the response time) = "Based on Query Column" | String | Yes | | The query column from which to read the weight
Do you need to parse the value | | Selection | Yes | | Whether the value needs to be parsed
Parse value regex | | String | No | | The regular expression used to parse the value. For example, for the value "apple 1", the regex "\d" extracts the result 1
Can the value be parsed as Json Object | | Boolean | Yes | No | Whether the value can be parsed as a JSON object
Json Path that the value can be used as value | | String | No | | The dot-notation path to the field to extract from the value JSON object, for example object.object1.object2
Apply factor to the value? | | Yes/No | No | No | Whether to apply a multiplication factor to the value
Factor value | | String | No | | The factor to be applied to the value


Table 1 - BMC Helix Capacity Optimization Metrics descriptions

Description | Corresponding BMC Helix Capacity Optimization Metric
----------- | ----------------------------------------------------
a count of events/items/operations over time | TOTAL_EVENTS
a number of concurrent/standing/open items/customers... | EVENTS_CURRENT
the number of users in a system | USERS_CURRENT
a rate of events/items/operations over time (events/s) | EVENT_RATE
a response time | EVENT_RESPONSE_TIME
a count of events/items/operations over time split by a sub-category | BYSET_EVENTS
a number of concurrent/standing/open items split by a sub-category | BYSET_EVENTS_CURRENT
the number of users in a system split by a sub-category | BYSET_USERS_CURRENT
a rate of events/items/operations over time (events/s) split by a sub-category | BYSET_EVENT_RATE
a response time split by a sub-category | BYSET_RESPONSE_TIME
Specify Metric (Advanced) | (manually input a metric name)


  Configuring Elasticsearch Unix and Windows Extractor 

The "Moviri – Elasticsearch Unix-Windows Extractor" connector extracts server performance data that is indexed by an Elasticsearch instance in a standard fashion, and loads it into BMC Helix Capacity Optimization. The data is indexed by Metricbeat, an agent installed on the monitored machines that ships system module data to the Elasticsearch cluster.

To use the Moviri – Elasticsearch Unix-Windows Extractor, Metricbeat needs to be installed on the host machines to be monitored, with the System module enabled on each host.

Full list of configuration properties

The following specific settings are valid for the connector "Moviri – Elasticsearch Unix-Windows Extractor"; they are presented in the "Elasticsearch – Unix and Windows" configuration tab.

Property Name | Value Type | Required? | Default | Description
------------- | ---------- | --------- | ------- | -----------
Metricbeat Index Regex (Default is metricbeat-*) | String | No | metricbeat-* | The index name regular expression for Metricbeat, if different from the default "metricbeat-*"
Import Unix/Linux hosts | Yes/No | Yes | | Whether to import Unix/Linux hosts
Import Windows hosts | Yes/No | Yes | | Whether to import Windows hosts
Select which blacklist or whitelist type you want to use | Selection | No | | The type of filter to use: either SQL-like expressions or an Elasticsearch regexp. The SQL-like filter can be a semicolon-separated list of filters, while the Elasticsearch regexp must be a single regular expression
Whitelist hostname regex (Sql like) | String | No | | A semicolon-separated list representing the only hosts whose data is to be extracted. Each item of the list can be a SQL-like expression. Empty means no filter
Blacklist hostname regex (Sql like) | String | No | | A semicolon-separated list of hosts whose data is to be excluded. Each item of the list can be a SQL-like expression. Empty means no filter
Whitelist ElasticSearch regexp | String | No | | A single Elasticsearch regular expression matching the only hosts whose data is to be extracted. Empty means no filter
Blacklist ElasticSearch regexp | String | No | | A single Elasticsearch regular expression matching hosts whose data is to be excluded. Empty means no filter
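The SQL-like whitelist semantics can be approximated with a small sketch. These are hypothetical helpers assuming standard SQL LIKE wildcards (% for any run of characters, _ for a single character); the connector's exact matching rules may differ:

```python
import re

def like_to_regex(pattern):
    """Translate a SQL LIKE pattern (% = any run of characters,
    _ = exactly one character) into an anchored regular expression."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return "^" + "".join(parts) + "$"

def host_matches(hostname, filters):
    """True when hostname matches any item of a semicolon-separated
    list of SQL LIKE patterns."""
    return any(re.match(like_to_regex(item.strip()), hostname)
               for item in filters.split(";") if item.strip())
```

For example, with the whitelist "web%;db%", the host "webserver01" is imported while "appserver01" is filtered out.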

Datasets managed by this integration

An ETL task that uses the 'Moviri – Elasticsearch Unix-Windows Extractor' will only allow you to import the SYSDAT (System data) dataset.

BMC Helix Capacity Optimization entities and metrics

The connector will create a System of type "Generic" for each imported host. The following are the lists of populated metrics for Unix and Windows.

Metrics with * were previously custom, remapped as standard after v 2.3.00

BMC Helix Capacity Optimization Metric | Metricbeat System Module Metric | Factor | Stats
-------------------------------------- | ------------------------------- | ------ | -----
OS_TYPE | | |
OS_FAMILY | | |
OS_VER | | |
CPU_UTIL | 1 - CPU_IDLE | 1 | avg
CPU_UTIL_SYSTEM | system.cpu.system.pct | 1 | avg
CPU_UTIL_USER | system.cpu.user.pct | 1 | avg
CPU_UTIL_NICE | system.cpu.nice.pct | 1 | avg
CPU_UTIL_WAIO | system.cpu.iowait.pct | 1 | avg
CPU_NUM | system.cpu.cores | 1 | avg
MEM_FREE | system.memory.free | 1 | avg
MEM_USED | system.memory.used.bytes | 1 | avg
MEM_UTIL | system.memory.used.pct | 1 | avg
MEM_REAL_USED | system.memory.actual.used.bytes | 1 | avg
MEM_REAL_UTIL | system.memory.actual.used.pct | 1 | avg
SWAP_SPACE_FREE | system.memory.swap.free | 1 | avg
SWAP_SPACE_TOT | system.memory.swap.total | 1 | avg
SWAP_SPACE_USED | system.memory.swap.used.bytes | 1 | avg
SWAP_SPACE_UTIL | system.memory.swap.used.pct | 1 | avg
NET_IN_BYTE_RATE | system.network.in.bytes | 1 | avg
NET_IN_ERROR_RATE | system.network.in.errors | 1 | avg
NET_IN_PKT_RATE | system.network.in.packets | 1 | avg
NET_OUT_BYTE_RATE | system.network.out.bytes | 1 | avg
NET_OUT_ERROR_RATE | system.network.out.errors | 1 | avg
NET_OUT_PKT_RATE | system.network.out.packets | 1 | avg
UPTIME | system.uptime.duration.ms | 1000 | avg
TOTAL_FS_FREE | system.filesystem.free | 1 | avg
TOTAL_FS_SIZE | system.filesystem.total | 1 | avg
TOTAL_FS_USED | system.filesystem.used.bytes | 1 | avg
TOTAL_FS_UTIL | system.filesystem.used.pct | 1 | avg

 



 
