Traffic capture and tapping points for BMC Real End User Experience Monitoring Software Edition


If you deploy the Cloud Probe on a dedicated system, connect this system to a network tap that is receiving the traffic you want to monitor. The Cloud Probe uses the network tap to collect the network traffic activity and sends the traffic data to a Real User Collector. Each Cloud Probe installed on the system can send data to only one Real User Collector, but each Collector can receive data from several Cloud Probes.

The following information describes the recommended traffic capture methods for enabling end-user experience monitoring on a dedicated system:

Traffic capture methods

If you install the Cloud Probe on a dedicated Windows or Linux computer on which you have enabled promiscuous mode (accept mode) on the network interface, you can receive network traffic from a network tap, a mirror port, or a mirror pool.
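The following is a minimal sketch of checking and enabling promiscuous mode on a Linux capture interface with the iproute2 ip utility, run as root; eth1 is a placeholder for the interface that is connected to the tap or mirror port (on Windows, the packet capture driver normally enables promiscuous mode for you):

  ip link show eth1                  # the PROMISC flag appears when promiscuous mode is on
  ip link set dev eth1 promisc on    # enable promiscuous mode; not persistent across reboots

To make the setting persistent, add it to your distribution's network interface configuration.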

Network tap

A network tap copies network traffic and sends it to the computer on which the Cloud Probe is installed. Because the tap is a dedicated device, it minimizes the chance that network traffic is not copied to the Cloud Probe.

Using a network tap is the recommended method for monitoring the network traffic because the tap is a passive device: if it malfunctions, it does not interrupt network traffic or the functioning of your application.

If you use a smart tap (from companies such as Gigamon, Net Optics, Network Critical, and Network Instruments), you can filter on IP addresses and port numbers to reduce the amount of collected traffic. Because BMC Real End User Experience Monitoring monitors only HTTP and HTTPS traffic, you can configure a smart tap to copy only traffic on the default ports for HTTP (80) and HTTPS (443). For more information about the traffic load capabilities of each Real User Collector, see Sizing-Real-User-Collector-instances.
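Whether or not the tap filters the feed, you can spot-check what is actually reaching the Cloud Probe's capture interface with a standard capture tool. The following sketch uses tcpdump with a BPF filter that matches the same default ports; eth1 is a placeholder for the capture interface, and any nonstandard HTTP or HTTPS ports that your application uses would need to be added to the filter:

  tcpdump -ni eth1 -c 20 'tcp port 80 or tcp port 443'    # print 20 matching packets without name resolution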

Although a network tap is purpose-built for copying traffic, installing or replacing a tap forces you to take a network segment offline for a period of time.

Mirror port

You can configure a mirror port on a switch to copy traffic, such as a Switched Port Analyzer (SPAN) or Encapsulated Remote SPAN (ERSPAN) port on Cisco devices, or a Roving Analysis Port (RAP) on 3Com devices.

In most cases, a switch has a spare port that you can use as a mirror port. However, the device considers mirroring a secondary function. If the device becomes overloaded, it might suspend mirroring, and the Cloud Probe will experience packet drops.

Note

  • You must be sure that the mirror port is copying traffic both to and from the application (bidirectional); see the verification sketch after this note.
  • If you deploy a Real User Collector on a Hyper-V system, you must use ERSPAN with a Generic Routing Encapsulation (GRE) tunnel to encapsulate and carry the traffic. For more information, see Cloud-Probe-deployment-use-cases.
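One way to confirm that the mirror feed is bidirectional is to capture a few packets on the interface that is connected to the mirror port and check that both directions of a conversation appear. In the following sketch, eth1 is a placeholder for the capture interface and 10.0.0.5 is a placeholder for one of the monitored web servers or virtual IPs:

  tcpdump -ni eth1 -c 40 'host 10.0.0.5 and (tcp port 80 or tcp port 443)'    # expect packets both to and from 10.0.0.5

If packets appear in only one direction, the mirror session is probably copying only transmit or only receive traffic from the source port and must be reconfigured.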

Mirror pool

You can configure a mirror pool on a load balancer, and the load balancer can be configured to filter the traffic that it copies.

In most cases, a load balancer already has a spare port that you can set up as a mirror port. However, the device considers mirroring a secondary function. If the device becomes overloaded, it might suspend mirroring, and the Real User Collector will experience packet drops.

Tapping point best practices and the effect on traffic and metrics

As shown in the following illustration, you can place the tapping point for the network traffic in front of or behind the load balancer, which decrypts the SSL traffic for your web servers.

Tapping point placement (Tapping_points.png)

The following information describes how the placement of the tapping point affects the way the Cloud Probe collects traffic data and the metrics that end-user experience monitoring reports.

1. In front of the load balancer (recommended)

Effect on traffic data collection

Placing the tapping point in front of the load balancer is the recommended method because it provides the Cloud Probe with the best visibility of end-user traffic.

Data collected in front of the load balancer is as close to the edge of your network as possible. You can consider all time spent after this point in the network as time the user spent in your infrastructure, including the load balancer; this time is reported as host latency. The time spent in the network before the load balancer is reported as network latency. For definitions of these latency metrics, see Metrics.

To see which web server responded to a particular request, that information must be carried through the load balancer. If you tap in front of the load balancer, the IP address of the web server handling the request is not visible. To gain this visibility, you must add an HTTP cookie or an HTTP header on the load balancer or the web server so that it can be parsed (see the sketch at the end of this section).

Secure traffic requirements

To monitor HTTPS traffic, if the load balancer or the web servers perform encryption and decryption, you must upload a copy of the SSL private keys to the Real User Collector.

Effect on metrics

To report back all metrics, including SSL time, tap the network traffic at only one point.

2. Behind the load balancer

Effect on traffic data collection

It is possible to place the tapping point behind the load balancer. This placement means that you must tap incoming and outgoing traffic in the same place, which reduces the visibility of end-user traffic, particularly between the end user and the load balancer.

Secure traffic requirements

To monitor HTTPS traffic, if encryption and decryption occur on the load balancer, you do not need to upload a copy of the SSL private keys to the Real User Collector.

In some cases, for SSL decryption acceleration, the load balancer decrypts the data on behalf of the servers. The load balancer might also be the endpoint for the request and re-request of data on behalf of the end user for increased security.

Effect on metrics

Some load balancers terminate the TCP session from the web browser and open a new connection to the web servers, so the network latency metric is close to zero.

Data captured at this point is closer to your servers, which means that the network time metric also includes some latency contributed by your infrastructure.

If the load balancer performs decryption on behalf of the servers, the SSL latency metric is lost.
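As noted for tapping point 1, identifying the web server behind the load balancer requires a cookie or header that the load balancer or web server inserts into the response. The following sketch is one way to spot-check whether such a value is present; the header name X-Backend-Server is purely a placeholder for whatever header or cookie name you configure, and app.example.com stands in for your monitored application:

  curl -sI https://app.example.com/ | grep -iE 'x-backend-server|set-cookie'    # list the configured header or any cookies in the response

If neither value appears, configure the load balancer or the web servers to insert one before relying on per-server visibility.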