This documentation supports the 9.0 version of BMC Remedy ITSM Deployment.


Fine tuning the network for HTTP protocol

This topic explains how to optimize the network for the HTTP protocol and transport, and how to optimize the load balancing scheme for a balanced HTTP load distribution (for network infrastructures with load balancers).

Optimizing the network for HTTP by setting the HTTP keep-alive option

HTTP keep-alive is officially supported in HTTP 1.1 and by all current browsers, such as Mozilla Firefox and Microsoft Internet Explorer versions 6 or later. HTTP keep-alive is necessary because the original HTTP 1.0 protocol requires the browser to open a TCP socket, send the HTTP request, receive the response, and then close the socket for every request. The HTTP 1.0 protocol works well but can tax the browser's performance when accessing a dynamic web application, especially an Asynchronous JavaScript and XML (AJAX) web application such as the mid tier.

An AJAX web application relies on many smaller HTTP requests to fulfill a use case. When so many HTTP requests occur, browser performance is impaired without HTTP keep-alive because the browser is constantly opening and closing sockets. This occurs especially when SSL (HTTPS protocol) is used for transport security, or when a single URL has links and references to many other resources (such as images, CSS, and JavaScript) embedded in the HTML content of the target URL.

By using HTTP keep-alive, the browser maintains the opened TCP sockets, keeping them alive (active) and re-using them for subsequent browser requests. This greatly enhances the browser’s performance because the constant opening and closing of sockets is no longer needed.
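The socket reuse described above can be demonstrated with a short, self-contained sketch using only the Python standard library (the local test server below is an illustrative stand-in for a real web server, not part of the mid tier):

```python
import http.client
import http.server
import threading

# Minimal local HTTP/1.1 server; HTTP/1.1 keeps connections alive by default.
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # required for keep-alive
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):          # silence request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One persistent client connection: both requests travel over the same socket.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
conn.getresponse().read()
port_after_first = conn.sock.getsockname()[1]   # local port of the TCP socket
conn.request("GET", "/")                        # reused -- no new TCP handshake
conn.getresponse().read()
port_after_second = conn.sock.getsockname()[1]
print(port_after_first == port_after_second)    # True: the socket was reused
conn.close()
server.shutdown()
```

Because the local port is unchanged between requests, no new TCP (or SSL) handshake occurred; with keep-alive off, each request would arrive on a freshly opened socket.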

For web applications that use pipelining, you must activate HTTP keep-alive for pipelining to work. The mid tier web application does not use pipelining, so enabling HTTP keep-alive is optional, but it improves browser performance when accessing the mid tier.


HTTP keep-alive is distinct from TCP keep-alive. HTTP keep-alive is at a higher network layer than the TCP layer.

For the purpose of this HTTP keep-alive configuration example, a simple network configuration with one network segment is used (that is, the browser is connecting directly to the web or application server with the browser acting as a client and the web server acting as the server or service provider). In the case of a more complex network with multiple network segments, HTTP keep-alive must be enabled on every network segment for optimal web performance.

For the example, the mid tier installer installs the Tomcat web server.

Specific to Tomcat, the following parameters control the HTTP keep-alive:

  • connectionTimeout — Specifies how long the web server keeps an idle TCP socket alive. (In practice, this interval should approximate the browser user's think time.)
  • maxKeepAliveRequests — Specifies how many HTTP requests the browser can submit through one opened socket before the web server closes the socket. (This is also a security precaution: capping the number of HTTP requests per socket can help mitigate denial-of-service attacks.)

A third Tomcat parameter, keepAliveTimeout, is specifically used to manage HTTP keep-alive. However, if keepAliveTimeout is not set, Tomcat defaults it to the value of connectionTimeout. Because the HTTP keep-alive time needs to be identical to the TCP idle time given by the connectionTimeout parameter, managing both through a single parameter is less error prone. For simplicity, use only the connectionTimeout parameter.


When the two parameters are set, keep-alive is turned on in Tomcat. These two parameters might be optional for other web servers or load balancers and might have different labels. The parameters are transmitted to the browser through the HTTP response headers. From these headers, the browser knows whether it must establish a new TCP socket connection when it sends out the next HTTP request.

The following figure shows the HTTP response header for a browser connecting to Tomcat with connectionTimeout set at 120 seconds and maxKeepAliveRequests set at 5000. (This response header was captured using Fiddler, a free web debugging tool from Microsoft.) For subsequent requests over the same TCP socket, the HTTP response header shows a decreasing value for the maxKeepAliveRequests count.
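The timeout and maximum-request values travel in a Keep-Alive response header of the form Keep-Alive: timeout=120, max=5000 (an optional HTTP header sent by many servers; the sample value here mirrors the capture described above). A minimal, illustrative parser:

```python
# Parse a Keep-Alive response header value such as "timeout=120, max=5000".
# The timeout is in seconds; max is the remaining request count for the socket.
def parse_keep_alive(value):
    params = {}
    for part in value.split(","):
        name, _, num = part.strip().partition("=")
        params[name] = int(num)
    return params

print(parse_keep_alive("timeout=120, max=5000"))
# {'timeout': 120, 'max': 5000}
```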

HTTP response header

The following table lists the recommended HTTP keep-alive values of the Tomcat web server that hosts the mid tier. It also shows the minimum recommendation if the hardware hosting the Tomcat has resource constraints.

Tomcat web server HTTP keep-alive recommendations

  • connectionTimeout — Recommended value: 90000 ms (minimum 60000 ms)
  • maxKeepAliveRequests — Recommended value: infinite, set as -1 (minimum 5000)

To set these parameters, locate the Connector entry in the <tomcat dir>/conf/server.xml file.

The following example shows the configuration with the connection timeout at 90 seconds and the keep-alive count at infinite:

<Connector URIEncoding="UTF-8" acceptCount="100"
    connectionTimeout="90000" maxHttpHeaderSize="8192"
    maxKeepAliveRequests="-1" maxThreads="500"
    port="80" protocol="HTTP/1.1" />

The following example shows the configuration with the connection timeout at 60 seconds and the keep-alive count at 5000:

<Connector URIEncoding="UTF-8" acceptCount="100"
    connectionTimeout="60000" maxHttpHeaderSize="8192"
    maxKeepAliveRequests="5000" maxThreads="500"
    port="80" protocol="HTTP/1.1" />

By turning on HTTP keep-alive, you can expect a transport time gain of approximately 10-30% depending on network latency — the larger the latency, the larger the gain. For SSL deployment, the gain is more dramatic — typically over 30%.


The Tomcat HTTP keep-alive configuration as explained here is for the simplest case where the browser is connected directly to Tomcat. For this case, there is one network segment with the browser being the client and the Tomcat being the server. In a complex network infrastructure with multiple network segments, the recommendation is to configure every network segment for HTTP keep-alive.

To verify that the keep-alive setting is working in the browser, use a web debugging proxy, such as Fiddler, to capture the exchanges between the browser and the server.

The following figures compare the exchanges when keep-alive is off and when it is on for HTTPS. Observe the frequency of SSL socket establishment when keep-alive is off.

HTTP keep-alive set to off

HTTP keep-alive set to on

Another method to verify that HTTP keep-alive is on is to examine the HTTP response headers. In the following figure, the Transport header of the response is tagged with Connection: Keep-Alive. In this example, a BigIP load balancer was used; it does not support the two optional keep-alive parameters that Tomcat does.

HTTP keep-alive is set in the response header


For HTTP 1.1 browser clients, the Transport header of the HTTP request is always tagged with Connection: Keep-Alive to indicate to the server that it supports HTTP keep-alive. To complete the protocol exchange process, the server must do the following:

  • Keep the TCP socket established by the browser alive and active for the browser to reuse
  • Set the Transport header of the response to Connection: Keep-Alive to indicate to the browser that the socket is being kept alive so that the browser can reuse it. However, though many products set this response header, it is optional per the HTTP 1.1 specification. As long as the HTTP response header does not explicitly specify Connection: Close, all HTTP 1.1 browsers will reuse the TCP socket.
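The reuse decision described above can be sketched as follows (the function and its shape are illustrative, not part of any product API):

```python
def socket_reusable(http_version, response_headers):
    """Decide whether a client may reuse the TCP socket after a response.
    HTTP/1.1 defaults to keep-alive unless the server explicitly sends
    Connection: close; HTTP/1.0 requires an explicit keep-alive opt-in."""
    connection = response_headers.get("Connection", "").lower()
    if http_version == "HTTP/1.1":
        return connection != "close"
    return connection == "keep-alive"

print(socket_reusable("HTTP/1.1", {}))                       # True
print(socket_reusable("HTTP/1.1", {"Connection": "close"}))  # False
print(socket_reusable("HTTP/1.0", {}))                       # False
```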


Optimizing the load balancing scheme for the web tier

If you have multiple mid tier instances deployed in your environment (each supported by a single instance of Tomcat), a load balancer is necessary to distribute the browser load across the mid tier instances. Unless you have set up your mid tier instances as a web cluster (with a mechanism for sharing HTTP sessions across every web application instance of the cluster), the load balancing scheme must provide HTTP session affinity.

Two common methods to ensure HTTP session affinity load balancing are source IP binding and cookie insert. BMC recommends the cookie-insert method for the following reasons:

  • The browsers use HTTP protocol, so using cookies for load balancing is most appropriate for the HTTP layer.
  • If the browser clients are behind Network Address Translation (NAT) or an outbound web proxy, the cookie-insert method still ensures a balanced load distribution. In contrast, source IP binding does not provide an even distribution in this case, because all clients appear to come from the single IP address of the proxy.
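The cookie-insert scheme can be sketched as follows (the backend names and cookie name are hypothetical; a real load balancer implements this in its own configuration, not in application code):

```python
import random

BACKENDS = ["midtier-1", "midtier-2", "midtier-3"]  # hypothetical instances
COOKIE = "LB_STICKY"                                # hypothetical cookie name

def route(request_cookies):
    """Cookie-insert affinity: pin a session to the backend named in the
    cookie; pick (and record) a backend for first-time clients."""
    backend = request_cookies.get(COOKIE)
    if backend not in BACKENDS:          # first request, or stale cookie
        backend = random.choice(BACKENDS)
    return backend, {COOKIE: backend}    # response sets/refreshes the cookie

# Routing keys on the cookie, not the source address, so clients sharing one
# NAT or proxy IP are still spread across backends.
first_backend, cookies = route({})
next_backend, _ = route(cookies)
print(next_backend == first_backend)     # True: the session stays pinned
```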

Off-loading SSL when using HTTPS (HTTP over SSL)

Most web deployments are done through HTTPS (HTTP over SSL) for security reasons.

Tomcat is not as efficient at handling SSL as a hardware load balancer or the Apache Web Server. If you require SSL in your deployment, offload the SSL handling to the load balancer. If you do not have a hardware load balancer, configure the Tomcat instance with the Apache Web Server to handle SSL. Offloading the SSL handling to a single entity also centralizes certificate management to this single entity, making certificate installation and management much easier.

If you have Tomcat handle the SSL layer, adjust the JVM heap allocation accordingly, because SSL handling increases the CPU and heap usage of the JVM hosting the Tomcat instance. See JVM runtime analysis.
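For example, the heap can be raised through Tomcat's setenv script, which the startup scripts source if present (the sizes below are illustrative only; derive real values from your own JVM runtime analysis):

```shell
# <tomcat dir>/bin/setenv.sh -- illustrative heap sizes only
CATALINA_OPTS="$CATALINA_OPTS -Xms1024m -Xmx2048m"
export CATALINA_OPTS
```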

When deploying a web application over SSL:

  • Secure only the necessary network segments because encryption and decryption are resource intensive.
  • Activate HTTP keep-alive because SSL sockets are expensive for a browser to establish, so reuse them when possible.