
Fine tuning the web infrastructure for the Mid Tier


When you deploy a web application in a web container, you must configure the web container properties to support the web application for the targeted workload, including properties such as the number of HTTP service threads, the JVM heap size, the garbage collection model, and so on. The guideline for such configuration is specific to the behavior of the web application and is generally provided with the web application.

For the Remedy AR System platform, the Mid Tier installer is packaged with the open source Apache Tomcat servlet container, so these topics focus on Tomcat. If you use a different web container, apply the recommendations accordingly.

The out-of-the-box installation of the mid tier and Tomcat with the default configuration works well for most small deployments with a user load of up to 300 users, based on BMC's standard 300-user ITSM workload. For larger hardware and higher workloads, the out-of-the-box configuration must be revised for better performance and hardware usage. The guidelines in this section show how to revise these configurations.

For reference, the optimization metrics are the use case response times as observed by your browser users.

Optimizing the network for HTTP by setting the HTTP keep-alive option

HTTP keep-alive is officially supported in HTTP 1.1 and by all current web browsers. HTTP keep-alive is necessary because it saves the overhead of re-establishing TCP sockets for every request, as in the original HTTP 1.0 protocol, especially for Asynchronous JavaScript and XML (AJAX) web applications such as the mid tier.

By using HTTP keep-alive, the browser maintains the opened TCP sockets, keeping them alive (active) and re-using them for subsequent browser requests. This greatly enhances the browser’s performance because the constant opening and closing of sockets is no longer needed.

Note

HTTP keep-alive is distinct from TCP keep-alive, which is maintained at the OS layer.

The simplest network configuration has a single network segment in which the browser connects directly to the web server: the browser acts as the client, and the web server acts as the server, or service provider. In a more complex network with multiple network segments, HTTP keep-alive must be enabled on every network segment for optimal web performance.

For this example, the Mid Tier installer installs the Tomcat web server.

Specific to Tomcat, the following parameters control the HTTP keep-alive:

  • connectionTimeout — Specifies how long the web server keeps an idle TCP socket alive. (In terms of real-world usage, target this interval to approximately the browser user's think time.)
  • maxKeepAliveRequests — Specifies how many HTTP requests the browser can submit through one opened socket before the web server closes the socket. (This is a security precaution: capping the number of HTTP requests per socket can help mitigate denial-of-service attacks.)

A third Tomcat parameter, keepAliveTimeout, is specifically used to manage HTTP keep-alive. If keepAliveTimeout is not set, Tomcat defaults it to the value of connectionTimeout. Because the HTTP keep-alive time needs to be identical to the TCP idle time given by the connectionTimeout parameter, managing both through that single parameter is less error prone.
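For example, the following Connector sketch (illustrative values only) shows keepAliveTimeout set explicitly to the same value as connectionTimeout; omitting keepAliveTimeout and letting it default to connectionTimeout achieves the same result with one less parameter to keep in sync:

<Connector port="80" protocol="HTTP/1.1"
    connectionTimeout="90000"
    keepAliveTimeout="90000"
    maxKeepAliveRequests="-1"/>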

For more information, see http://tomcat.apache.org/tomcat-7.0-doc/config/http.html.

When these two parameters are set, keep-alive is turned on in Tomcat. These two parameters might be optional for other web servers or load balancers and might have different labels. The parameters are transmitted to the browser through the HTTP response headers, so the browser knows whether it must establish a new TCP socket connection when it sends out the next HTTP request.

The following figure shows the HTTP response header for a browser connecting to Tomcat with connectionTimeout set to 120 seconds (120000 ms) and maxKeepAliveRequests set to 5000. (This response header was captured using Fiddler, a free tool from Microsoft available at http://www.fiddler2.com.) For subsequent requests over the same TCP socket, the HTTP response header shows a decreasing value for the maxKeepAliveRequests count.

(Figure: HTTP response header)
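As an approximate illustration of what such a capture contains (the exact header names and formatting can vary by Tomcat version), the response resembles the following, with the timeout expressed in seconds and the max count decreasing on each subsequent request over the same socket:

HTTP/1.1 200 OK
Connection: Keep-Alive
Keep-Alive: timeout=120, max=5000
Content-Type: text/html;charset=UTF-8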

The following table lists the recommended HTTP keep-alive values for the Tomcat web server that hosts the mid tier. It also shows the minimum recommended values to use if the hardware hosting Tomcat has resource constraints.

Tomcat web server HTTP keep-alive recommendations

  Tomcat HTTP keep-alive parameter | Recommended value
  connectionTimeout                | 90000 ms (minimum 60000 ms)
  maxKeepAliveRequests             | infinite (minimum 5000)

To set these parameters, locate the Connector entry in the <tomcat dir>/conf/server.xml file.

The following example shows the configuration with the connection timeout at 90 seconds and the keep-alive count at infinite:

<Connector URIEncoding="UTF-8" acceptCount="100"
    connectionTimeout="90000"
    maxHttpHeaderSize="8192" maxKeepAliveRequests="-1"
    maxThreads="500" port="80" protocol="HTTP/1.1"
    redirectPort="8443"/>

The following example shows the configuration with the connection timeout at 60 seconds and the keep-alive count at 5000:

<Connector URIEncoding="UTF-8" acceptCount="100"
    connectionTimeout="60000"
    maxHttpHeaderSize="8192" maxKeepAliveRequests="5000"
    maxThreads="500" port="80" protocol="HTTP/1.1"
    redirectPort="8443"/>

By turning on HTTP keep-alive, you can expect a transport time gain of approximately 10-30% depending on network latency — the larger the latency, the larger the gain. For SSL deployment, the gain is more dramatic — typically over 30%.

Note

The Tomcat HTTP keep-alive configuration as explained here is for the simplest case, in which the browser is connected directly to Tomcat. In this case, there is one network segment, with the browser being the client and Tomcat being the server. In a complex network infrastructure with multiple network segments, the recommendation is to configure every network segment for HTTP keep-alive using the same values.

Verifying the keep-alive setting

To verify that the keep-alive setting is working in the browser, use a web debugging proxy, such as Fiddler, to capture the exchanges between the browser and the server.

The following figures compare the exchanges when keep-alive is off and when it is on for HTTPS. Observe the frequency of SSL socket establishment when keep-alive is off.

(Figure: HTTP keep-alive set to off)

(Figure: HTTP keep-alive set to on)

Another method to verify that HTTP keep-alive is on is to examine the HTTP response headers. In the following figure, the Transport header of the response is tagged with Connection: Keep-Alive. In this example, a BigIP load balancer was used. It does not support the two optional keep-alive parameters that Tomcat does, so those values were not transmitted.

(Figure: HTTP keep-alive set in the response header)
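For comparison with the earlier Tomcat example, an approximate excerpt of such a response (values hypothetical) carries only the Connection header, without the timeout and max hints:

HTTP/1.1 200 OK
Connection: Keep-Alive
Content-Type: text/html;charset=UTF-8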

Note

For HTTP 1.1 browser clients, the Transport header of the HTTP request is always tagged with Connection: Keep-Alive to indicate to the server that the browser supports HTTP keep-alive. To complete the protocol exchange process, the server must do the following:

  • Keep the TCP socket established by the browser alive and active for the browser to reuse
  • Set the Transport header of the HTTP response to Connection: Keep-Alive to indicate to the browser that the socket is being kept alive so that the browser can reuse it. However, although many products set this response header, it is optional according to the HTTP 1.1 specification (RFC 2616). As long as the HTTP response header does not explicitly specify Connection: Close, all HTTP 1.1 browsers re-use the TCP socket.

For more information, see http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html and http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.10.

Optimizing the load balancing scheme for the web tier

If you have multiple mid tier instances deployed in your environment (each supported by a single instance of Tomcat), a load balancer is necessary to distribute the browser load across the mid tier instances. The load balancing scheme must provide HTTP session affinity for Tomcat, regardless of clustering.

Two common methods to ensure HTTP session affinity load balancing are source IP binding and cookie insert. BMC recommends the cookie-insert method for the following reasons:

  • Browsers use the HTTP protocol, so using cookies for load balancing is the most appropriate method at the HTTP layer.
  • If the browser clients are behind Network Address Translation (NAT) or an outbound web proxy, the cookie-insert method still ensures a balanced load distribution. In contrast, source IP binding does not provide an even load distribution in this case, because all clients appear to come from the single IP address of the proxy.
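If your load balancer instead keys session affinity off the Tomcat session cookie (JSESSIONID) rather than inserting its own cookie, a common supporting step (shown here only as a sketch; the instance name is a placeholder) is to assign each Tomcat instance a unique jvmRoute on the Engine element in server.xml so that the routing suffix is visible in the session cookie value. The cookie-insert method recommended above does not require this step because the load balancer generates and tracks its own cookie.

<!-- In <tomcat dir>/conf/server.xml; "midtier1" is a placeholder route name -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="midtier1">
    ...
</Engine>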

Offloading SSL when using HTTPS (HTTP over SSL)

Most web deployments are done through HTTPS (HTTP over SSL) for security reasons.

Tomcat is not as efficient at handling SSL as a hardware load balancer or the Apache Web Server. (The exception is on Solaris with the SPARC chipset, where the JVM can access the hardware encryption layer through JNI.) If you require SSL in your deployment, offload the SSL handling to the load balancer. If you do not have a hardware load balancer, front the Tomcat instance with the Apache Web Server to handle SSL. Offloading the SSL handling to a single entity also centralizes certificate management in that entity, making certificate installation and management much easier.
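When SSL is terminated in front of Tomcat, you can tell the HTTP connector that clients are effectively on HTTPS so that redirects and secure-connection checks behave correctly. The following is a minimal sketch, assuming the SSL-terminating device listens on port 443 and forwards decrypted traffic to Tomcat on port 80:

<Connector port="80" protocol="HTTP/1.1"
    connectionTimeout="90000" maxKeepAliveRequests="-1"
    maxThreads="500"
    scheme="https" secure="true" proxyPort="443"/>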

If you have Tomcat handle the SSL layer, adjust the JVM heap allocation accordingly, because SSL handling increases the CPU and heap usage of the JVM hosting the Tomcat instance. See JVM-runtime-analysis.
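If Tomcat does terminate SSL itself, the SSL connector is defined in server.xml alongside (or instead of) the plain HTTP connector. The following is a minimal JSSE-based sketch for Tomcat 7; the keystore file and password are placeholders for your own values:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
    scheme="https" secure="true" sslProtocol="TLS"
    keystoreFile="conf/keystore.jks" keystorePass="changeit"
    connectionTimeout="90000" maxKeepAliveRequests="-1"
    maxThreads="500"/>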

When deploying a web application over SSL:

  • Secure only the necessary network segments, because encryption and decryption are resource intensive.
  • Activate HTTP keep-alive, because SSL sockets are expensive for a browser to establish; reuse them whenever possible.


 
