Mid Tier performance case studies
The following case studies illustrate performance improvements achieved by fine-tuning the mid tier:
Sluggish mid tier server behavior
When the mid tier was under normal usage load, an AR System form took several minutes to load in the browser for some users, while others received an HTTP 500 error.
An HTTP 500 error is a generic web server error. Unless more specific information can be found, this error is difficult to resolve. Start the investigation in the web server logs, not in the web application logs, and look for any error at or near the time when the problems occur. In this case study, the slow loading of forms was a significant factor.
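The log investigation described above can be sketched in a few lines: filter the web server log for 5xx responses close to the reported incident time. This is a minimal illustration, not a real parser; the timestamp layout, log format, and incident time below are assumptions you would adapt to your own web server's logs.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: scan web server log lines for HTTP 5xx errors that
# occurred near a reported incident. The "<date> <time> <status> ..." line
# layout and the incident time are assumptions for illustration only.
INCIDENT = datetime(2024, 5, 1, 10, 30)
WINDOW = timedelta(minutes=5)

def errors_near_incident(lines, incident=INCIDENT, window=WINDOW):
    """Return log lines with 5xx status codes close to the incident time."""
    hits = []
    for line in lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        try:
            stamp = datetime.strptime(parts[0] + " " + parts[1],
                                      "%Y-%m-%d %H:%M:%S")
        except ValueError:
            continue  # not a data line; skip it
        if parts[2].startswith("5") and abs(stamp - incident) <= window:
            hits.append(line)
    return hits

sample = [
    "2024-05-01 10:28:12 200 GET /arsys/forms/HelpDesk",
    "2024-05-01 10:29:45 500 GET /arsys/forms/HelpDesk",
    "2024-05-01 11:15:00 500 GET /arsys/forms/HelpDesk",
]
print(errors_near_incident(sample))  # only the error inside the window
```

Only the 10:29:45 error falls within the five-minute window, which is the kind of correlation between user reports and server-side errors that this case study relied on.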
When service is poor, CPU or resource contention is usually involved. The natural course of action is to monitor the Java Virtual Machine (JVM).
While monitoring the JVM that hosted the Tomcat instance running the mid tier, it was observed that the JVM heap was over 90% used, so the garbage collector (GC) was running frequently to reclaim memory. The Tomcat log contained out-of-memory exceptions thrown by the JVM. These exceptions surfaced as HTTP 500 errors, the generic response that hides details from potential attackers. Tomcat was also found to be hosting an additional web application.
Determine the resource requirement of the additional web application and add it to the requirement for the mid tier. After the JVM was restarted with the new heap allocation, the observed response time for loading an AR System form returned to an acceptable level. In general, isolate each web application deployment in its own Tomcat instance to reduce the complexity of troubleshooting web issues.
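Heap pressure of the kind described above can be spotted from `jstat -gcutil <tomcat-pid>` output by watching the old-generation occupancy column. A hedged sketch, assuming the JDK 8 `-gcutil` column layout (S0 S1 E O M CCS YGC YGCT FGC FGCT GCT); verify the header printed by your own JDK before relying on it:

```python
# Hedged sketch: flag old-generation heap pressure from one data line of
# `jstat -gcutil` output. The column layout below is the JDK 8 one and is
# an assumption; check the header line your jstat actually prints.
HEADER = "S0 S1 E O M CCS YGC YGCT FGC FGCT GCT".split()

def old_gen_pressure(data_line, threshold=90.0):
    """Return (old_gen_percent, above_threshold) for one jstat data line."""
    values = dict(zip(HEADER, data_line.split()))
    old_pct = float(values["O"])  # "O" = old generation utilization, percent
    return old_pct, old_pct > threshold

# Illustrative data line, not real output from this case study.
sample = "0.00 97.02 68.31 92.50 95.12 90.33 153 4.27 12 8.61 12.88"
print(old_gen_pressure(sample))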
Poor web application performance under SSL
When a mid tier deployment was switched to HTTPS, the average response time for browser loading of the same BMC Remedy AR System form increased by over 35%.
The only changed factor is the HTTPS protocol. When a problem concerns browser loading time, use a web debugging proxy (such as Fiddler), which provides the details of each request and response (including timing) for an entire use case. Capture the same use case under plain HTTP as well, and compare the HTTP and HTTPS captures to find where the additional time is spent.
Using a web debugging proxy to capture the browser requests and responses, it was observed that an SSL socket was negotiated for every request that the browser sent.
HTTP keep-alive was turned on, with the keep-alive count set to infinite and the maximum connection timeout set to 60 seconds. After the web server was restarted, the form loaded 10-15% faster than with plain HTTP. This gain can be expected with keep-alive on (with a maximum connection timeout of 60 seconds or greater) compared with keep-alive off. In general, SSL requires additional JVM CPU and heap usage; either off-load SSL to your load balancer or add resources to compensate.
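The benefit of keep-alive can be demonstrated with a small local experiment: over a persistent HTTP/1.1 connection, several requests share a single TCP socket, so under HTTPS the expensive SSL handshake is paid once rather than per request. A minimal sketch using only the Python standard library; the server, paths, and request count are illustrative, not the mid tier's actual configuration:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Illustrative local server; a real mid tier would sit behind Tomcat.
class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps connections alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request console logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One persistent client connection: the socket is created once and reused.
conn = http.client.HTTPConnection("127.0.0.1", port)
sockets = set()
for _ in range(3):
    conn.request("GET", "/")
    conn.getresponse().read()              # drain the body so reuse is allowed
    sockets.add(conn.sock.getsockname())   # record the local (addr, port) used
conn.close()
server.shutdown()

print("distinct sockets used:", len(sockets))  # 1 when keep-alive is working
```

With keep-alive disabled, each request would appear on a different local port; under SSL, each of those fresh sockets would also pay a full handshake, which is exactly the per-request negotiation the proxy capture revealed.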
Poor web application performance on long latency network
When the mid tier was deployed as an internet application (not over VPN) in the United States (U.S.), users in India experienced delays greater than 2 minutes to load an AR System form.
Based on general networking knowledge, this is probably a latency problem. Latency at the TCP layer does not provide accurate information about browser performance, because the browser application works at the HTTP layer; therefore, the latency must be measured at the HTTP layer. To analyze browser performance in relation to latency, use an http-ping tool, such as the one from Core Technologies (for Microsoft Windows). Fiddler can also report HTTP latency, but only when actual requests are made; in contrast, the http-ping tool works more like the standard ping tool.
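The idea behind an http-ping tool can be sketched with the standard library: time several HTTP GETs to the same URL and report min/avg/max latency, the way ping reports round-trip times. This is a rough analogue for illustration, not a substitute for the dedicated tool, and the commented URL is a placeholder:

```python
import time
import urllib.request

def http_ping(url, count=5):
    """Time `count` HTTP GETs to url; return (min, avg, max) in milliseconds."""
    samples = []
    for _ in range(count):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # include full transfer time, as the browser would see it
        samples.append((time.perf_counter() - start) * 1000.0)
    return min(samples), sum(samples) / len(samples), max(samples)

# Hypothetical usage against a mid tier URL (placeholder, not a real host):
# lo, avg, hi = http_ping("http://midtier.example.com/arsys/home")
```

Running such a probe from both the U.S. and India gives per-request HTTP latency figures; multiplied by the number of requests a form load issues, they predict the cumulative delay that the Fiddler comparison confirmed.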
For this case study, the HTTP latency from India was first measured using http-ping. Then Fiddler was used to capture the same use case run from the U.S. and from India, and the captures were compared to see the cumulative effect of the HTTP latency. Based on this comparison, the browser cache directive was increased.
Although the resolution could not remove the network latency itself, when a user loaded a form, the UI appeared quickly while the data appeared more slowly, because the HTML/JS files were already cached in the browser. The overall user experience improved because the browser was more responsive and less data was transmitted.
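Whether the increased cache directive is actually reaching browsers can be verified by inspecting the Cache-Control header on a static resource. A minimal sketch; the resource URL is a placeholder for a real mid tier HTML/JS file:

```python
import urllib.request

def cache_max_age(url):
    """Return the max-age value (seconds) from Cache-Control, or None if absent."""
    with urllib.request.urlopen(url) as resp:
        cache_control = resp.headers.get("Cache-Control", "")
    for token in cache_control.split(","):
        token = token.strip()
        if token.startswith("max-age="):
            return int(token.split("=", 1)[1])
    return None

# Hypothetical usage (placeholder URL, not a real host):
# print(cache_max_age("http://midtier.example.com/arsys/resources/form.js"))
```

A large max-age lets the browser serve HTML/JS from its local cache on repeat visits, which is why the UI appeared quickly for the India users even though the data requests still crossed the high-latency link.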