JVM runtime analysis


At the Java Virtual Machine (JVM) layer, the following parameters and JVM utilization factors can drastically affect the performance of a web application: the Metaspace size, the garbage collector selection, the heap size, and CPU and heap utilization.

A broad discussion of each topic follows below. A summary of the recommended values is available here: https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=kA014000000h9kqCAA&type=FAQ

JVM MetaspaceSize setting

The MetaspaceSize (PermSize for pre-JDK 1.8) partition of JVM memory is reserved for metaspace data and holds all of the reflective data for the JVM (such as class and method definitions and associated data) and constants (such as interned strings). This allocation is completely separate from, and independent of, the JVM heap size setting.

To optimize the performance of applications that dynamically load and unload many classes, such as the mid tier, increase the size from the default maximum allocation of 83 MB for pre-JDK 1.8. For JDK 1.8 and later, the Metaspace is unbounded by default, so it is best to cap its size to guard against unexpected growth.
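
For example, a minimal sketch of the corresponding startup arguments (the 256m/512m values are illustrative assumptions, not BMC recommendations; size them from your own monitoring):

-XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m

For pre-JDK 1.8, the equivalent flags are -XX:PermSize and -XX:MaxPermSize.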

JVM Garbage Collector (GC) selections

Java garbage collection is a complex topic that invites disagreement: a single recommendation may work for most deployments but be ineffective in others. We therefore limit our GC recommendation to empirical data based on our benchmark results.

There are four main collector types: Serial, Parallel/Throughput, CMS/low-pause, and G1GC. For our benchmark, we created a standardized web workload involving key ITSM use cases and then ran the exact same benchmark while varying only the GC selection. We collected the sum of the average use case times and recommend the GC selection with the lowest sum, that is, the best overall total of each use case's average time.
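
For reference, a sketch of the standard HotSpot flags that select each collector (set exactly one; availability varies by JDK version):

-XX:+UseSerialGC
-XX:+UseParallelGC
-XX:+UseConcMarkSweepGC
-XX:+UseG1GC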

Workload details

Detailed information on the ITSM use cases and the workload is available here: https://docs.bmc.com/docs/display/public/itsm90/Workload+details

JVM heap size setting

The recommended JVM heap settings specified in the link above are based on empirical data collected from benchmarks run at various workload sizes and apply to the version specified in the link.

Other factors that can affect the JVM heap size usage:

  • The mid tier version — Later versions use more memory to support additional features
  • The web user load — The rate of the HTTP requests that users generate
  • The number of concurrent users logged on to the mid tier — The number of HTTP session objects alive in the mid tier
  • The use cases supported — A use case that retrieves more data from the AR System server (such as reporting) creates more heap usage on the mid tier

Use the following rules:

  • If Tomcat is handling SSL, allocate more memory than without SSL because SSL handling requires extra JVM heap allocation. The rule of thumb is 20% extra. (This depends on the number of concurrent users and the rate of simultaneous HTTP requests generated.)
  • In general, to support more concurrent users per mid tier instance, add 1 MB per user. This is an empirical approximation; the exact value depends on which use cases are being executed.
  • If you allocated extra JVM heap to give more memory to the ehcache in the mid tier, access <mid tier>/arsys/shared/config/config_cache_adv.jsp (a hidden JSP) to get the average size of each object in the cache categories, and then increase the maximum number of objects for the desired cache category by adjusting the category weight factor. For example, with the out-of-the-box values arsystem.ehcache.referenceMaxElementsInMemory=3500 and arsystem.ehcache.referenceMaxElementsInMemoryWeight.activeLinks=4.904, the maximum number of active link objects in the memory cache is 3500 * 4.904 = 17164. Assuming an average active link size of 5 KB (computed from the information provided by config_cache_adv.jsp and dependent on the AR applications deployed on your system), dedicating an additional 100 MB to caching active links in mid tier memory means raising 17164 to 37164, that is, changing the weight factor to arsystem.ehcache.referenceMaxElementsInMemoryWeight.activeLinks=10.618 (37164 / 3500). Base the increase on the total object count per category on disk (given on the config_cache_adv.jsp page) versus the maximum object count capped in memory by the weight factor. In general, the more objects cached in memory, the better the performance of producing the UI (HTML/JS) content; see the sketch after this list.
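
A minimal sketch of the resulting mid tier config entries for this example (the 10.618 weight assumes the 5 KB average active link size above; recompute it from your own config_cache_adv.jsp figures):

arsystem.ehcache.referenceMaxElementsInMemory=3500
arsystem.ehcache.referenceMaxElementsInMemoryWeight.activeLinks=10.618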

JVM CPU utilization

The recommended hardware sizing specified in the link above is based on empirical data collected from benchmarks run at various workload sizes and is a good starting point. You can use jvisualvm to monitor your target JVM's CPU usage and observe the behavior of your particular deployment. Monitor over an extended interval to capture both peak and off-peak usage.
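
If the mid tier JVM runs on a remote host, the following is a sketch of startup arguments that expose JMX so jvisualvm can attach; port 9010 is an arbitrary assumption, and disabling authentication and SSL is acceptable only in an isolated lab:

-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false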

When reading the JVM CPU usage graph in jvisualvm, rely only on the trend, because visual graphing artifacts (interpolation/granularity) may be misleading.

For example, the same data displayed in two different versions of jvisualvm can produce two noticeably different graphs.

For JVM CPU utilization (based on your monitored JVM data for the interval of interest), CPU usage should not stay above 85% for extended periods. In our lab tests on an x86 Intel-based chip set, average use case times degrade when jvisualvm shows CPU usage above the 80% level.

In a real deployment under normal load, the JVM CPU should be underutilized so that it has sufficient headroom for surges, such as users logging in at the start of a workday.

The following categories of JVM CPU utilization behavior can serve as a guideline for your CPU usage pattern:

  • Ideal nominal load: Very light usage with much headroom for load surges
  • Light load: Some usage with headroom for load surges
  • Normal load with constant usage: Can handle some load surges
  • Heavy load with heavy constant usage: Cannot handle load surges

The last category, heavy load with heavy constant usage, indicates that the system is at its maximum handling capacity and needs vertical or horizontal scaling.

JVM heap utilization

Use the recommended guideline provided in the link above to set your JVM heap size; however, depending on your particular deployment, adjust the heap allocation to target JVM heap usage at about 60 - 65% of the total heap. Use jvisualvm to monitor the target JVM and obtain a visual guide to the usage range. Always monitor over 24 hours to capture peak and off-peak usage, and rely only on the graph trend when reading the graph, because graphing artifacts (interpolation/granularity) may provide a misleading picture.
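
As a worked sketch of that target (the figures are assumptions, not recommendations): if monitoring shows peak heap usage of about 2.5 GB, a 4 GB heap puts peak usage at roughly 62%, inside the 60 - 65% band. Setting the initial and maximum sizes equal avoids resize pauses:

-Xms4096m -Xmx4096m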

Here too, the same data displayed using two different versions of jvisualvm can produce noticeably different graphs.

Other JVM settings

To address 64-bit JVM performance degradation, Oracle provides a hybrid mode for the 64-bit JVM (compressed ordinary object pointers) that reclaims some of the performance overhead in both CPU and heap usage. Use this mode if your memory requirement is less than 32 GB, which is the maximum heap size possible in hybrid mode. To activate this hybrid mode, add the following argument to your JVM startup arguments:

-XX:+UseCompressedOops


For more information about the hybrid mode, see https://wikis.oracle.com/display/HotSpotInternals/CompressedOops.

Other settings related to JVM heap:

Set the following parameter if you want to create a heap dump whenever an out-of-memory error occurs:

-XX:+HeapDumpOnOutOfMemoryError

Set the following parameter to specify the path of the heap dump directory:

-XX:HeapDumpPath=<installfolder>/Logs
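
Putting the settings from this topic together, the following is a hypothetical setenv.sh fragment for a Tomcat-hosted mid tier; the heap, Metaspace, and log path values are illustrative assumptions to be derived from the monitoring described above:

# Hypothetical setenv.sh fragment; all values illustrative
# /opt/midtier is a stand-in for your actual <installfolder>
CATALINA_OPTS="-Xms4096m -Xmx4096m -XX:MaxMetaspaceSize=512m \
 -XX:+UseCompressedOops \
 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/midtier/Logs"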