This documentation supports the 9.1 version of BMC Remedy ITSM Deployment.


Allocating AR System server resources


To optimize the performance of online transactions, deploy one or more dedicated back-end servers for integration services, and move resource-intensive batch or background processes to those servers. Response times for online user transactions should always be the priority. By moving all other components to an integration server, online transactions do not have to contend with other components for CPU and memory resources.

If CPU capacity on the BMC Remedy AR System server becomes a limiting factor, consider adding another AR System server. An AR System server scales easily in a server group. After you create a server group, re-balance the load from the mid tier across the AR System servers in the group.


When tuning the AR System server, consider these facts:

  • It is a multithreaded application
  • Its threads may need to be configured based on the hardware

If the thread count is set too low, the AR System server has low CPU use, poor throughput, and potentially poor response times under high loads. If the thread count is set too high, unnecessary thread-administration overhead can result. Instead, start with fast threads set to a minimum of 2 times and a maximum of 3 times the number of CPU cores, and list threads set to a minimum of 3 times and a maximum of 5 times the number of CPU cores. Then, fine-tune further by conducting load tests, and use jvisualvm to monitor fast and list thread CPU usage under load. (See JVM monitoring for using jvisualvm to monitor the AR System server Java process.) For example, a server with two CPU cores might use 4/6 (min/max) threads for fast and 6/10 (min/max) threads for list.
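As an illustration of the sizing rule above, fast and list queue thread counts are typically defined with Private-RPC-Socket entries in the AR System server configuration. The following sketch assumes the standard fast (390620) and list (390635) RPC program numbers and the two-core example sizing; verify the exact entry names and queue numbers against your version's configuration reference before applying them:

```
# Fast queue (RPC 390620): min 4, max 6 threads for a 2-core server
Private-RPC-Socket: 390620 4 6

# List queue (RPC 390635): min 6, max 10 threads for a 2-core server
Private-RPC-Socket: 390635 6 10
```

In a server group, consider managing these values through the Centralized Configuration rather than editing each server's configuration file by hand, so the settings stay consistent across servers.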

The AR System performance recommendations are based on physical cores, not logical cores. Even though in some cases hyperthreading may result in an overall performance gain, for AR System the gain is not of the same magnitude as with physical cores. For that reason, we use only physical cores in our sizing calculations.

Consider these suggestions as a starting point. Since there are several variable factors (including different hardware, CPU architecture, and CPU speed), benchmark your environment to determine its optimum settings for fast and list threads.


As mentioned in Monitoring Remedy performance and capacity, make sure that the system has enough physical memory to avoid swapping. Use OS tools such as Performance Monitor (Microsoft Windows) or jvisualvm (shipped with Java JDK) to monitor the server. When the OS runs low on memory, it starts paging, which severely impacts performance. By monitoring process memory usage and the amount of paging, you can identify problems.

You can configure the amount of memory used by the BMC Remedy AR System server by setting the values (in bytes) of the jvm.minimum.heap.size and jvm.maximum.heap.size flags in the arserver.config file.
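For example, a minimal sketch of the two flags in arserver.config, assuming a server sized for a 2 GB minimum and 4 GB maximum heap (the right values depend on your load and on the physical memory available on the host):

```
# Illustrative heap sizing only: 2 GB minimum, 4 GB maximum, in bytes
jvm.minimum.heap.size=2147483648
jvm.maximum.heap.size=4294967296
```

Whatever values you choose, keep the maximum heap well below the physical memory on the host so that the operating system does not start paging.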

Configuration Summary

Visit KA000114508 for a summary of recommended configurations.

You can also configure or limit the amount of memory that a component can use. For example, you can set the MaxHeap for Java applications, configure a component to use less memory by running fewer threads, perform smaller queries, or chunk data.

The following configuration parameter logs large memory allocations:

  • Set the Large-Result-Logging-Threshold <bytes> parameter
    Add this parameter directly to the Centralized Configuration, and then enable thread logging. Each time an API call requests memory larger than the threshold, an entry is written to the arthread log file. This is useful for identifying users who perform large queries that cause AR System server memory growth.

    Best Practice

    As of version 9.1 Service Pack 2, the out-of-the-box value is 1000000 (roughly 1 MB). We recommend that you keep this value unless your requirements differ.
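For reference, the default described above corresponds to an entry like the following. This is a sketch only; 1000000 bytes is the 9.1 SP2 out-of-the-box value, and the exact entry syntax in the Centralized Configuration should be confirmed against your version's documentation:

```
# Log any API call that requests more than ~1 MB of memory (default value)
Large-Result-Logging-Threshold: 1000000
```

With thread logging enabled, entries above this threshold appear in the arthread log file, which you can then correlate with specific users and queries.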
