
Sizing and deployment considerations

This topic contains information and terminology to help you determine hardware sizing and deployment requirements for your organization's usage.

Use cases and value paths

Before sizing can be determined, you must understand the business and organizational requirements and use cases for each solution. The number of users, the types and volume of transactions, and customizations (for BMC Service Request Management and BMC Atrium Orchestrator) all impact sizing. BMC Remedy ITSM value path and use case information is located in the Key concepts topic.

Following are some examples of use cases and factors that can impact sizing:

  • Large amounts of CMDB data
  • Software license management
  • Heavy incident volume from event management
  • Knowledge management and Full Text Search (FTS)
  • Large numbers of BMC Service Request Management or BMC Atrium Orchestrator customizations
  • Integrations

Note

The following information is for general purposes and is not specific to any particular value path.

Sizing the BMC Atrium CMDB and BMC Remedy ITSM environment

The BMC Remedy AR System and mid tier servers scale better horizontally than they do vertically. Although adding more resources, such as CPUs and memory, to a server might be tempting, testing has shown that there is a point of diminishing return on performance in doing so. Scaling horizontally by adding more servers to a server group results in better and more consistent performance than scaling vertically.

You can configure BMC Remedy ITSM in a load-balanced environment to direct user interactions to the load-balanced BMC Remedy AR System mid tier and BMC Remedy AR System servers while assigning dedicated non-user functions and responsibilities to specific instances of the BMC Remedy AR System server (see the references to back-end servers in Sizing baselines). For example, escalations, integrations, reconciliation, and notifications can be separated from "user" servers to ensure that these services do not impact user activity. This approach reliably provides consistent, high-performance service and lets you scale the installation safely.

Similar to the dedicated BMC Remedy AR System servers for non-user functions, consider additional dedicated servers at the web tier if web services are implemented for BMC Atrium CMDB or BMC Remedy ITSM.

In a BMC Remedy AR System server group, the host computers do not all require the same processing power and memory. For example, the back-end servers can run on hosts with more or less processing power and memory than the user-facing servers.

Virtualization

Current sizing is based on CPU cores from physical hardware, not virtualization. With hypervisor-based x86/x64 systems like VMware ESX Server using multicore multithreaded processors, be aware that hyperthreading can increase the number of virtual CPUs displayed and made available to the ESX Server. Assuming that a vCPU has the same processing power as a CPU core could negatively impact capacity planning.
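
The following is a minimal, illustrative sketch of this point: it converts a vCPU count into an approximate physical-core equivalent so that hyperthreaded vCPUs are not counted as full cores during capacity planning. The hyperthreading uplift factor and the function name are assumptions for illustration, not BMC-published values.

    # Rough illustration (not a BMC formula): estimate how many physical-core
    # equivalents a pool of hyperthreaded vCPUs actually represents.
    # The hyperthreading uplift factor below is an assumed example value.

    def physical_core_equivalents(vcpus, threads_per_core=2, ht_uplift=1.3):
        """Convert a vCPU count into an approximate physical-core equivalent.

        vcpus            -- vCPUs presented by the hypervisor (for example, VMware ESX)
        threads_per_core -- hardware threads per physical core (2 with hyperthreading)
        ht_uplift        -- assumed throughput gain from hyperthreading (1.3 = +30%)
        """
        physical_cores = vcpus / threads_per_core
        return physical_cores * ht_uplift

    # A 16-vCPU virtual machine on hyperthreaded hardware behaves more like
    # 10 physical cores than 16 when planning capacity.
    print(round(physical_core_equivalents(16), 1))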

Ensure that the following virtualization features are configured properly. (Consult your virtualization expert for details.)

  • Priorities related to resource pools (Do the BMC applications being deployed in a virtual environment have the proper priority?)
  • Distributed Resource Scheduling (DRS)
  • VMware vMotion
  • Affinity

Note

Previous guidance for BMC Remedy ITSM 7.6.x is located in Sizing and deployment considerations.

A static formula for concurrent users, and some memory recommendations, might be insufficient for sizing when your environment is influenced by more factors than were used to determine the BMC baselines in these topics. The most current baseline sizing numbers for BMC Remedy ITSM 8.1 are located in Sizing baselines. These baselines come from the BSM performance and scalability tests referenced in the following sections.

BMC Remedy Mid Tier servers and web infrastructure

Use at least two BMC Remedy Mid Tier servers to provide high availability and failover capability.

Deploy the same number of mid tier servers as the number of "user loaded" BMC Remedy AR System servers. This assumes the AR System servers and mid tier servers are sized appropriately.

BMC Remedy AR System server

A simple ratio of concurrent users to CPU cores is not necessarily the most efficient way to achieve peak performance for BMC Remedy ITSM. Instead, refer to the planning spreadsheet and the sizing baselines, and consider your organization's specific planned use cases.

Installing BMC Knowledge Management and using FTS reduces the capacity of BMC Remedy AR System.

The BMC Remedy AR System server operates as a 64-bit multithreaded process and can support up to 3,000 concurrent users per server. In the BSM performance and scalability tests, the memory footprint of the BMC Remedy AR System server and mid tier server under a load of 3,000 concurrent users was stable at about 2 to 3 GB, but in practice the memory footprint can grow to 10 GB or more. The memory sizes listed in the sizing baselines are still the correct starting points.

Previously, BMC advised that 60 or 70 threads be considered an upper limit for the total number of threads. The BSM performance white paper shows that significantly higher thread settings can be preferable. The correct way to tune threads is to use the API Thread % Idle Time = 0s values from API log analysis to determine which thread counts need to be increased or can be decreased. If you can restart the server with thread logging turned on, you can observe when new threads are started.

Estimating the number of concurrent users

The number of concurrent users depends on the total end-user population and on the way the system is used. For example, with the BMC Service Request Management application, if only a few services are deployed, the number of concurrent users is low; if a large number of services are deployed, the system experiences a higher load. The number of concurrent users is best established by looking at the load on the existing system that is being replaced or upgraded.

In the absence of this data, BMC suggests the following guidelines for creating an initial estimate on the number of concurrent users. Note that these are estimates rather than accurate numbers.

Type of user             Number of concurrent users
Service Desk             100 percent of total population
Change Management        10 percent of total population
Self Service             1 – 2 percent of total population
Administrators           100 percent of total population
BMC Analytics users      10 – 20 percent of total population

For example, if you have an IT staff of 900 employees, all of whom use the Change Management system, you can assume that about 90 of them are concurrent at any time. Similarly, if you have 100,000 employees who are using BMC Service Request Management, the concurrent count can vary from 1,000 (which is 1 percent) to 2,000 (which is 2 percent). This high variance must be evaluated based on your plans to roll out services. If you intend to host many services on BMC Service Request Management, use the high count. If you plan to limit the number of services, use a smaller count.
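
The guideline percentages above can also be turned into a quick first-cut calculation. The following sketch is illustrative only; the dictionary of ratios simply restates the table above, and the function name is hypothetical.

    # Illustrative only: apply the guideline percentages from the table above
    # to a total user population to get a first-cut concurrent-user estimate.

    CONCURRENCY_GUIDELINES = {
        "Service Desk": (1.00, 1.00),
        "Change Management": (0.10, 0.10),
        "Self Service": (0.01, 0.02),
        "Administrators": (1.00, 1.00),
        "BMC Analytics users": (0.10, 0.20),
    }

    def estimate_concurrent_users(user_type, total_population):
        """Return a (low, high) estimate of concurrent users for a user type."""
        low_pct, high_pct = CONCURRENCY_GUIDELINES[user_type]
        return int(total_population * low_pct), int(total_population * high_pct)

    # 900 Change Management users -> about 90 concurrent
    print(estimate_concurrent_users("Change Management", 900))    # (90, 90)
    # 100,000 Self Service users -> 1,000 to 2,000 concurrent
    print(estimate_concurrent_users("Self Service", 100_000))     # (1000, 2000)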

Scalability for BMC Knowledge Management

BMC Knowledge Management 8.1 uses the FTS service provided by BMC Remedy AR System for indexing. This requires the indexer to run on one of the AR System server computers in the server group. Because this indexer consumes CPU, run it on a back end AR System server (which is typically not serving online user traffic). The index should be local and not on shared storage (NAS/SAN).

In addition to the indexer, each of the AR System servers in the server group has a service that reads information from the common index. This can also consume CPU cycles, thereby reducing the performance of the AR System Server where the indexer resides. It has been observed that the larger the size of the index, the greater the effect on scalability. More information about this is available from the BMC Deployment Architecture team and will be published shortly.

Databases

Physical servers are required for the database servers. BMC does not recommend running databases on virtualized hardware.

You might choose to host product installations on a single physical server or on separate servers or tiers. The decision is based on multiple factors, including, but not limited to, hardware and licensing costs, performance, administration and maintenance effort, and security requirements.

BMC does not recommend placing a firewall between the AR System servers and the databases. The extra processing time that a firewall takes to analyze each packet causes a delay. This delay, combined with the additional unnecessary network hop, creates transaction response times that are unacceptable to most users.

BMC Analytics for BSM sizing and scalability

BMC Analytics for BSM is based on SAP BusinessObjects Enterprise XI 3.1. For deployment information, see the SAP BusinessObjects Enterprise XI 3.1 Deployment Planning Guide.

Sizing factors

The BMC Analytics for BSM product is based entirely on SAP BusinessObjects Enterprise, so sizing the installation amounts to sizing the SAP BusinessObjects Enterprise server and software. The guidelines in this documentation assume that the installed system is used for BMC Analytics for BSM only.

To size the BMC Analytics for BSM installation, estimate the following numbers:

  • Potential users (named users) — The number of users who are able to log on to the system. This number is the easiest to calculate because it represents the total population of users who have the ability to access the SAP BusinessObjects Enterprise environment.
  • Concurrent active users — An estimate of the number of users who are expected to be concurrently logged on and actively interacting with the system (clicking on folders, viewing reports, scheduling, and so on). Do not include users who are logged on but inactive in this estimate.

    According to SAP BusinessObjects Enterprise guidance, many customers find that their concurrency ratios average from 10 percent to 20 percent of their total potential user base. For example, with 1,000 potential users, use an estimate of 100 to 200 concurrent active users. This number can vary significantly depending on the nature and breadth of the deployment, but it is a reasonable general guideline for planning purposes.

Database sizing

BMC Analytics for BSM can use the BMC Remedy AR System database, but this might affect the performance of the database server. BMC recommends that you use a separate reporting instance of the BMC Remedy AR System database to support the BMC Analytics for BSM environment.

Hardware requirements

BMC recommends the following hardware requirements for the BMC Analytics for BSM environment (a rough calculation based on these ratios is sketched after the list):

  • Sizing for the Central Management Server:
    • One CPU for every 500 concurrent active users
    • One CMS service for every 600 – 700 concurrent active users
    • 4 GB of RAM for each CMS service
  • Sizing for the Web Intelligence Report Server:
    • One processor for 25 – 40 concurrent active users
    • One Web Intelligence Report Server service for each processor
    • 4 GB of RAM per Web Intelligence Report Server service
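
A rough calculation that applies these ratios might look like the following sketch. It is illustrative only: the function name is hypothetical, and where a range is given (600 – 700 users per CMS service, 25 – 40 users per Web Intelligence processor) the sketch uses the lower bound so that capacity is not understated.

    import math

    def size_analytics_environment(concurrent_active_users):
        """Apply the per-user ratios listed above to a concurrent-user estimate."""
        cms_cpus = math.ceil(concurrent_active_users / 500)        # 1 CPU per 500 users
        cms_services = math.ceil(concurrent_active_users / 600)    # 1 CMS service per 600-700 users
        cms_ram_gb = cms_services * 4                              # 4 GB per CMS service

        webi_processors = math.ceil(concurrent_active_users / 25)  # 1 processor per 25-40 users
        webi_services = webi_processors                            # 1 service per processor
        webi_ram_gb = webi_services * 4                            # 4 GB per service

        return {
            "CMS CPUs": cms_cpus,
            "CMS services": cms_services,
            "CMS RAM (GB)": cms_ram_gb,
            "WebI processors": webi_processors,
            "WebI RAM (GB)": webi_ram_gb,
        }

    # 1,000 potential users at a 20 percent concurrency ratio -> 200 concurrent active users
    print(size_analytics_environment(200))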

BMC Dashboards for BSM sizing and scalability

This section provides sizing information for the BMC Dashboards for BSM environment.

Sizing factors for BMC Dashboards for BSM

Sizing factors for BMC Dashboards for BSM are as follows:

  • Number of concurrent users
  • CI/request ratio (number of CIs per incident, change, problem, service request, or service level management ticket)

The sizing assumes an average report file size of 500 KB.

Hardware requirements

This section describes hardware sizing for a single-server deployment and a dual-server deployment. In a single-server deployment, the Dashboard server and the Data Integration Layer (DIL) are installed on the same computer. In a dual-server deployment, they are installed on separate computers.

Single-server deployment

              Small (<25)        Medium (25 – 50)     Large (>50)
CPU (1)       2 x 2.0 GHz+       4 x 2.0 GHz+         4+ x 2.0 GHz+
RAM           4 GB               4 – 8 GB             8+ GB
Disk space    40 GB              100 GB               200 GB

(1) These are server-class CPUs, not just a single core.

Dual-server deployment

              Small                       Medium                      Large
CPU (1)       Dashboard: 1 x 2.0 GHz+     Dashboard: 2 x 2.0 GHz+     Dashboard: 2 x 2.0 GHz+
              DIL: 1 x 2.0 GHz+           DIL: 2 x 2.0 GHz+           DIL: 2 x 2.0 GHz+
RAM           Dashboard: 2 GB             Dashboard: 4 GB             Dashboard: 4+ GB
              DIL: 2 GB                   DIL: 4 GB                   DIL: 4+ GB
Disk space    Dashboard: 20 GB            Dashboard: 50 GB            Dashboard: 100 GB
              DIL: 20 GB                  DIL: 50 GB                  DIL: 100 GB

(1) These are server-class CPUs, not just a single core.

BMC Atrium Orchestrator sizing and scalability

BMC Atrium Orchestrator uses a load-balancing cluster architecture that can be considered an active/active high-availability cluster, where all available servers, or peers, process requests. This architecture is referred to as a grid, a term used throughout the product documentation and in this topic.

A number of different peers can be used for setting up the grid. The only peer that is required in the grid is the Configuration Distribution Peer (CDP), with others added for high availability or for scaling up.

Peers

The following list describes the peer types and the services that they provide:

  • Activity Peer (AP): The AP is the most fundamental grid component used by BMC Atrium Orchestrator. It is the process that executes workflows and hosts external application adapters, providing both the core workflow engine and a web container.
  • Configuration Distribution Peer (CDP): In addition to offering the same types of services and capabilities found in the AP, the CDP provides a central point of administrative control and content distribution for all distributed components in a grid. The CDP is a required component, and there is typically only one CDP per grid servicing a number of APs. As the centralized configuration repository, the CDP stores and distributes all configuration and content for the entire grid.
  • High Availability Configuration Distribution Peer (HA-CDP): The HA-CDP is a mirrored instance of a primary CDP and provides active/active high-availability capabilities for the grid in which it is installed and operating. It is considered active/active because it is not idle and participates in workflow execution as any normal AP would.

Failover

With multiple peers available to handle processing requests, single failures are not likely to negatively affect throughput and reliability. With grid peers distributed to various hosts, an entire computer can fail without resulting in application downtime. Processing requests can be routed to other peers if one peer fails. With certain limitations, BMC Atrium Orchestrator allows for maintenance of peer hosts without stopping the entire functionality of the grid itself.

Failover in BMC Atrium Orchestrator is primarily achieved through a technique called full replication. This occurs when a cluster replicates all data to all peers. Full replication ensures that if a peer fails, that peer's workload is reassigned to a functioning peer where a copy of the failed peer's data exists. However, the disadvantage of full replication is that as more peers are added to the grid, more data must be replicated throughout. Likewise, events regarding the data, such as updates and deletions, must be published to more peers, resulting in more traffic on the grid. Thus the scalability of the grid can be defined as an inverse function of the size and frequency of these data events. This is why it is important not to mistake BMC Atrium Orchestrator for the functional equivalent of Data Center Automation computing or a computer grid. Carefully consider the number of peers to include in any grid.
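
A back-of-envelope illustration of that inverse relationship, assuming that each data event is replicated to every other peer, might look like the following sketch. The formula and function name are illustrative assumptions, not a BMC-published model.

    # Back-of-envelope illustration (not a BMC formula): with full replication,
    # every data event is published to every other peer, so replication traffic
    # grows with both the event rate/size and the number of peers on the grid.

    def replication_traffic_kb_per_sec(peers, events_per_sec, avg_event_kb):
        """Approximate grid-wide replication traffic under full replication."""
        # Each event is copied to (peers - 1) other peers.
        return events_per_sec * avg_event_kb * (peers - 1)

    # Adding peers increases replication traffic for the same workload,
    # which is why the number of peers should be chosen carefully.
    for peers in (2, 4, 8):
        print(peers, "peers:", replication_traffic_kb_per_sec(peers, 50, 4), "KB/s")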

Grids

The grid's ability to manage workloads efficiently, and thus overall throughput, is largely a function of the grid's chosen scalability model. In computer clusters, there are two models for scalability, vertical and horizontal. With proper planning, either approach can be an effective means of scaling the BMC Atrium Orchestrator grid, depending on operational requirements.

Vertical scaling

In vertical scaling, multiple grid peers run on the same physical computer, which might enable the computer's processing power to be more efficiently allocated. Generally, these computers are large SMP or multicore servers with a large amount of RAM and disk space. Even if a single Java Virtual Machine (JVM) can fully utilize the processing power of such a computer, more than one peer on the computer might be required for other reasons, such as using vertical scaling for software failover. If a JVM reaches the limit of the peer's per-process virtual memory heap (or if there is some similar problem), increasing the number of peers on the same host might result in increased grid capacity depending on the configuration used.

Horizontal scaling

In horizontal scaling, peers are created on multiple physical computers. This enables a single logical grid to run on several computers while still presenting a single system image, making the most effective use of the resources of a distributed computing environment. Horizontal scaling can be quite effective in environments where commodity computers are in abundance. Processing requests that overwhelm a single computer can be distributed over several computers in the grid.

Failover is another benefit of horizontal scaling. If a computer becomes unavailable, its workload can be routed to other computers hosting peers. Horizontal scaling can handle peer-process failure and resource failure with negligible impact to grid performance.
