Sizing and scalability requirements for the Application Server

Use the sizing and scalability guidelines for the Application Server to determine the appropriate hardware capacity that is required for deploying Application Servers.

The Application Servers (AS) can be scaled horizontally and vertically. The major sizing drivers for the Application Servers are:

  • The required data processing throughput, which is determined by:
    • the number of entities (managed systems and business drivers)
    • the average number of metrics collected for each entity, and
    • the time granularity of the samples in the database
  • The overall number of users
  • The overall number of reports

The overall numbers of users and reports affect the workload required for presentation and scheduling activities, which consist of both online and batch requests. The required data processing throughput affects the communication services that the Application Server provides within TrueSight Capacity Optimization.
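As a rough illustration of the throughput driver, the daily sample volume is approximately the number of entities, times the average number of metrics per entity, times the samples stored per metric per day. The following sketch uses hypothetical values; none of these numbers come from the product documentation:

```shell
# Hypothetical sizing inputs -- substitute values from your own environment.
entities=2000                   # managed systems and business drivers
metrics_per_entity=50           # average metrics collected per entity
samples_per_metric_per_day=96   # e.g. one sample every 15 minutes
echo $(( entities * metrics_per_entity * samples_per_metric_per_day ))
# Prints 9600000 -- close to the 10 million samples/day base-configuration limit.
```

An estimate like this tells you whether the base configuration suffices or whether you need the scaled-up or split configurations described below.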

For more information, see the following sections:

Sizing and scalability guidelines for a single AS computer

The minimum configuration for the Application Server is a single AS computer with the base configuration containing one of each type of component (Data Hub, Primary Scheduler, Web Application, Service Container), and an ordinary file system directory for the Repository Folder. The base configuration supports up to 100 users and 100 reports and processes up to 10 million samples per day.

You can configure the Application Server to process up to 20 million samples a day by doubling the CPU to four CPU cores at 2 GHz and doubling the memory to 16 GB of RAM. This configuration should also double the numbers of users and reports that can be supported. If you increase the memory, you should also increase the heap size for the AS components.

To scale beyond 20 million samples a day, you can continue to increase the hardware of the Application Server computer, but consider the following guidelines:

  • You need to distinguish between the different sizing drivers rather than simply scaling up to match the expected throughput. See Scaling out to multiple AS computers to understand the sizing drivers and the AS components affected by them.
  • You need to make sure that the database computer is scaled appropriately. The IOPS capacity of the database computer disks is particularly likely to become a bottleneck.

Scaling out to multiple AS computers

To scale for different purposes, use the following techniques:

  • To grow the presentation layer for supporting more users, add AS computers with the Web Application component installed.
  • To grow the report scheduling layer for supporting more reports, add AS computers with the Primary Scheduler component installed.
  • To grow the data communication layer for supporting more data throughput, perform one or more of the following actions:
    • Increase the memory of the AS computer and increase the heap size of the Data Hub component. 
    • Add AS computers to move interactive user and reporting activities away from the AS computer running the Data Hub. On these computers, install Web Application and Primary Scheduler components.
    • Offload some of the work of the Data Hub to Service Containers. See Guidelines for relocating backend services to Service Containers.
  • Java heap size: The default heap sizes for the AS components are set for the base configuration. If you increase the memory, increase the heap sizes correspondingly. For more information, see Modifying heap sizes of AS components. For the Data Hub component, see the sizing guidelines for ETL Engine Servers.

For details on scaling out using additional AS computers, see Splitting AS components between multiple computers.

Note

If you deploy more than one Application Server to support the presentation layer, you need a load balancer.

If your deployment architecture includes remote ETL Engine servers, take into account the resources required by the data hub. See the sizing guidelines for ETL Engine Servers.

Splitting AS components between multiple computers

Note

If you have multiple application servers and you split the Primary Scheduler from the Web Application, you need to share the Repository folder. For details, see Sharing the Content Repository directory.

To scale out the capacity of the AS from the base configuration, it is possible to configure multiple computers as AS computers, and replicate some of the components. This section explains the options, constraints, and requirements. In any case, all AS components in a single installation must be able to communicate with the TrueSight Capacity Optimization database.

  • Web Application: This component can be replicated, with one instance on each AS computer. All Web Application components in a single TrueSight Capacity Optimization installation must share a Repository folder, and all of them must sit behind a single load balancer that supports session persistence.
  • Primary Scheduler: This component can be replicated, with one instance on each AS computer. All Primary Scheduler components in a single TrueSight Capacity Optimization installation must share a Repository folder.
  • Data Hub: This component cannot be replicated; there must be exactly one Data Hub in a TrueSight Capacity Optimization installation. However, some of the services that the Data Hub runs can be relocated to run on a Service Container component instead.
  • Service Container: This component, automatically installed on every AS computer, is turned off by default. If it is turned on, some of the services that run on the Data Hub can be relocated to run on this component instead. See the section below, Guidelines for relocating backend services.
  • Repository Folder: On each AS computer, this is a remote mount that points to a single file system directory shared through NFS. The TrueSight Capacity Optimization user on each AS computer must have permissions to read, write, and update this folder. For details, see Sharing the Content Repository directory.
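The repository sharing described above can be sketched as follows. This is an illustrative fragment only: the hostnames (as-hub, as-web1, as-web2), the path /opt/cpit/repository, and the user name cpit are assumptions, not product defaults.

```shell
# On the computer that owns the repository (e.g. AS-HUB), export it over NFS
# by adding a line like this to /etc/exports, then reloading the exports:
#   /opt/cpit/repository  as-web1(rw,sync) as-web2(rw,sync)
exportfs -ra

# On every other AS computer, mount the share at the same path:
mount -t nfs as-hub:/opt/cpit/repository /opt/cpit/repository

# Verify that the TrueSight Capacity Optimization user (assumed "cpit" here)
# can read, write, and update files in the shared folder:
sudo -u cpit sh -c 'touch /opt/cpit/repository/.rw_check && rm /opt/cpit/repository/.rw_check'
```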

Some commonly created configurations

  • Base configuration: AS-ALL
    • AS1 (AS-ALL): Web Application, Primary Scheduler, Data Hub, Service Container (OFF), Repository (local or on NAS/SAN)
  • Split configuration: AS-HUB+AS-WEB
    • AS1 (AS-HUB): Primary Scheduler, Data Hub, Service Container, Repository Folder (accessed via NFS or on NAS/SAN)
    • AS2 (AS-WEB): Web Application, (Primary) Scheduler, Service Container, Repository Folder (accessed via NFS or on NAS/SAN)
  • Split configuration: AS-HUB+n*(AS-WEB)
    • AS1 (AS-HUB): Primary Scheduler, Data Hub, Service Container, Repository Folder (accessed via NFS or on NAS/SAN)
    • AS2 through ASn (AS-WEB): Web Application, (Primary) Scheduler, Service Container, Repository Folder (accessed via NFS or on NAS/SAN)

The sizing considerations that might require a split such as AS-WEB and AS-HUB are:

  • The Web Application needs to be scaled out using active-active load balancing. Because you cannot run the Data Hub on multiple computers, it is necessary to split the Web Application onto its own computers.
  • The Data Hub needs the maximum resources of its computer, so it is necessary to split AS-WEB and AS-HUB apart.

When splitting components such as AS-WEB and AS-HUB, you must share the Content Repository between them. Sharing can be either via NFS mount, or via a file system on a shared disk. The TrueSight Capacity Optimization user on each computer must have permissions to read, write, and update files in the repository.

For information about how to configure the AS-WEB in active-active load balancing configuration, see High availability considerations.
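The session persistence that the Web Application requires can be provided with cookie-based stickiness at the load balancer. The following is a minimal illustrative HAProxy fragment, not a product-mandated configuration; the hostnames, port, and cookie name are assumptions:

```haproxy
backend as_web
    balance roundrobin
    # Insert a persistence cookie so each user session stays on one AS-WEB
    cookie ASWEB insert indirect nocache
    server as-web1 as-web1.example.com:8000 check cookie w1
    server as-web2 as-web2.example.com:8000 check cookie w2
```

Any load balancer that supports session persistence can be used; HAProxy is shown only as an example.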

Modifying heap sizes of AS components

Modifying the heap size of the Primary Scheduler

  1. In the installation directory on the AS computer, open the customenvpre.sh file for editing.
  2. In the #SCHEDULER section, find the following statements:
    #SCHEDULER_HEAP_SIZE="1024m"
    #export SCHEDULER_HEAP_SIZE
  3. Delete the '#' that precedes each statement to uncomment it.
  4. Change 1024m to the new heap size.
  5. Restart the Primary Scheduler.
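Steps 2 through 4 can be sketched as a scripted edit. The commands below run against a sample copy so that the transformation is visible; edit the real customenvpre.sh in the AS installation directory, and note that the target value 2048m is an example, not a sizing recommendation:

```shell
# Create a sample copy containing the commented statements from step 2.
printf '#SCHEDULER_HEAP_SIZE="1024m"\n#export SCHEDULER_HEAP_SIZE\n' > customenvpre.sh.sample

# Steps 3-4: uncomment both statements and raise the heap size.
sed -i 's/^#SCHEDULER_HEAP_SIZE="1024m"/SCHEDULER_HEAP_SIZE="2048m"/' customenvpre.sh.sample
sed -i 's/^#export SCHEDULER_HEAP_SIZE/export SCHEDULER_HEAP_SIZE/' customenvpre.sh.sample

cat customenvpre.sh.sample
# SCHEDULER_HEAP_SIZE="2048m"
# export SCHEDULER_HEAP_SIZE
```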

Modifying the heap size of the Web Application

  1. In the installation directory on the AS computer, open the customenvpre.sh file for editing.
  2. In the #WEB section, find the following statements:
    #WEB_HEAP_SIZE="2048m"
    #export WEB_HEAP_SIZE
  3. Delete the '#' that precedes each statement to uncomment it.
  4. Change 2048m to the new heap size.
  5. Restart the Web Application.

Guidelines for relocating backend services from the Data Hub to Service Containers

Service Container is a component of the TrueSight Capacity Optimization architecture that can run backend services. It provides the ability to isolate backend services to improve their resilience and workload balancing.

When you consider offloading the Data Hub, a further option (in more advanced cases) is to relocate services from the Data Hub itself to Service Containers. Whether to relocate services depends on the expected workload for particular services. The most important of these services are the Data API service and the Capacity-Aware Placement Advice (CAPA) service. The following scenarios serve as approximate guidelines:

  • In TrueSight Capacity Optimization deployments where more than 4 million rows per day are being populated via Remote ETL Engines, the Data API service is expected to be heavily used. In this scenario, consider relocating the Data API service to a separate Service Container. Note that the Remote ETL Engine might need to be reconfigured to communicate with the Service Container instead of the Data Hub, unless it is using the Apache port to communicate.
  • In TrueSight Capacity Optimization deployments that are being used both for CAPA and for other TrueSight Capacity Optimization use cases, both the CAPA service and other services are expected to place a large demand on the Data Hub. In this scenario, consider relocating the CAPA service to a separate Service Container. Note that the BMC Provider component will need to be reconfigured to communicate with the Service Container instead of the Data Hub.