Transaction-Based Sizing for Create Incident Using the REST API


Objective:

To determine the throughput per unit of hardware for ITSM REST API integration using the Create Incident operation.

Deployment Environment:

The environment for this benchmark was containerized. For virtual or physical hardware, the results are similar or negligibly better. The hardware specifications below describe the containers deployed for this benchmark. For virtual or physical hardware, the container controller can be ignored and the hardware specification can be taken from the "Pod Configuration" column. The logical deployment architecture of the BMC Remedy ITSM Suite is as shown (the container layer is not shown):

[Figure: Logical deployment architecture of the BMC Remedy ITSM Suite]

Note

The above diagram is the standard architecture in our lab environment. For the REST API workload, all REST calls were serviced by the Platform-User AR Server. Related admin operations triggered on each incident creation were serviced by Platform-Admin; FTS indexing for newly created incidents was performed by Platform-FTS. The Mid-tiers in the above diagram were not invoked during the REST API workload.

Hardware Specifications:

The hardware specifications used in the diagrammed architecture are as follows:

Container Environment

| Sno. | VM/Node | Pod Role | VM/Node CPU (Cores) | VM/Node Memory (GB) | Pod Limit CPU (Cores) | Pod Limit Memory (GB) | JVM Heap Xms (GB) | JVM Heap Xmx (GB) |
|------|---------|----------|---------------------|---------------------|-----------------------|-----------------------|-------------------|-------------------|
| 1 | VM01 | Master Node | 8 | 16 | - | - | - | - |
| 2 | VM02 | AR_Admin | 8 | 20 | 4 | 16 | 12 | 12 |
| 3 | VM03 | AR_FTS | 8 | 20 | 4 | 19 | 12 | 12 |
| 4 | VM04 | RSSO | 4 | 8 | 2 | 3 | 2 | 2 |
| 5 | VM05 | MT Admin/FTS | 4 | 8 | 1 | 2 | 1 | 1 |
| 6 | VM06 | MT User | 6 | 10 | 4 | 8 | 6 | 6 |
| 7 | VM07 | AR_User | 6 | 14 | 4 | 12 | 8 | 8 |
| 8 | VM08 | Grafana | 8 | 12 | - | - | - | - |
| 9 | DB01 | Database (physical box) | 20 | 256 | - | - | - | - |

Methodology:

Throughput is measured for the "Create Incident" use case using the REST API against the "HPD:IncidentInterface_Create" interface form.
The REST API request template used is as follows:

{
  "values": {
    "First_Name": "${customer}",
    "Last_Name": "User",
    "Description": "REST API: Incident Creation by Jmeter",
    "Impact": "1-Extensive/Widespread",
    "Urgency": "1-Critical",
    "Status": "New",
    "Reported Source": "Direct Input",
    "Service_Type": "User Service Restoration",
    "z1D_Action": "CREATE"
  }
}

With the specified hardware, various transaction rates were simulated for each workload. During each test run, throughput, average response time, and hardware resource utilization were monitored and measured.
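The same Create Incident call can be scripted outside a load tool as well. The sketch below (Python) builds the request against the standard AR System REST endpoint for the interface form; the host name, token, and customer value are placeholder assumptions to adapt to your environment:

```python
# Sketch: build the Create Incident REST request used in this benchmark.
# The base URL, token, and customer name are placeholders (assumptions).

def build_incident_payload(customer: str) -> dict:
    """Build the request body used by the benchmark for one incident."""
    return {
        "values": {
            "First_Name": customer,
            "Last_Name": "User",
            "Description": "REST API: Incident Creation by Jmeter",
            "Impact": "1-Extensive/Widespread",
            "Urgency": "1-Critical",
            "Status": "New",
            "Reported Source": "Direct Input",
            "Service_Type": "User Service Restoration",
            "z1D_Action": "CREATE",
        }
    }

def build_request(base_url: str, token: str, customer: str):
    """Return (url, headers, body) for one Create Incident POST."""
    url = f"{base_url}/api/arsys/v1/entry/HPD:IncidentInterface_Create"
    headers = {
        "Authorization": f"AR-JWT {token}",  # AR System JWT auth header
        "Content-Type": "application/json",
    }
    return url, headers, build_incident_payload(customer)

url, headers, body = build_request("https://remedy.example.com", "<token>", "Allen")
print(url)
```

An HTTP client such as `requests` would then POST `body` to `url` with `headers`; the token itself is obtained from the AR System JWT login endpoint.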

Workload details:

The following table shows the planned workload. Each thread was targeted to create about 1,000 transactions per hour. Because each transaction starts only after the previous one completes, the targeted rate is approximate:

| REST API Threads | Expected Transactions/hr | Expected Transactions/min |
|------------------|--------------------------|---------------------------|
| 8 | 8,000 | 140 |
| 10 | 10,000 | 170 |
| 15 | 15,000 | 250 |
| 18 | 18,000 | 300 |
| 20 | 20,000 | 333 |
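The planned rates follow from the per-thread pacing: ~1,000 transactions per hour per thread gives each thread a budget of 3.6 seconds per transaction. The sketch below reproduces the arithmetic (the per-minute column in the table above is coarsely rounded for some rows):

```python
# Per-thread pacing assumed in this benchmark: ~1,000 transactions/hour.
PER_THREAD_TX_PER_HR = 1000

def expected_rates(threads: int):
    """Return (tx/hr, tx/min, per-transaction time budget in seconds)."""
    tx_hr = threads * PER_THREAD_TX_PER_HR
    return tx_hr, round(tx_hr / 60), 3600 / PER_THREAD_TX_PER_HR

for n in (8, 10, 15, 18, 20):
    tx_hr, tx_min, budget = expected_rates(n)
    print(f"{n:>2} threads -> {tx_hr:>6} tx/hr (~{tx_min} tx/min), "
          f"{budget:.1f} s budget per transaction")
```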

Note that each incident creation triggers related admin operations on the associated AR Admin Server. These operations are asynchronous and were not measured as part of this benchmark; however, the CPU and memory usage of the AR Admin Server was measured.
For reference, the related asynchronous operations are:

  • AR System Email Message
  • NTE:Notifier
  • NTE:Notifier Log
  • NTE:SYS-NT Process Control
  • HPD:Help Desk Assignment Log
  • HPD:Help Desk
  • SLM:Measurement

Load automation:

The workload was generated by the automation tool Silk Performer, which simulates multiple threads concurrently interacting with the system. Silk Performer simulates transactions by generating HTTP requests and processing HTTP responses, mirroring real-life usage. It was also used to measure server response times from the client (user) perspective. Silk Performer was deployed on a dedicated physical machine with 20 CPU cores and 192 GB of RAM, connected to the F5 load balancer in the test environment. This eliminates any potential resource constraint on the tool while executing the workload.
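Any tool that runs closed-loop worker threads behaves the same way: each thread issues its next transaction only after the previous one returns, so achieved throughput falls as response time rises. As a minimal illustration (not the Silk Performer script used in the benchmark), the sketch below runs N such workers against a stub transaction:

```python
import threading
import time

def run_closed_loop(n_threads: int, duration_s: float, transaction) -> int:
    """Run n_threads closed-loop workers for duration_s; return completed count."""
    done = []
    lock = threading.Lock()
    deadline = time.monotonic() + duration_s

    def worker():
        while time.monotonic() < deadline:
            transaction()          # next call starts only after this one returns
            with lock:
                done.append(1)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(done)

# Stub transaction taking ~10 ms in place of a real REST call.
count = run_closed_loop(4, 0.5, lambda: time.sleep(0.01))
print(f"completed {count} transactions")
```

With a real HTTP call in place of the stub, the count over an hour gives the actual transactions/hr figures reported below.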

Test Data Volume:

Foundation and application data for the following BMC products were used: BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management. A volume of data was preloaded (generated) into the database to simulate a real-life deployment.

Table 1 summarizes the foundation data inserted into the BMC Remedy AR System database prior to starting the tests.

Table 1. BMC Remedy IT Service Management foundation data

 

| Type | Count |
|------|-------|
| Companies (multi-tenancy) | 200 |
| Sites | 1,410 |
| People | 28,002 |
| People Organizations/Dept | 406 |
| People Application Permission Groups | 132,054 |
| Support Organizations | 18 |
| Support Groups | 106 |
| Support Group Functional Roles | 48,006 |
| Assignments | 1,407 |

 

Table 2 summarizes the application data inserted into the BMC Remedy AR System database prior to starting the tests.

Table 2. BMC Remedy IT Service Management and BMC Knowledge Management application data

| Type | Volume |
|------|--------|
| Incident | 502,949 |
| Change | 102,126 |
| Problem | 100,000 |
| Service Target | 965 |
| CI + relationships in two datasets | 9,981,281 |
| Incidents with CI Associations | 52,000 |
| Service CI | 5,026 |
| Knowledge Base Large Documents | 10,000 |
| Knowledge Base Small Documents | 80,000 |
| Knowledge Base Articles | 20,000 |

 

Table 3 summarizes the foundation data for BMC Service Request Management.

Table 3. BMC Service Request Management foundation data

| Type | Count |
|------|-------|
| AOT | 138 |
| PDT | 239 |
| SRD | 138 |
| Navigational Categories | 629 |
| Service Requests | 504,270 |
| Entitlement Rules | 94 |

Results:

The following tests were carried out. There were no errors during the test duration.

 

| Threads | Expected Tx/hr | Actual Tx/hr | Actual Tx/min | Actual Tx/sec | Avg Response (s) | 90th Percentile (s) |
|---------|----------------|--------------|---------------|---------------|------------------|---------------------|
| 8 | 8,000 | 8,466 | 141 | 2 | 1.40 | 1.67 |
| 10 | 10,000 | 10,200 | 170 | 3 | 1.52 | 1.79 |
| 15 | 15,000 | 14,526 | 242 | 4 | 1.71 | 2.06 |
| 18 | 18,000 | 16,670 | 278 | 4.6 | 1.88 | 2.27 |
| 20 | 20,000 | 17,274 | 288 | 4.8 | 2.16 | 2.67 |

| Threads | AR Admin Pod Avg CPU (%) | AR Admin Pod Memory Used (GB) | AR User Pod Avg CPU (%) | AR User Pod Memory Used (GB) |
|---------|--------------------------|-------------------------------|-------------------------|------------------------------|
| 8 | 35 | 11.4 | 33 | 10.1 |
| 10 | 38 | 15.4 | 41 | 10.9 |
| 15 | 58 | 12.2 | 65 | 10.4 |
| 18 | 59 | 12.5 | 76 | 10.5 |
| 20 | 59 | 12.6 | 78 | 10.5 |

Analysis

A higher transaction rate can be reached with more worker threads, but AR CPU utilization then exceeds the set threshold of 70%, and the response time per transaction becomes less than optimal. With a resource allocation of 4 vCPU/16 GB, approximately 15,000 transactions per hour can be achieved with an average response time of about 1.71 seconds.

When the transaction rate was increased for the same CPU/RAM allocation, the actual number of transactions fell below the expected number and the average transaction time increased.
This means that, for the given CPU/RAM allocation, the 15-worker-thread row in the table above represents the optimal throughput with respect to transaction time. The results also indicate that a higher transaction rate is possible if a slower response time is within a tolerable limit.
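A rough way to relate the measured CPU utilization to the 70% threshold is a first-order linear extrapolation of throughput versus CPU; this is a sizing aid only, since scaling is not perfectly linear near saturation:

```python
def throughput_at_cpu(measured_tx_hr: float, measured_cpu_pct: float,
                      target_cpu_pct: float = 70.0) -> float:
    """First-order linear extrapolation of throughput to a target CPU level."""
    return measured_tx_hr * target_cpu_pct / measured_cpu_pct

# 15-thread run: 14,526 tx/hr at 65% average CPU on the user pod.
print(round(throughput_at_cpu(14526, 65)))  # ~15,643 tx/hr at the 70% threshold
```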

Scalability Tests

For horizontal scale testing, two user pods were used, each with a resource allocation of 4 vCPU/16 GB RAM. Correspondingly, the AR Admin pod allocation was increased to 8 vCPU/20 GB RAM to service the additional work.
The initial workload was executed with 36 worker threads (about 36,000 transactions per hour) to validate the horizontal scaling architecture.

Initial Results:

The following two tables summarize the results and the corresponding resource usage.

 

| Worker Threads | Expected Tx/hr | Actual Tx/hr | Actual Tx/min | Actual Tx/sec | Avg Response (s) | 90th Percentile (s) |
|----------------|----------------|--------------|---------------|---------------|------------------|---------------------|
| 36 | 36,000 | 23,276 | 388 | 6.47 | 3.56 | 4.34 |

| Threads | AR Admin Pod Avg CPU (%) | AR Admin Pod Memory Used (GB) | AR User0 Pod Avg CPU (%) | AR User0 Pod Memory Used (GB) | AR User1 Pod Avg CPU (%) | AR User1 Pod Memory Used (GB) |
|---------|--------------------------|-------------------------------|--------------------------|-------------------------------|--------------------------|-------------------------------|
| 36 | 61 | 11.1 | 65 | 9.4 | 61 | 12.6 |

Analysis

The actual throughput observed was less than the expected throughput, and the average response time nearly doubled compared to the previous set of workloads.
However, CPU usage did not cross the threshold, which indicates that the application was not scaling horizontally.
Analysis of the application stack showed that the database was the bottleneck. The SQL was analyzed, the index below was added, and the workload was rerun.

USE [AR_Remedy_1902]
GO
CREATE NONCLUSTERED INDEX [IDX_RESTAPI]
ON [dbo].[T2318] ([C300364200],[C490008000])
INCLUDE ([C179],[C490009000])
GO

The schema (form) for the index is SLM:Measurement.

The field details are:

| Field ID | Field Name |
|----------|------------|
| 179 | InstanceID |
| 300364200 | MeasurementDone |
| 490008000 | SVTInstanceID |
| 490009000 | ApplicationInstanceID |
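In the AR System database, field <id> of a form is stored as column C<id> on the form's backing T<schemaId> table, which is how the index columns above map back to SLM:Measurement fields (the table name T2318 in the SQL is environment-specific). A small helper makes the mapping explicit:

```python
# AR System convention: form field <id> is stored as column C<id>.
FIELDS = {
    179: "InstanceID",
    300364200: "MeasurementDone",
    490008000: "SVTInstanceID",
    490009000: "ApplicationInstanceID",
}

def column_name(field_id: int) -> str:
    """Return the physical column name for an AR System field ID."""
    return f"C{field_id}"

for fid, name in FIELDS.items():
    print(f"{column_name(fid):>11} -> {name}")
```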

Note

The BMC performance testing team added this index because of the large number of records in the table under test. Analyze the queries during your own REST API testing and consult your database administrator to determine whether any indexes are required.

This index cannot be created by using Developer Studio and is not supported out of the box.

Final Result

The load test was rerun with the same workload (36 worker threads). Throughput of more than 30,000 transactions per hour was observed, although average CPU usage on the Platform-User AR Servers rose above 90%, which indicated that the database bottleneck had been removed.
The workload was then adjusted to the original horizontal scaling target of 30,000 transactions per hour.

The following two tables summarize the results and the corresponding resource usage:


 

| Worker Threads | Expected Tx/hr | Actual Tx/hr | Actual Tx/min | Actual Tx/sec | Avg Response (s) | 90th Percentile (s) |
|----------------|----------------|--------------|---------------|---------------|------------------|---------------------|
| 36 | 30,000 | 29,205 | 487 | 8.1 | 3.56 | 4.34 |

| Threads | AR Admin Pod Avg CPU (%) | AR Admin Pod Memory Used (GB) | AR User0 Pod Avg CPU (%) | AR User0 Pod Memory Used (GB) | AR User1 Pod Avg CPU (%) | AR User1 Pod Memory Used (GB) |
|---------|--------------------------|-------------------------------|--------------------------|-------------------------------|--------------------------|-------------------------------|
| 36 | 61 | 11.1 | 65 | 9.4 | 61 | 12.6 |


Configuration Settings:

For the entire benchmark, the Fast Threads queue (RPC queue 390620) was configured with 16/20 (min/max) threads in ar.conf.
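For reference, fast-queue thread counts are typically set in ar.conf (ar.cfg on Windows) with the Private-RPC-Socket directive; the fragment below reflects the 16/20 setting used here, but verify the exact syntax against your AR Server version's configuration reference:

```
# RPC queue 390620 (Fast): minimum 16, maximum 20 threads
Private-RPC-Socket: 390620 16 20
```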

Summary:

For pragmatic usage, the summary of the above tests is as follows:
  • For the given 4 vCPU/12 GB user-facing VM/node, the throughput is 15,000 transactions per hour at a rate of approximately 4 transactions per second, i.e., a throughput of 3,750 transactions per hour per CPU core.
  • The throughput can be extended to ~16,500 transactions per hour at approximately 4.6 transactions per second, but with a degradation of about 13% in response time. Note that ~16,500 transactions per hour represents the maximum throughput achieved on this hardware: the expected throughput with 18 worker threads was 18,000 transactions per hour, and the average CPU utilization of 76% on the Platform-User AR Server exceeded the typical threshold of 70%.
  • The REST API load was scalable, and the throughput obtained was nearly linear (up to the limited horizontal scaling done for this benchmark): with two user-facing AR Servers of 4 vCPU/12 GB each, the throughput is 29,205 transactions per hour at a rate of approximately 8 transactions per second.
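The per-core figure above gives a rough sizing rule of thumb. The sketch below turns it into a core-count estimate; linear scaling is an assumption that was verified only up to the two-pod test in this benchmark:

```python
import math

# Measured in this benchmark: 15,000 tx/hr on a 4-core user-facing pod.
TX_PER_HR_PER_CORE = 3750

def cores_needed(target_tx_per_hr: float) -> int:
    """Estimate user-facing AR CPU cores for a target Create Incident rate."""
    return math.ceil(target_tx_per_hr / TX_PER_HR_PER_CORE)

print(cores_needed(30000))  # 8 cores, i.e. two 4-core pods as tested above
```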

Notes:
  • The above results will vary with factors such as the environment, hardware, the specific REST API, other AR application customizations, and so on.
  • The Platform-Admin AR Server must be sized correspondingly to handle the related transactions.


 


Remedy Deployment 19.08