
Transaction-Based Sizing for Create Incident Using the REST API


Objective:

To determine the throughput per unit of hardware for ITSM REST API integration, using the Create Incident operation.

Deployment Environment:

The environment for this benchmark was containerized. On virtual or physical hardware, the results are similar or negligibly better. The hardware specifications below describe the nodes used to deploy the containers required for this benchmark. For virtual or physical hardware, the container controller (Master Node) can be ignored and the hardware specification can be taken from the "Pod Configuration" columns. The logical deployment architecture of the BMC Remedy ITSM Suite is shown below (the container layer is not shown):

[Figure: Logical deployment architecture of the BMC Remedy ITSM Suite]

Note

The above diagram shows the standard architecture in our lab environment. For the REST API workload, all REST calls were serviced by the Platform-User AR Server. Related admin operations triggered by each incident creation were serviced by Platform-Admin, and FTS indexing of newly created incidents was performed by Platform-FTS. The Mid Tiers shown in the diagram were not invoked during the REST API workload.

Hardware Specifications:

The hardware specifications used in the diagrammed architecture are as follows:

Container Environment

Sno. | VM/Node | Pod Role                | Node CPU (Cores) | Node Memory (GB) | Pod CPU Limit (Cores) | Pod Memory Limit (GB) | JVM Xms (GB) | JVM Xmx (GB)
1    | VM01    | Master Node             | 8                | 16               | -                     | -                     | -            | -
2    | VM02    | AR_Admin                | 8                | 20               | 4                     | 16                    | 12           | 12
3    | VM03    | AR_FTS                  | 8                | 20               | 4                     | 19                    | 12           | 12
4    | VM04    | RSSO                    | 4                | 8                | 2                     | 3                     | 2            | 2
5    | VM05    | MT Admin/FTS            | 4                | 8                | 1                     | 2                     | 1            | 1
6    | VM06    | MT User                 | 6                | 10               | 4                     | 8                     | 6            | 6
7    | VM07    | AR_User                 | 6                | 14               | 4                     | 12                    | 8            | 8
8    | VM08    | Grafana                 | 8                | 12               | -                     | -                     | -            | -
9    | DB01    | Database (physical box) | 20               | 256              | -                     | -                     | -            | -

Methodology:

The throughput is measured for the "Create Incident" use case using the REST API against the "HPD:IncidentInterface_Create" interface form.
The REST API template used is shown below:

{
  "values": {
    "First_Name": "${customer}",
    "Last_Name": "User",
    "Description": "REST API: Incident Creation by Jmeter",
    "Impact": "1-Extensive/Widespread",
    "Urgency": "1-Critical",
    "Status": "New",
    "Reported Source": "Direct Input",
    "Service_Type": "User Service Restoration",
    "z1D_Action": "CREATE"
  }
}
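For illustration, the following minimal Python sketch shows how such a create call can be driven against the standard AR System REST API (login at /api/jwt/login, entry creation at /api/arsys/v1/entry/<form>). The server URL and credentials are placeholders, not values from this benchmark:

import requests

BASE_URL = "https://remedy.example.com"   # placeholder: load balancer / Platform-User AR Server

# Obtain an AR-JWT token from the standard AR System login endpoint.
login = requests.post(
    f"{BASE_URL}/api/jwt/login",
    data={"username": "Demo", "password": "<password>"},   # placeholder credentials
)
login.raise_for_status()
headers = {"Authorization": f"AR-JWT {login.text}"}

# Create an incident through the interface form, using the template above.
payload = {
    "values": {
        "First_Name": "Allen",            # the test substitutes ${customer} per thread
        "Last_Name": "User",
        "Description": "REST API: Incident Creation by Jmeter",
        "Impact": "1-Extensive/Widespread",
        "Urgency": "1-Critical",
        "Status": "New",
        "Reported Source": "Direct Input",
        "Service_Type": "User Service Restoration",
        "z1D_Action": "CREATE",
    }
}
resp = requests.post(
    f"{BASE_URL}/api/arsys/v1/entry/HPD:IncidentInterface_Create",
    json=payload,
    headers=headers,
)
resp.raise_for_status()
print(resp.status_code, resp.headers.get("Location"))   # 201 and the URL of the new entry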

With the specified hardware, various transaction rates were simulated for each workload. During each test run, the throughput, the average response time, and the hardware resource utilization were monitored and measured.

Workload details:

The following table shows the planned workload. Each thread was targeted to create approximately 1,000 transactions per hour. Because each transaction starts only after the previous one completes, the targeted rate is approximate:

REST API Threads | Expected Transactions/hr | Expected Transactions/min
10               | 10,000                   | 170
20               | 20,000                   | 335
30               | 30,000                   | 500
40               | 40,000                   | 670
50               | 50,000                   | 835
55               | 55,000                   | 920
60               | 60,000                   | 1,000

Note that each incident creation triggers related admin operations on the associated AR Admin Server. These operations are asynchronous and were not measured as part of this benchmark; however, the CPU and memory usage of the AR Admin Server was measured.
For reference, the related asynchronous operations are:

  • AR System Email Message
  • NTE:Notifier
  • NTE:Notifier Log
  • NTE:SYS-NT Process Control
  • HPD:Help Desk Assignment Log
  • HPD:Help Desk
  • SLM:Measurement

Load automation:

The workload was driven by the load automation tool Silk Performer, which simulates multiple threads concurrently interacting with the system. Silk Performer simulates transactions by generating HTTP requests and processing HTTP responses, mirroring real-life usage, and was also used to measure server response times from the client (user) perspective. Silk Performer was deployed on a dedicated physical machine with 20 CPU cores and 192 GB of RAM, connected to the F5 load balancer in the test environment. This eliminated any potential resource constraint on the tool while executing the workload.
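Silk Performer scripting itself is proprietary, so the following Python sketch is only a model of the closed-loop thread behavior described above: each thread is paced to about 1,000 transactions per hour, and each transaction starts only after the previous one completes. Here create_incident stands in for the REST call from the earlier sketch:

import threading
import time

TARGET_PER_THREAD_PER_HR = 1000               # ~1,000 transactions per thread per hour
PACING_S = 3600 / TARGET_PER_THREAD_PER_HR    # 3.6 s between transaction starts

def worker(create_incident, duration_s, results, lock):
    # Closed loop: the next transaction starts only after the previous one completes.
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        start = time.monotonic()
        create_incident()                     # blocking REST call
        elapsed = time.monotonic() - start
        with lock:
            results.append(elapsed)
        # Sleep off the remainder of the pacing interval; if the call took longer
        # than the interval, the achieved rate falls below the target.
        time.sleep(max(0.0, PACING_S - elapsed))

def run_load(threads, create_incident, duration_s=300):
    results, lock = [], threading.Lock()
    pool = [threading.Thread(target=worker, args=(create_incident, duration_s, results, lock))
            for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    achieved_per_hr = len(results) * 3600 / duration_s
    avg_response_s = sum(results) / len(results) if results else 0.0
    return achieved_per_hr, avg_response_s

# Example: run_load(50, create_incident, duration_s=3600) targets ~50,000 transactions/hr.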

Test Data Volume:

Foundation and application data for the following BMC products were used: BMC Remedy IT Service Management, BMC Service Request Management, and BMC Knowledge Management. A volume of data was preloaded (generated) into the database to simulate a real-life deployment.

Table 1 summarizes the foundation data inserted into the BMC Remedy AR System database prior to starting the tests.

Table 1. BMC Remedy IT Service Management foundation data

 

Type                                 | Count
Companies (multi-tenancy)            | 200
Sites                                | 1,410
People                               | 28,002
People Organizations/Dept            | 406
People Application Permission Groups | 132,054
Support Organizations                | 18
Support Groups                       | 106
Support Group Functional Roles       | 48,006
Assignments                          | 1,407

 

Table 2 summarizes the application data inserted into the BMC Remedy AR System database prior to starting the tests.

Table 2. BMC Remedy IT Service Management and BMC Knowledge Management application data

Type                              | Volume
Incident                          | 502,949
Change                            | 102,126
Problem                           | 100,000
Service Target                    | 965
CI + relationships (two datasets) | 9,981,281
Incidents with CI Associations    | 52,000
Service CI                        | 5,026
Knowledge Base Large Documents    | 10,000
Knowledge Base Small Documents    | 80,000
Knowledge Base Articles           | 20,000

 

Table 3 summarizes the foundation data for BMC Service Request Management.

Table 3. BMC Service Request Management foundation data

Type                    | Count
AOT                     | 138
PDT                     | 239
SRD                     | 138
Navigational Categories | 629
Service Requests        | 504,270
Entitlement Rules       | 94

Results:

The following tests were carried out. There were no errors during the test runs.

 

Threads | Expected Transactions/hr | Actual Transactions/hr | Actual Transactions/min | Actual Transactions/sec | Avg Response Time (s) | 90th Percentile (s)
10      | 10,000                   | 12,388                 | 206                     | 3                       | 0.90                  | 1.13
20      | 20,000                   | 22,671                 | 378                     | 6.3                     | 1.17                  | 1.36
30      | 30,000                   | 31,794                 | 530                     | 8.8                     | 1.33                  | 1.57
40      | 40,000                   | 43,128                 | 719                     | 12.0                    | 1.39                  | 1.69
50      | 50,000                   | 52,084                 | 868                     | 14.5                    | 1.44                  | 1.76
55      | 55,000                   | 53,407                 | 890                     | 14.8                    | 1.69                  | 2.08
60      | 60,000                   | 54,660                 | 911                     | 15.2                    | 1.93                  | 2.41

Threads | AR Admin Pod Avg CPU (%) | AR Admin Pod Memory Used (GB) | AR User Pod Avg CPU (%) | AR User Pod Memory Used (GB)
10      | 14.3                     | 13.0                          | 14.4                    | 9.8
20      | 24.3                     | 17.1                          | 25.2                    | 12.2
30      | 24.4                     | 17.5                          | 34.7                    | 12.6
40      | 25.2                     | 17.5                          | 46.4                    | 12.6
50      | 26.1                     | 17.1                          | 56.2                    | 12.7
55      | 27.2                     | 17.2                          | 67.6                    | 12.9
60      | 30.9                     | 17.6                          | 73.2                    | 12.8

Analysis:

A higher transaction rate can be observed with more worker threads, but the AR User pod CPU utilization eventually went beyond the set threshold of 70%, which means the response time per transaction became less than optimal. With a resource allocation of 4 vCPU/16 GB, approximately 52,000 transactions per hour can be achieved with an average response time of about 1.44 seconds.

When the transaction rate was increased for the same CPU/RAM allocation, the actual number of transactions fell below the expected number and the average transaction time increased. For the given CPU/RAM allocation, the 50-worker-thread row in the table above therefore represents the optimal throughput with respect to transaction time. The results also indicate that a higher transaction rate is possible if a slower response time is within a tolerable limit.

Scalability Tests:

For horizontal scale testing, two user pods were used, each with a resource allocation of 4 vCPU/16 GB RAM. Correspondingly, the AR Admin pod resource allocation was increased to 8 vCPU/20 GB RAM to service the additional work.
The workload was executed with 75 worker threads (targeting 75,000 transactions per hour) to validate the horizontal scaling architecture.

Result:

The load test was run with the same workload using 75 worker threads, and more than 75,000 transactions per hour were observed.

The following two tables summarize the result and the corresponding resource usage:


 

Worker Threads | Expected Transactions/hr | Actual Transactions/hr | Actual Transactions/min | Actual Transactions/sec | Avg Response Time (s) | 90th Percentile (s)
75             | 75,000                   | 77,158                 | 1,286                   | 21.4                    | 3.49                  | 3.82

Threads | AR Admin Pod Avg CPU (%) | AR Admin Pod Memory Used (GB) | AR User0 Pod Avg CPU (%) | AR User0 Pod Memory Used (GB) | AR User1 Pod Avg CPU (%) | AR User1 Pod Memory Used (GB)
75      | 54.9                     | 11.1                          | 63.8                     | 9.4                           | 61.2                     | 12.6


Indexes applied:

USE [<Database_Instance>]
GO
CREATE NONCLUSTERED INDEX [IDX_<Name>]
ON [dbo].[T2318] ([C300364200],[C490008000])
INCLUDE ([C179],[C490009000])
GO

USE [<Database_Instance>]
GO
CREATE NONCLUSTERED INDEX [IDX_<Name>]
ON [dbo].[T2173] ([C301494900],[C490008000],[C490009000])
GO

Note:

Schema T2318 backs the SLM:Measurement form.

Schema T2173 backs the SLM:SLAComplianceHistory form.

Configuration Settings:

For the entire benchmark, the Fast queue (RPC program 390620) was set to a minimum of 16 and a maximum of 20 threads in ar.conf.
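For reference, private queue threads in ar.conf are conventionally configured with the Private-RPC-Socket directive (RPC program 390620 is the Fast queue). The exact line used in this benchmark was not captured, so the following is only a sketch of what such a setting typically looks like:

# ar.conf excerpt (sketch): Fast queue (RPC 390620), min 16 / max 20 threads
Private-RPC-Socket: 390620 16 20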

Summary:

For pragmatic usage, the summary of the above tests is as follows:

• For the given 4 vCPU/12 GB user-facing VM/node, the throughput is 50,000 transactions per hour at a rate of approximately 14.5 transactions per second, that is, a throughput of 12,500 transactions per hour per CPU core.
• The throughput can be extended to ~53,400 transactions per hour at a rate of approximately 14.8 transactions per second, but with a degradation of about 17% in response time. Note that approximately 53,400 transactions per hour represents the maximum throughput achieved on the given hardware, as the expected throughput with 55 worker threads was 55,000 transactions per hour; the average CPU utilization of 72% on the Platform-User AR Server also exceeded the typical threshold of 70%.
• The REST API load was scalable, and the throughput obtained was nearly linear (up to the limited horizontal scaling performed for this benchmark): for two user-facing AR Servers with 4 vCPU/12 GB each, the throughput is 77,158 transactions per hour at a rate of approximately 21 transactions per second.
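The per-core figure and the quoted response-time degradation follow directly from the tables above; as a quick arithmetic check (plain Python, values copied from the results tables):

cores = 4                      # CPU cores allotted to the user-facing AR Server pod
sustained_per_hr = 50_000      # throughput sustained at 50 threads
print(sustained_per_hr / cores)             # 12500.0 -> 12,500 transactions/hr per core

avg_50, avg_55 = 1.44, 1.69    # average response times at 50 and 55 threads (seconds)
print((avg_55 - avg_50) / avg_50 * 100)     # ~17.4 -> the ~17% degradation quoted above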

Notes:

  • The above results will vary with factors such as the environment, hardware, the specific REST API, other AR application customizations, and so on.
  • The indexes applied reflect our test environment, which contains a very large number of SLA-related records.
  • The Platform-Admin AR Server must be sized correspondingly to handle the related transactions.




 

