Normalized compute hours overview


Normalized compute hours are the approximate hours of compute time that an instance has used. This value indicates how much compute resource is being used in your infrastructure. You can use it, together with the number of instances, to estimate the cost of your public cloud or on-premises infrastructure, or to determine how much compute time you are getting for the cost that you pay.

Only compute-related services and resources (for example, cloud services such as Amazon EC2) have a normalized compute hours value.

For more information, see the following sections:



Normalized compute hours calculation

A cloud provider's environment contains different types of instances. Compute hours can be compared across instances by normalizing them against a reference instance type that is predefined for each cloud provider.

The normalized compute hours value is based on the normalization factor of this predefined instance type. The normalization factor is a number that is assigned to an instance type and is used to compare the relative sizes of instance types. The larger the instance, the higher the normalization factor, and vice versa. Every existing instance type has a normalization factor.

Normalized compute hours for an instance = (hours of compute time) x (normalization factor of the associated instance type)

For example, if the normalization factor for m1.small is 1, then the normalized compute hours for 2 hours of compute time are:

One m1.small instance: 1 x 2 x 1 = 2
10 m1.small instances: 10 x 2 x 1 = 20
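
To make the calculation concrete, the following Python sketch applies the formula above. It is an illustration only; the function and the table of factors are hypothetical, not part of any product API.

```python
# Illustrative normalization factors, taken from the tables later in this topic.
NORMALIZATION_FACTORS = {
    "m1.small": 1.0,
    "m1.large": 3.75,
}

def normalized_compute_hours(instance_type, hours, instance_count=1):
    """Return instance_count x hours x normalization factor of the instance type."""
    return instance_count * hours * NORMALIZATION_FACTORS[instance_type]

print(normalized_compute_hours("m1.small", hours=2))                     # 2.0
print(normalized_compute_hours("m1.small", hours=2, instance_count=10))  # 20.0
```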


Each hour of compute time costs more for a larger instance type than for a smaller one. For example, the same number of m1.large instances costs more than the same number of m1.small instances.

Normalization factors

The normalization factor calculation for an instance type is based on the size of a standard instance type. The configuration of the standard instance type is the same for both the public cloud providers and on-premises infrastructure.

Standard instance type (refvm) configuration:

refvm_cpu = 1
refvm_mem = 2 GB
Normalization factor = max(vCPU/refvm_cpu, vMem/refvm_mem) = max(vCPU/1, vMem/2)
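
The same rule can be written as a minimal Python sketch (the function name is illustrative only):

```python
REFVM_CPU = 1     # vCPUs of the standard instance type (refvm)
REFVM_MEM_GB = 2  # memory of the standard instance type (refvm), in GB

def normalization_factor(vcpu, vmem_gb):
    """Normalization factor = max(vCPU / refvm_cpu, vMem / refvm_mem)."""
    return max(vcpu / REFVM_CPU, vmem_gb / REFVM_MEM_GB)

print(normalization_factor(2, 8))     # t2.large  -> 4.0
print(normalization_factor(1, 3.75))  # m3.medium -> 1.875 (listed as 1.88 in the tables below)
```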

The following tables list the normalization factor for each existing instance type, by provider:

AWS instances

General Purpose - Current Generation

| Family | Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- | --- |
| T2 | t2.nano | 1 | 0.5 | 1 |
| T2 | t2.micro | 1 | 1 | 1 |
| T2 | t2.small | 1 | 2 | 1 |
| T2 | t2.medium | 2 | 4 | 2 |
| T2 | t2.large | 2 | 8 | 4 |
| T2 | t2.xlarge | 4 | 16 | 8 |
| T2 | t2.2xlarge | 8 | 32 | 16 |
| M4 | m4.large | 2 | 8 | 4 |
| M4 | m4.xlarge | 4 | 16 | 8 |
| M4 | m4.2xlarge | 8 | 32 | 16 |
| M4 | m4.4xlarge | 16 | 64 | 32 |
| M4 | m4.10xlarge | 40 | 160 | 80 |
| M4 | m4.16xlarge | 64 | 256 | 128 |
| M3 | m3.medium | 1 | 3.75 | 1.88 |
| M3 | m3.large | 2 | 7.5 | 3.75 |
| M3 | m3.xlarge | 4 | 15 | 7.5 |
| M3 | m3.2xlarge | 8 | 30 | 15 |

Compute Optimized - Current Generation

| Family | Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- | --- |
| C4 | c4.large | 2 | 3.75 | 2 |
| C4 | c4.xlarge | 4 | 7.5 | 4 |
| C4 | c4.2xlarge | 8 | 15 | 8 |
| C4 | c4.4xlarge | 16 | 30 | 16 |
| C4 | c4.8xlarge | 36 | 60 | 36 |
| C3 | c3.large | 2 | 3.75 | 2 |
| C3 | c3.xlarge | 4 | 7.5 | 4 |
| C3 | c3.2xlarge | 8 | 15 | 8 |
| C3 | c3.4xlarge | 16 | 30 | 16 |
| C3 | c3.8xlarge | 32 | 60 | 32 |

Memory Optimized - Current Generation

| Family | Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- | --- |
| X1 | x1.16xlarge | 64 | 976 | 488 |
| X1 | x1.32xlarge | 128 | 1952 | 976 |
| R4 | r4.large | 2 | 15.25 | 7.63 |
| R4 | r4.xlarge | 4 | 30.5 | 15.25 |
| R4 | r4.2xlarge | 8 | 61 | 30.5 |
| R4 | r4.4xlarge | 16 | 122 | 61 |
| R4 | r4.8xlarge | 32 | 244 | 122 |
| R4 | r4.16xlarge | 64 | 488 | 244 |
| R3 | r3.large | 2 | 15.25 | 7.63 |
| R3 | r3.xlarge | 4 | 30.5 | 15.25 |
| R3 | r3.2xlarge | 8 | 61 | 30.5 |
| R3 | r3.4xlarge | 16 | 122 | 61 |
| R3 | r3.8xlarge | 32 | 244 | 122 |

Accelerated Computing Instances - Current Generation

| Family | Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- | --- |
| P2 | p2.xlarge | 4 | 61 | 30.5 |
| P2 | p2.8xlarge | 32 | 488 | 244 |
| P2 | p2.16xlarge | 64 | 732 | 366 |
| G2 | g2.2xlarge | 8 | 15 | 8 |
| G2 | g2.8xlarge | 32 | 60 | 32 |
| F1 | f1.2xlarge | 8 | 122 | 61 |
| F1 | f1.16xlarge | 64 | 976 | 488 |

Storage Optimized - Current Generation

| Family | Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- | --- |
| D2 - Dense Storage Instances | d2.xlarge | 4 | 30.5 | 15.25 |
| D2 - Dense Storage Instances | d2.2xlarge | 8 | 61 | 30.5 |
| D2 - Dense Storage Instances | d2.4xlarge | 16 | 122 | 61 |
| D2 - Dense Storage Instances | d2.8xlarge | 32 | 244 | 122 |
| I3 - High I/O Instances | i3.large | 2 | 15.25 | 7.63 |
| I3 - High I/O Instances | i3.xlarge | 4 | 30.5 | 15.25 |
| I3 - High I/O Instances | i3.2xlarge | 8 | 61 | 30.5 |
| I3 - High I/O Instances | i3.4xlarge | 16 | 122 | 61 |
| I3 - High I/O Instances | i3.8xlarge | 32 | 244 | 122 |
| I3 - High I/O Instances | i3.16xlarge | 64 | 488 | 244 |

Cluster Networking - Current Generation

| Family | Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- | --- |
| I2 | i2.xlarge | 4 | 30.5 | 15.25 |
| I2 | i2.2xlarge | 8 | 61 | 30.5 |
| I2 | i2.4xlarge | 16 | 122 | 61 |
| I2 | i2.8xlarge | 32 | 244 | 122 |

Previous Generation

| Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- |
| t1.micro | 1 | 0.6 | 1 |
| m1.small | 1 | 1.7 | 1 |
| m1.medium | 1 | 3.7 | 1.85 |
| m1.large | 2 | 7.5 | 3.75 |
| m1.xlarge | 4 | 15 | 7.5 |
| m2.xlarge | 2 | 17.1 | 8.55 |
| m2.2xlarge | 4 | 34.2 | 17.1 |
| m2.4xlarge | 8 | 68.4 | 34.2 |
| c1.medium | 2 | 1.7 | 2 |
| c1.xlarge | 8 | 7 | 8 |
| cc2.8xlarge | 32 | 60.5 | 32 |
| cg1.4xlarge | 16 | 22.5 | 16 |
| cr1.8xlarge | 32 | 224 | 112 |
| hi1.4xlarge | 16 | 60.5 | 30.25 |
| hs1.8xlarge | 16 | 117 | 58.5 |

Azure instances

Basic Instances

| Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- |
| A0 | 1 | 0.75 | 1 |
| A1 | 1 | 1.75 | 1 |
| A2 | 2 | 3.5 | 2 |
| A3 | 4 | 7 | 4 |
| A4 | 8 | 14 | 8 |

Standard Instances

| Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- |
| A1 v2 | 1 | 2 | 1 |
| A2 v2 | 2 | 4 | 2 |
| A4 v2 | 4 | 8 | 4 |
| A8 v2 | 8 | 16 | 8 |
| A2m v2 | 2 | 16 | 8 |
| A4m v2 | 4 | 32 | 16 |
| A8m v2 | 8 | 64 | 32 |
| H8m | 8 | 112 | 56 |
| H16mr | 16 | 224 | 112 |
| H16r | 16 | 112 | 56 |
| H16m | 16 | 224 | 112 |
| H16 | 16 | 112 | 56 |
| H8 | 8 | 56 | 28 |
| A0 | 1 | 0.75 | 1 |
| A1 | 1 | 1.75 | 1 |
| A2 | 2 | 3.5 | 2 |
| A3 | 4 | 7 | 4 |
| A4 | 8 | 14 | 8 |
| A10 | 8 | 56 | 28 |
| A11 | 16 | 112 | 56 |
| A5 | 2 | 14 | 7 |
| A6 | 4 | 28 | 14 |
| A7 | 8 | 56 | 28 |
| A8 | 8 | 56 | 28 |
| A9 | 16 | 112 | 56 |
| D1 | 1 | 3.5 | 1.75 |
| D11 | 2 | 14 | 7 |
| D12 | 4 | 28 | 14 |
| D13 | 8 | 56 | 28 |
| D14 | 16 | 112 | 56 |
| D2 | 2 | 7 | 3.5 |
| D3 | 4 | 14 | 7 |
| D4 | 8 | 28 | 14 |
| D11 v2 | 2 | 14 | 7 |
| D12 v2 | 4 | 28 | 14 |
| D13 v2 | 8 | 56 | 28 |
| D14 v2 | 16 | 112 | 56 |
| D15 v2 | 20 | 140 | 70 |
| D1 v2 | 1 | 3.5 | 1.75 |
| D2 v2 | 2 | 7 | 3.5 |
| D3 v2 | 4 | 14 | 7 |
| D4 v2 | 8 | 28 | 14 |
| D5 v2 | 16 | 56 | 28 |
| F1 | 1 | 2 | 1 |
| F16 | 16 | 32 | 16 |
| F2 | 2 | 4 | 2 |
| F4 | 4 | 8 | 4 |
| F8 | 8 | 16 | 8 |
| NC12 | 12 | 112 | 56 |
| NC24 | 24 | 224 | 112 |
| NC24r | 24 | 224 | 112 |
| NC6 | 6 | 56 | 28 |
| NV12 | 12 | 112 | 56 |
| NV24 | 24 | 224 | 112 |
| NV6 | 6 | 56 | 28 |

Additional VMs for Windows OS in East US region (Standard Tier)

| Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- |
| G1 | 2 | 28 | 14 |
| G2 | 4 | 56 | 28 |
| G3 | 8 | 112 | 56 |
| G4 | 16 | 224 | 112 |
| G5 | 32 | 448 | 224 |
| L16 | 16 | 128 | 64 |
| L32 | 32 | 256 | 128 |
| L4 | 4 | 32 | 16 |
| L8 | 8 | 64 | 32 |

Google Cloud Platform instances

Information

For a custom instance type (or machine type) in Google Cloud Platform, the normalization factor cannot be predefined. It is calculated dynamically (during runtime) by using the standard instance type (refvm) configuration.
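
For example, the factor for a custom machine type can be derived at runtime from its vCPU count and memory, as in the following sketch. It assumes a name of the form custom-VCPUS-MEMORY_MB; the parsing helper is hypothetical and not part of any product API.

```python
REFVM_CPU = 1     # vCPUs of the standard instance type (refvm)
REFVM_MEM_GB = 2  # memory of the standard instance type (refvm), in GB

def custom_machine_type_factor(machine_type):
    """Derive the normalization factor for a name such as 'custom-4-10240'
    (assumed pattern: custom-VCPUS-MEMORY_MB)."""
    _, vcpu, mem_mb = machine_type.split("-")
    vmem_gb = int(mem_mb) / 1024
    return max(int(vcpu) / REFVM_CPU, vmem_gb / REFVM_MEM_GB)

print(custom_machine_type_factor("custom-4-10240"))  # max(4, 10 / 2) = 5.0
```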

Shared Core - Bursting

| Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- |
| f1-micro | 1 | 0.6 | 1 |
| g1-small | 1 | 1.7 | 1 |

Standard

| Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- |
| n1-standard-1 | 1 | 3.75 | 1.88 |
| n1-standard-2 | 2 | 7.5 | 3.75 |
| n1-standard-4 | 4 | 15 | 7.5 |
| n1-standard-8 | 8 | 30 | 15 |
| n1-standard-16 | 16 | 60 | 30 |
| n1-standard-32 | 32 | 120 | 60 |
| n1-standard-64 | 64 | 240 | 120 |

High Memory

| Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- |
| n1-highmem-2 | 2 | 13 | 6.5 |
| n1-highmem-4 | 4 | 26 | 13 |
| n1-highmem-8 | 8 | 52 | 26 |
| n1-highmem-16 | 16 | 104 | 52 |
| n1-highmem-32 | 32 | 208 | 104 |
| n1-highmem-64 | 64 | 416 | 208 |

High CPU

| Instance type | vCPU | Memory (GB) | Normalization factor |
| --- | --- | --- | --- |
| n1-highcpu-2 | 2 | 1.8 | 2 |
| n1-highcpu-4 | 4 | 3.6 | 4 |
| n1-highcpu-8 | 8 | 7.2 | 8 |
| n1-highcpu-16 | 16 | 14.4 | 16 |
| n1-highcpu-32 | 32 | 28.8 | 32 |
| n1-highcpu-64 | 64 | 57.6 | 64 |

On-premises instances

The normalization factor for an on-premises virtual machine (VM) or instance is calculated from the configuration of the VM, by using the same standard instance type (refvm) formula.