Performance considerations
Simultaneous multithreading (SMT)
Working in multithreading (MT) mode allows you to run multiple threads per zIIP core, where each thread is comparable to a CP core in a pre-multithreading environment, resulting in increased zIIP processing capacity. To enable zIIP MT mode, define the PROCVIEW parameter in the LOADxx member of SYS1.IPLPARM so that z/OS can use the SMT function. PROCVIEW defines the processor view as a core, which supports 1 to n threads. Related parameters are MT_ZIIP_MODE and HIPERDISPATCH in IEAOPTxx. For more information, see the z/OS MVS Initialization and Tuning Guide topic in IBM Documentation.
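As a sketch, the relevant statements might look like the following (the member suffixes and the choice of two threads per core are assumptions; verify against your site's standards).

In LOADxx of SYS1.IPLPARM:

```
PROCVIEW CORE
```

In IEAOPTxx of SYS1.PARMLIB:

```
MT_ZIIP_MODE=2
HIPERDISPATCH=YES
```

Changing PROCVIEW requires an IPL, while MT_ZIIP_MODE can be changed dynamically with the SET OPT=xx command. HiperDispatch is required for MT mode.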
WLM service class considerations
Note the following considerations for WLM service classes:
- The agent utilizes zIIP engines. If the production workload also utilizes zIIP, associate the agent with a service class of lower priority than the production workload's service class to avoid slowing down the production workload.
- When issuing CLI commands in a highly constrained CPU environment, verify that the issuer, whether a TSO user ID or a batch job, has at least the same priority as the agent.
zIIP-eligible work running on CP
Note the following considerations for zIIP-eligible work running on CP:
zIIP on CP reporting
Turning on zIIP-on-CP monitoring provides information on zIIP-eligible work that overflowed to CPs. The monitoring is enabled by default only when zIIP processors are configured on the system. If no zIIP processors are configured and you want to see how much CP time would be saved by configuring zIIP processors, set the PROJECTCPU parameter to YES in IEAOPTxx. This enables monitoring and causes the zIIP on CP chart to be displayed in the agent screen. For more information, see the z/OS MVS Initialization and Tuning Guide topic in IBM Documentation.
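For example, in IEAOPTxx (a sketch; the member suffix is site-specific):

```
PROJECTCPU=YES
```

The parameter can be activated without an IPL by using the SET OPT=xx command.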
Number of zIIP engines
BMC AMI Cloud performance scales linearly: the more zIIP engines available, the higher the throughput.
System-wide settings
Whether zIIP-eligible work is allowed to spill over to CPs system-wide is defined by the IIPHONORPRIORITY parameter of IEAOPTxx. The default is YES, which allows standard CPs to run zIIP-eligible and non-zIIP-eligible work in priority order. For more information, see the z/OS MVS Initialization and Tuning Guide topic in IBM Documentation.
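For reference, the system-wide setting in IEAOPTxx looks like this (YES is the default, shown here explicitly):

```
IIPHONORPRIORITY=YES
```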
Individual service class settings
The Honor Priority attribute of a WLM service class lets you prevent individual work from overflowing to CPs regardless of the system-wide setting. Setting the attribute to NO ensures that zIIP-eligible work in that service class does not overflow to CPs.
In some cases (such as sub-capacity CPs or CP capping), overflowing to CPs can degrade performance because of insufficient CP resources. For more information, see the z/OS MVS Planning: Workload Management topic in IBM Documentation.
Improving TCP/IP CPU usage and throughput
Note the following considerations to improve TCP/IP CPU usage and throughput.
Reusing TCP/IP connections to cloud storage when using HTTPS
Connection reuse happens automatically for HTTP sessions (unencrypted sessions).
To enable connection reuse for HTTPS sessions (encrypted sessions), you must enable SSL trust between the agent and the object storage. For more information, see Enabling-trust-between-the-agent-and-the-object-storage.
Segmentation offloading
TCP/IP supports offloading segmentation work to the OSA-Express card. This feature reduces CPU usage and increases network throughput. Enable it with the SEGMENTATIONOFFLOAD parameter of the IPCONFIG statement in the TCP/IP profile.
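In the TCP/IP profile, for example (applied at stack start, or dynamically with a VARY TCPIP,,OBEYFILE command):

```
IPCONFIG SEGMENTATIONOFFLOAD
```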
Maximum and default send/receive buffer
TCP/IP send/receive buffers are used to improve general write/read throughput. This is especially important for public cloud or any far object storage with high latency.
Use send/receive buffer sizes of at least 2 MB, set in the TCPCONFIG statement of the TCP/IP profile:
```
TCPMAXRCVBUFRSIZE 2M
TCPMAXSENDBUFRSIZE 2M
TCPRCVBUFRSIZE 2M
TCPSENDBFRSIZE 2M
```
Maximum transmission unit (MTU) size
Every TCP/IP packet is limited by the MTU defined on the system. The z/OS default MTU value of 512 is very small and introduces unnecessary TCP/IP CPU overhead. Use an MTU of at least 1492 when writing to object storage.
Check with your network administrator whether jumbo frames can be utilized to further reduce the CPU overhead and improve throughput. Display the current MTU value using the commands:
| Command | Description |
|---|---|
| `TSO NETSTAT GATE` | The Pkt Sz column shows the MTU size for each configured route. Verify the MTU size used by the route to the object storage. If no specific route to your object storage exists, the Default route value is used. This value should be equal to or greater than 1492. |
| `TSO PING <object-storage-ip> (PMTU YES LENGTH 1400` | Verifies whether the entire path from this TCP/IP stack to the object storage supports frames of at least 1400 bytes. If the output includes `Ping #1 needs fragmentation`, contact your network administrator to resolve the issue. |
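If your routes are statically defined, the MTU can be raised per route in the BEGINROUTES block of the TCP/IP profile. A sketch, where the subnet and link name are illustrative assumptions:

```
BEGINROUTES
; Route to the object storage subnet over link OSALINK1 with MTU 1492
ROUTE 198.51.100.0/24  =  OSALINK1  MTU 1492
ENDROUTES
```

With dynamic routing (OMPROUTE), the MTU is instead controlled through the interface definitions and the OMPROUTE configuration.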