Configuring shared memory, resources, and NPIV
To configure BMC Cloud Lifecycle Management to provision IBM AIX LPARs, the cloud administrator must complete the tasks described in this topic. This topic includes the following sections:
- Overview of the support
- To create the BMC Server Automation NSH scripts
- To update the providers.json file
- To create the NPIV storage compute pool in BMC Cloud Lifecycle Management
- To create and configure disks for IBM Logical Partition (LPAR)
- To use shared memory or shared processor pools
- To create an option to add disks to the LPAR
- To create an option to remove disks from the LPAR
Overview of the support
Cloud administrators can configure service offerings that:
- Use a shared memory or shared CPU pool
- Add or remove disks using NPIV
- Reuse WWPN adapter pairs
The following sections describe the configuration tasks in BMC Server Automation and BMC Cloud Lifecycle Management to enable the support.
To create the BMC Server Automation NSH scripts
BMC Cloud Lifecycle Management uses N_Port ID Virtualization (NPIV) to perform resource management tasks (such as storage management) in IBM LPAR environments. To use NPIV, you must first set up NSH scripts in BMC Server Automation that can be called by BMC Cloud Lifecycle Management. You would then include the scripts in the providers.json file.
The following table describes the scripts you can set up:
Task | Required? | What the script does |
---|---|---|
SAN and SAN switch configuration | Yes | For NPIV support, you must create an NSH script in BMC Server Automation that fetches the World Wide Port Name (WWPN) numbers for the LPAR and performs all of the SAN and SAN switch configuration. |
Storage clean-up on SAN | No | An NPIV disk that re-uses an adapter cannot be deleted. You can optionally configure a clean-up script that is called after NPIV disk removal, a Remove Disk operation, or server decommission, to clean up the LUNs on the storage side. If you do not create the script, disks are removed from BMC Cloud Lifecycle Management and no clean-up script is invoked. |
The following sections describe how to create the NSH scripts in BMC Server Automation.
To create the script to fetch the WWPN number
- In the BMC Server Automation console, right-click the CSM_Scripts depot folder and select New > NSH Script from the pop-up menu.
- Complete the fields on the Add NSH Script - Script Options panel. Name the script BBSA_NPIV_STORAGE_CONFIG_SCRIPT. For information on completing the other fields, see the BMC Server Automation online technical documentation for the Script Options panel.
Complete the fields on the Add NSH Script - Parameters panel. Use the following parameters in your NSH script:
Parameter name | NSH parameter | Parameter Value | Comments |
---|---|---|---|
SERVER | ??TARGET.NAME?? | | BMC Server Automation resolves this property with the value [LPAR Name] sent from BMC Cloud Lifecycle Management and executes the script on the LPAR. |
LPAR Name (hostname) | $1 | VF-LPAR-01 | |
IP_ADDRESS | $2 | xx.xx.xx.xx | |
ADAPTER_IDs | $3 | 1,2,3,4 | Comma-separated list of adapter IDs. |
WWPNS | $5 | | Comma-separated list of WWPN pairs, one pair per adapter (in adapter sequence); within each pair, a colon separates the individual WWPNs. |
DISK_SIZES (MB) | $4 | 1000.0MB,1500.0MB,500.0MB,1200.0MB | Comma-separated list of sizes, one per adapter, in sequence. |
For information on completing the other fields, see the BMC Server Automation online technical documentation for the Parameters panel.
- On the Add NSH Script - Properties panel, click Next.
- On the Add NSH Script - Permissions panel, click Finish to close the wizard and save your changes.
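The parameter layout above can be sketched as an sh-compatible skeleton (NSH accepts POSIX shell syntax). This is a hypothetical illustration, not BMC-supplied code: the sample adapter IDs, sizes, and WWPN values are made up, and the actual SAN switch and storage array commands are site-specific and omitted.

```shell
# Hypothetical skeleton for BBSA_NPIV_STORAGE_CONFIG_SCRIPT.nsh.
# Sample defaults stand in for the values BMC Cloud Lifecycle Management
# would pass as $1..$5; replace the echo with your SAN/switch CLI calls.
LPAR_NAME="${1:-VF-LPAR-01}"
IP_ADDRESS="${2:-xx.xx.xx.xx}"
ADAPTER_IDS="${3:-1,2}"
DISK_SIZES="${4:-1000.0MB,1500.0MB}"
WWPNS="${5:-C050760A0001:C050760A0002,C050760A0003:C050760A0004}"

i=1
for adapter in $(echo "$ADAPTER_IDS" | tr ',' ' '); do
    # The Nth WWPN pair and the Nth disk size belong to the Nth adapter ID.
    pair=$(echo "$WWPNS"      | cut -d',' -f"$i")
    size=$(echo "$DISK_SIZES" | cut -d',' -f"$i")
    wwpn_a=$(echo "$pair" | cut -d':' -f1)
    wwpn_b=$(echo "$pair" | cut -d':' -f2)
    echo "zone adapter=$adapter size=$size wwpns=$wwpn_a/$wwpn_b"
    # ...site-specific SAN switch zoning / LUN masking commands go here...
    i=$((i + 1))
done
```

The loop relies on the sequencing rule stated in the Comments column: each comma-separated list is ordered by adapter, so one index walks all three lists in step.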
To configure a script to perform storage clean-up
New access attributes are added at the BMC Server Automation level so that you can configure a script to perform storage clean-up on the SAN after a Remove Disk operation or server decommission. The script is called to clean up the LUN for the FC disk when a Remove Disk operation or server decommission is performed. This configuration is optional; if the script is missing, disks are removed from BMC Cloud Lifecycle Management and no clean-up script is invoked.
- In the BMC Server Automation console, right-click the CSM_Scripts depot folder and select New > NSH Script from the pop-up menu.
- Complete the fields on the Add NSH Script - Script Options panel. Name the script BBSA_LPAR_NPIV_STORAGE_UNCONFIG_SCRIPT_NAME. For information on completing the other fields, see the BMC Server Automation online technical documentation for the Script Options panel.
Complete the fields on the Add NSH Script - Parameters panel. Use the following parameters in your NSH script:
Parameter name | NSH parameter | Parameter Value | Comments |
---|---|---|---|
LPAR Name (hostname) | $1 | VF-LPAR-01 | |
IP_ADDRESS | $2 | xx.xx.xx.xx | |
CLIENT LPAR ADAPTER_IDs | $3 | 1,2,3,4 | Comma-separated list of client LPAR adapter IDs. |
CLIENT LPAR FC ADAPTER WWPN | $5 | | Comma-separated list of client LPAR FC adapter WWPN pairs, one pair per adapter (in adapter sequence); within each pair, a colon separates the individual WWPNs. |
DISK_SIZE (MB) | $4 | 1000.0MB,1500.0MB,500.0MB,1200.0MB | Comma-separated list of sizes, one per adapter, in sequence. |
For information on completing the other fields, see the BMC Server Automation online technical documentation for the Parameters panel.
- On the Add NSH Script - Properties panel, click Next.
- On the Add NSH Script - Permissions panel, click Finish to close the wizard and save your changes.
To update the providers.json file
Update the providers.json file so that BMC Cloud Lifecycle Management calls the scripts you created in BMC Server Automation.
- Open the providers.json file on the computer running the Platform Manager (OSGi) server.
By default, you can find the providers.json file in the BMCInstallSoftware\BMCCloudLifeCycleManagement\Platform_Manager\configuration folder (Windows) or /opt/bmc/BMCCloudLifeCycleManagement/Platform_Manager/configuration (Linux).
- Search for the BBSA_LPAR_NPIV_STORAGE_CONFIG_SCRIPT_NAME parameter.
BBSA_LPAR_NPIV_STORAGE_CONFIG_SCRIPT_NAME is the NSH script name that you must run as part of SAN and SAN switch configuration. Set attributeValue to the NSH script that configures the SAN and SAN switch.
For example, the AAV is set here to BBSA_NPIV_STORAGE_CONFIG_SCRIPT:
}, {
"cloudClass" : "com.bmc.cloud.model.beans.AccessAttributeValue",
"accessAttribute" : {
"cloudClass" : "com.bmc.cloud.model.beans.AccessAttribute",
"datatype" : "STRING",
"guid" : "4fd5287f-3941-4f67-bc57-350344599036",
"isOptional" : true,
"isPassword" : false,
"modifiableWithoutRestart" : false,
"name" : "BBSA_LPAR_NPIV_STORAGE_CONFIG_SCRIPT_NAME"
},
"attributeValue" : "BBSA_NPIV_STORAGE_CONFIG_SCRIPT.nsh",
"guid" : "8b0b60d6-50a2-4458-8969-c8695823d736",
"name" : "BBSA_LPAR_NPIV_STORAGE_CONFIG_SCRIPT_NAME"
}, {
- Optionally, search for the BBSA_LPAR_NPIV_STORAGE_UNCONFIG_SCRIPT_NAME parameter.
This is the NSH script name that you can run as part of the storage clean-up on SAN. Set attributeValue to the NSH script.
For example, the BBSA_LPAR_NPIV_STORAGE_UNCONFIG_SCRIPT_NAME AAV is set here to cleanup_npiv_script:
}, {
"cloudClass" : "com.bmc.cloud.model.beans.AccessAttributeValue",
"accessAttribute" : {
"cloudClass" : "com.bmc.cloud.model.beans.AccessAttribute",
"datatype" : "STRING",
"guid" : "c1d89188-c919-4f48-acb9-268e8ae684ff",
"isOptional" : true,
"isPassword" : false,
"modifiableWithoutRestart" : false,
"name" : "BBSA_LPAR_NPIV_STORAGE_UNCONFIG_SCRIPT_NAME"
},
"attributeValue" : "cleanup_npiv_script",
"guid" : "50c5fac4-8ccb-432e-b7df-66722509576b",
"name" : "BBSA_LPAR_NPIV_STORAGE_UNCONFIG_SCRIPT_NAME"
}, {
- Search for the BBSA_LPAR_AUTO_NETBOOT parameter.
If the value is set to true, BBSA_LPAR_AUTO_NETBOOT automatically netboots the LPAR.
If the value is false, the netboot for the LPAR is not automated; instead, you must manually netboot the LPAR. Set attributeValue to true or false.
For example:
}, {
"cloudClass" : "com.bmc.cloud.model.beans.AccessAttributeValue",
"accessAttribute" : {
"cloudClass" : "com.bmc.cloud.model.beans.AccessAttribute",
"datatype" : "BOOLEAN",
"guid" : "d42d9856-d309-4f79-9ab2-99f6825d39de",
"isOptional" : true,
"isPassword" : false,
"modifiableWithoutRestart" : false,
"name" : "BBSA_LPAR_AUTO_NETBOOT"
},
"attributeValue" : "true",
"guid" : "afdd419f-f3cf-4572-b0c2-91820c7831ce",
"name" : "BBSA_LPAR_AUTO_NETBOOT"
}, {
- Save and close the file.
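After editing, you can sanity-check the values before restarting Platform Manager. The sketch below runs against a miniature stand-in fragment written to a temporary file (the real file is the providers.json path given above); the temp path and the one-entry fragment are illustrative only.

```shell
# Write a miniature stand-in for one access-attribute entry, then pull
# out its attributeValue the same way you would from the real file.
cat > /tmp/providers_sample.json <<'EOF'
{
  "name" : "BBSA_LPAR_AUTO_NETBOOT",
  "attributeValue" : "true"
}
EOF
netboot=$(grep -A1 '"name" : "BBSA_LPAR_AUTO_NETBOOT"' /tmp/providers_sample.json \
          | sed -n 's/.*"attributeValue" : "\(.*\)".*/\1/p')
echo "BBSA_LPAR_AUTO_NETBOOT=$netboot"
```

Note that in the real providers.json the "name" field appears both inside the accessAttribute object and at the AccessAttributeValue level, so a grep may match twice; checking the nearby attributeValue line by eye is usually enough.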
To create the NPIV storage compute pool in BMC Cloud Lifecycle Management
In BMC Cloud Lifecycle Management, create the NPIV storage compute pool and tag it accordingly by completing the following steps.
- From the BMC Cloud Lifecycle Management Administration console, click the vertical Workspaces menu on the left side of the window and select Resources.
- Under Quick Links on the left, click Compute Pools under the Compute section.
- Create the compute pool.
For more information, see:
To create and configure disks for IBM Logical Partition (LPAR)
Starting with BMC Cloud Lifecycle Management 4.1 Patch 2, you can create disks for an IBM Logical Partition (LPAR) and configure the Disk Provisioning Type for the shared storage pool.
Before you begin
You must have applied the hotfix on BMC Server Automation 8.5.00 Service Pack 1 Patch 5 Hotfix 1 (Build 8.5.01.304). To obtain the hotfix, contact BMC Support at http://www.bmc.com/support.
- You must have installed BMC Cloud Lifecycle Management 4.1 Patch 2.
For existing IBM frames that are already onboarded in BMC Cloud Lifecycle Management, you must perform a VC sync. This persists the IBM shared storage pool resources into the Cloud DB. You can then select these shared storage pools into the virtual disk repository pools.
When you newly onboard an IBM frame into BMC Cloud Lifecycle Management, the IBM shared storage pool resources are automatically persisted into the Cloud DB.
Create a new compute resource pool for the Virtual Disk Repository resource type and, from the list of available resources, select BMC_IBM_Cloud_SharedStoragePool.
After you add the shared storage pools into the virtual disk repository pools, you can perform the following activities:
- SOI provisioning
- DRO disks
- TRO disks (Can also be added to LPARs originally provisioned on Storage pool)
- Delete disks
The shared storage pool resources support thin and thick disk provisioning.
To configure Disk Provisioning Type for shared storage pool from BMC Cloud Lifecycle Management
By default, the thin disk provisioning type is used for disks created on a shared storage pool. Thin provisioning allocates storage as it is actually used; thick provisioning allocates the full requested capacity up front, whether or not it is used.
To enable thick provisioning disks for shared storage pool from BMC Cloud Lifecycle Management, perform the following steps:
- On the Cloud Platform Manager server, stop the BMC CSM service.
- Navigate to the <Platform_Manager_install_location>\BMCCloudLifeCycleManagement\Platform_Manager\configuration folder and open the providers.json file in edit mode.
Locate the BBSA_LPAR_PROVISIONING_TYPE AccessAttributeValue for the BBSA provider and change the value of the property to thick. The default value is thin.
- Restart the BMC CSM service.
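The change can be sketched on a stand-in copy of the relevant line (the real attribute lives in providers.json under the BBSA provider; the temp-file path here is only for the demo):

```shell
# Demo on a throwaway file: flip the BBSA_LPAR_PROVISIONING_TYPE value
# from the default "thin" to "thick".
echo '"attributeValue" : "thin",' > /tmp/prov_type_line
sed 's/"attributeValue" : "thin"/"attributeValue" : "thick"/' \
    /tmp/prov_type_line > /tmp/prov_type_line.new
ptype=$(sed -n 's/.*"attributeValue" : "\([a-z]*\)".*/\1/p' /tmp/prov_type_line.new)
echo "provisioning type: $ptype"
```

Because modifiableWithoutRestart is false for these attributes, the edit takes effect only after the BMC CSM service restart in the final step.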
To use shared memory or shared processor pools
If you want to use a shared memory pool, you must edit the providers.json file for your environment.
- For shared CPU, BMC uses the default shared CPU pool.
- For shared memory, BMC selects the shared memory paging VIOS based on the AAV specified in BMC Server Automation. The BBSA_LPAR_PRIMARY_VIOS_SELECTION AAV controls the selection of the paging VIOS. Possible values are first and random.
- If the AAV is set to first, then the first paging VIOS is set by default. Cloud does not set the secondary paging VIOS.
- If the AAV is set to random, then one VIOS is selected at random and set as the primary VIOS.
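The two policies above can be illustrated with a small sh sketch; vios1 and vios2 are hypothetical VIOS names, not BMC identifiers, and this is not the actual selection code.

```shell
# Illustrative only: how the two BBSA_LPAR_PRIMARY_VIOS_SELECTION
# policies pick a primary paging VIOS from a candidate list.
VIOS_LIST="vios1 vios2"
POLICY="first"   # or "random"
case "$POLICY" in
  first)
      PRIMARY=${VIOS_LIST%% *}   # first candidate; no secondary is set
      ;;
  random)
      n=$(echo "$VIOS_LIST" | wc -w)
      idx=$(awk -v n="$n" 'BEGIN { srand(); print int(rand() * n) + 1 }')
      PRIMARY=$(echo "$VIOS_LIST" | cut -d' ' -f"$idx")
      ;;
esac
echo "primary paging VIOS: $PRIMARY"
```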
- Open the providers.json file on the computer running the Platform Manager (OSGi) server.
By default, you can find the providers.json file in the BMCInstallSoftware\BMCCloudLifeCycleManagement\Platform_Manager\configuration folder (Windows) or /opt/bmc/BMCCloudLifeCycleManagement/Platform_Manager/configuration (Linux). - Search for the BBSA_LPAR_PRIMARY_VIOS_SELECTION parameter.
Set attributeValue to first or random.
For example, the following selection from the providers.json file sets the AAV to first:
}, {
"cloudClass" : "com.bmc.cloud.model.beans.AccessAttributeValue",
"accessAttribute" : {
"cloudClass" : "com.bmc.cloud.model.beans.AccessAttribute",
"datatype" : "STRING",
"guid" : "590ecff8-0cb9-40ca-9a40-65d5736e0c8c",
"isOptional" : true,
"isPassword" : false,
"modifiableWithoutRestart" : false,
"name" : "BBSA_LPAR_PRIMARY_VIOS_SELECTION"
},
"attributeValue" : "first",
"guid" : "3d650043-958b-4fc6-88e9-40ed9760905c",
"name" : "BBSA_LPAR_PRIMARY_VIOS_SELECTION"
}, {
- Save your changes.
To create an option to add disks to the LPAR
You can create a Request Definition (Delivery Requestable Offering or DRO, performed as part of the deployment) or a Post-Deploy Action (Transaction Requestable Offering or TRO, options that end-users can change on Day 2 of a deployment) to add a disk to the provisioned LPAR. The disks are added using NPIV.
To do so, see Configuring-end-user-Option-Choices-in-service-blueprints.
Note the following considerations when creating an Add Disk option for IBM AIX LPAR environments:
- Multi-path I/O is supported for disks added using NPIV; however, it is not supported for existing disks or for the disk that is provisioned on the storage pool. For multi-path I/O, you must add two disks in a single TRO option: add one disk with the actual disk size and the other with a size of zero MB. The actual disk to be added must be tagged with the primary path, and the other disk with the secondary path (for example, fcs0 for the primary path and fcs1 for the secondary path, with 0 MB as the size). For these two disks, the client/server adapter IDs and a pair of WWPNs are generated. The generated adapter IDs and WWPNs are then passed to the NPIV script, which performs the LUN masking and storage-side configuration on the SAN.
- You can re-use the fibre channel adapter of an existing disk for a new disk. Re-using a WWPN is supported only when adding a disk as a TRO option. The match for the fibre channel adapter is made on the basis of the datastore: the disk to be added must have a BMC Cloud Lifecycle Management datastore that matches that of an existing fibre channel adapter (attached to fcs0 or fcs1), which is then picked as the adapter to re-use. If there is no matching fibre channel adapter, a new fibre channel adapter is created. To re-use the WWPN pair with the TRO Add Disk option, you must configure the BmcLparReuseClientFCAdapter deployment parameter, as described in Predefined-IBM-LPAR-parameters. Re-using a WWPN is not supported with the DRO Add Disk option.
- Adding an NPIV disk creates a client/server adapter along with a pair of WWPNs. LUN masking for the generated adapters and WWPNs must be performed by the storage administrator, either manually or through automation scripts. The storage administrator can also configure an NPIV script, called from the providers.json file after the disk is added, to perform the LUN masking and other storage-related configuration.
- For the TRO Add Disk option choice, you can specify the disk to be multi-path and re-use WWPN.
A Day 2 Add Disk (for rootvg or fibre channel) operation might fail with the following error:
No free Slots available.
This failure is due to the default value of the maxvirtualslot parameter (a limit of 10 slots). This property is configurable; you can change it by updating the RSCD_DIR/daal/Implementation/BMC_IBM_VirtualSystemEnabler_aix5/aix5/properties.props file on the LPAR proxy host. Set the parameter to a value that is applicable for your environment, for example maxvirtualslot=50.
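The edit can be sketched as follows; /tmp/properties.props stands in for the real properties.props path on the LPAR proxy host, and the file is seeded with the default value purely for the demo.

```shell
# Demo on a stand-in file: raise maxvirtualslot from the default 10 to 50.
PROPS=/tmp/properties.props
echo 'maxvirtualslot=10' > "$PROPS"
sed 's/^maxvirtualslot=.*/maxvirtualslot=50/' "$PROPS" > "$PROPS.new" \
    && mv "$PROPS.new" "$PROPS"
limit=$(sed -n 's/^maxvirtualslot=//p' "$PROPS")
echo "maxvirtualslot is now $limit"
```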
To create an option to remove disks from the LPAR
You can also create a DRO or TRO option that end-users can use to remove a disk from the provisioned LPAR, using NPIV.
To create the option, see Configuring-end-user-Option-Choices-in-service-blueprints.
An NPIV disk that re-uses an adapter cannot be deleted. Optionally, you can create a clean-up script that is called after NPIV disk removal to clean up the LUNs on the storage side. See To configure a script to perform storage clean-up.