Storage Management module storage container workflows

You can use the workflows contained in the Storage Container folder to manage storage containers.

Note

In EMC environments, the storage container concept maps to a storage group.



This section describes the following Storage Management module storage container workflows, their inputs, and their outputs:

Add Host to Storage Container workflow description, inputs, and outputs

This workflow adds a host to a storage container.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.

Add Host to Storage Container workflow inputs



Sample Add Host to Storage Container workflow storage input XML

<storage>
  <storage-system-name>Storage system1</storage-system-name>
  <storage-group-name>Storage grp1</storage-group-name>
  <wwn>10000000C96EB8DA</wwn>
  <disconnect-host-from-other-group>no</disconnect-host-from-other-group>
</storage>


Add LUN to Storage Container workflow description, inputs, and outputs

This workflow adds a LUN to a storage container.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.



Add LUN to Storage Container workflow inputs



Sample Add LUN to Storage Container workflow storage input XML

<storage>
  <storage-system-name>Storage system1</storage-system-name>
  <storage-group-name>Storage grp1</storage-group-name>
  <lun-name>Lun02</lun-name>
</storage>


Create Storage Container from Template workflow description, inputs, and outputs

This workflow creates a storage container from a template.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.

Create Storage Container from Template workflow inputs



Sample Create Storage Container from Template workflow storage XML

<storage>
  <storage-container-contact>test@globallogic.com</storage-container-contact>
  <storage-container-description>test desc</storage-container-description>
  <storage-container-name>testds1</storage-container-name>
  <assume-confirmation>true</assume-confirmation>
  <storage-container-metadata>
    <dfm-metadata-field>
      <field-name>test field</field-name>
      <field-value>test value</field-value>
    </dfm-metadata-field>
  </storage-container-metadata>
  <storage-container-owner>test</storage-container-owner>
  <requires-non-disruptive-restore>false</requires-non-disruptive-restore>
  <is-suspended>false</is-suspended>
  <is-application-data>true</is-application-data>
  <application-name>Name of the application</application-name>
  <application-server-name>Name of the server</application-server-name>
  <application-version>2.1</application-version>
  <is-application-responsible-for-primary-backup>true</is-application-responsible-for-primary-backup>
  <is-application-managing-primary-backup-retention>true</is-application-managing-primary-backup-retention>
  <carry-primary-backup-retention>true</carry-primary-backup-retention>
  <volume-qtree-name-prefix>bmc</volume-qtree-name-prefix>
  <group-name-or-id>new group</group-name-or-id>
  <online-migration>true</online-migration>
  <storage-service-name-or-id>Gold</storage-service-name-or-id>
  <storage-set-details>
    <storage-set-info>
      <dataset-access-details>
        <ip-address>172.16.49.151</ip-address>
        <netmask>255.0.0.0</netmask>
      </dataset-access-details>
      <dp-node-name>Primary data</dp-node-name>
      <server-name-or-id>test filer</server-name-or-id>
      <timezone-name>GMT</timezone-name>
    </storage-set-info>
    <storage-set-info>
      <dataset-access-details>
        <ip-address>172.16.49.151</ip-address>
        <netmask>255.0.0.0</netmask>
      </dataset-access-details>
      <dp-node-name>Primary data</dp-node-name>
      <server-name-or-id>test filer</server-name-or-id>
      <timezone-name>GMT</timezone-name>
    </storage-set-info>
  </storage-set-details>
</storage>


Create Storage Container workflow description, inputs, and outputs

This workflow creates a storage container.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.



Create Storage Container workflow inputs



Sample Create Storage Container workflow NetApp storage input XML

<storage>
  <storage-container-contact>test@globallogic.com</storage-container-contact>
  <storage-container-description>test desc</storage-container-description>
  <storage-container-name>testds1</storage-container-name>
  <storage-container-metadata>
    <dfm-metadata-field>
      <field-name>test field</field-name>
      <field-value>test value</field-value>
    </dfm-metadata-field>
  </storage-container-metadata>
  <storage-container-access-details>
    <ip-address>172.16.49.160</ip-address>
    <netmask>255.255.255.0</netmask>
  </storage-container-access-details>
  <storage-container-owner>test</storage-container-owner>
  <protection-policy-name-or-id>Back up</protection-policy-name-or-id>
  <provisioning-policy-name-or-id>policy_sdp</provisioning-policy-name-or-id>
  <requires-non-disruptive-restore>false</requires-non-disruptive-restore>
  <is-suspended>false</is-suspended>
  <timezone-name>GMT</timezone-name>
  <volume-qtree-name-prefix>bmc</volume-qtree-name-prefix>
  <is-application-data>true</is-application-data>
  <application-name>Name of the application</application-name>
  <application-server-name>Name of the server</application-server-name>
  <application-version>2.1</application-version>
  <is-application-responsible-for-primary-backup>true</is-application-responsible-for-primary-backup>
  <is-application-managing-primary-backup-retention>true</is-application-managing-primary-backup-retention>
  <carry-primary-backup-retention>true</carry-primary-backup-retention>
</storage>



Sample Create Storage Container workflow EMC storage input XML

<storage>
  <timeout-secs>100</timeout-secs>
  <storage-container-name>Storage Group 1</storage-container-name>
</storage>


Delete Member of Storage Container workflow description, inputs, and outputs

This workflow deletes a member of a specified storage container.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.
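
No input table or sample XML is included for this workflow. As an illustrative sketch only, based on the Remove Host and Remove LUN samples in this section, the storage input would likely identify the storage system, the storage container, and the member to delete; the element names below follow that pattern and are assumptions, not confirmed inputs for this workflow.

<storage>
  <!-- Hypothetical sketch: element names are assumed from the other samples in this section -->
  <storage-system-name>Storage system1</storage-system-name>
  <storage-group-name>Storage grp1</storage-group-name>
  <!-- Member to delete, shown here as a LUN; the actual member element may differ -->
  <lun-name>Lun02</lun-name>
</storage>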


Delete Storage Container workflow description, inputs, and outputs

This workflow deletes a storage container.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.

Delete Storage Container workflow inputs



Sample Delete Storage Container workflow EMC storage input XML

<storage>
  <storage-system-name>CLARiiON+CKM00083900053</storage-system-name>
  <storage-group-name>S Group 4</storage-group-name>
</storage>
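
Only an EMC input is shown for this workflow. As a hedged sketch only, a NetApp-style input would likely identify the container by name or ID, as the other NetApp samples in this section do; the element below is an assumption, not a confirmed input.

<storage>
  <!-- Hypothetical NetApp-style sketch: element name assumed from other NetApp samples in this section -->
  <storage-container-name-or-id>testds1</storage-container-name-or-id>
</storage>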


Remove Host from Storage Container workflow description, inputs, and outputs

This workflow removes a host from a given storage container.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.



Sample Remove Host from Storage Container workflow storage input XML

<storage>
  <storage-system-name>Storage system1</storage-system-name>
  <storage-group-name>Storage grp1</storage-group-name>
  <wwn>hostName</wwn>
</storage>


Remove LUN from Storage Container workflow description, inputs, and outputs

This workflow removes a LUN from a specified storage container.

In EMC Symmetrix environments, the last LUN in a storage container cannot be removed.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.



Sample Remove LUN from Storage Container workflow storage input XML

<storage>
  <storage-system-name>Storage system1</storage-system-name>
  <storage-group-name>Storage grp1</storage-group-name>
  <lun-name>Lun02</lun-name>
</storage>


Update NAS Node workflow description, inputs, and outputs

This workflow updates a NAS node.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.



Sample Update NAS Node workflow storage input XML

<storage>
  <storage-container-name-or-id>test1</storage-container-name-or-id>
  <force>true</force>
  <dp-node-name>Primary data</dp-node-name>
  <online-migration>true</online-migration>
  <resourcepool-name-or-id>Gold</resourcepool-name-or-id>
  <provisioning-policy-name-or-id>Gold</provisioning-policy-name-or-id>
  <relinquish-vfiler>true</relinquish-vfiler>
  <vfiler-name-or-id>test</vfiler-name-or-id>
  <dataset-access-details>
    <ip-address>172.16.49.104</ip-address>
    <netmask>255.255.255.0</netmask>
  </dataset-access-details>
  <dataset-export-info>
    <dataset-export-protocol>nfs</dataset-export-protocol>
    <dataset-nfs-export-setting>
      <nfs-protocol-version>v3</nfs-protocol-version>
      <anonymous-access-user>65534</anonymous-access-user>
      <disable-setuid>true</disable-setuid>
      <is-readonly-for-all-hosts>false</is-readonly-for-all-hosts>
      <is-readwrite-for-all-hosts>true</is-readwrite-for-all-hosts>
      <nfs-security-flavors>
        <nfs-security-flavor>krb5</nfs-security-flavor>
        <nfs-security-flavor>krb5p</nfs-security-flavor>
        <nfs-security-flavor>sys</nfs-security-flavor>
      </nfs-security-flavors>
      <nfs-export-ro-hosts>
        <nfs-export-host>
          <is-an-exception>false</is-an-exception>
          <hostname>172.16.49.104</hostname>
        </nfs-export-host>
      </nfs-export-ro-hosts>
      <nfs-export-root-hosts>
        <nfs-export-host>
          <is-an-exception>false</is-an-exception>
          <hostname>172.16.49.104</hostname>
        </nfs-export-host>
      </nfs-export-root-hosts>
      <nfs-export-rw-hosts>
        <nfs-export-host>
          <is-an-exception>false</is-an-exception>
          <hostname>172.16.49.104</hostname>
        </nfs-export-host>
      </nfs-export-rw-hosts>
    </dataset-nfs-export-setting>
  </dataset-export-info>
  <timezone-name>GMT</timezone-name>
</storage>

Sample Update NAS Node workflow storage input XML for the CIFS use case

<storage>
  <storage-container-name-or-id>test1</storage-container-name-or-id>
  <force>true</force>
  <dp-node-name>Primary data</dp-node-name>
  <online-migration>true</online-migration>
  <resourcepool-name-or-id>Gold</resourcepool-name-or-id>
  <provisioning-policy-name-or-id>Gold</provisioning-policy-name-or-id>
  <relinquish-vfiler>true</relinquish-vfiler>
  <vfiler-name-or-id>test</vfiler-name-or-id>
  <dataset-access-details>
    <ip-address>172.16.49.104</ip-address>
    <netmask>255.255.255.0</netmask>
  </dataset-access-details>
  <dataset-export-info>
    <dataset-export-protocol>cifs</dataset-export-protocol>
    <dataset-cifs-export-setting>
      <cifs-domain>synapse.com</cifs-domain>
      <dataset-cifs-share-permissions>
        <dataset-cifs-share-permission>
          <cifs-username>Everyone</cifs-username>
          <permission>full_control</permission>
        </dataset-cifs-share-permission>
        <dataset-cifs-share-permission>
          <cifs-username>test</cifs-username>
          <permission>read</permission>
        </dataset-cifs-share-permission>
      </dataset-cifs-share-permissions>
    </dataset-cifs-export-setting>
  </dataset-export-info>
  <timezone-name>GMT</timezone-name>
</storage>


Update SAN Node workflow description, inputs, and outputs

This workflow updates a SAN node.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.



Sample Update SAN Node workflow storage input XML

<storage>
  <storage-container-name-or-id>test1</storage-container-name-or-id>
  <force>true</force>
  <dp-node-name>Primary data</dp-node-name>
  <online-migration>true</online-migration>
  <resourcepool-name-or-id>Gold</resourcepool-name-or-id>
  <provisioning-policy-name-or-id>Gold</provisioning-policy-name-or-id>
  <relinquish-vfiler>true</relinquish-vfiler>
  <vfiler-name-or-id>test</vfiler-name-or-id>
  <dataset-access-details>
    <ip-address>172.16.49.104</ip-address>
    <netmask>255.255.255.0</netmask>
  </dataset-access-details>
  <dataset-export-info>
    <dataset-export-protocol>fcp</dataset-export-protocol>
    <dataset-lun-mapping-info>
      <igroup-os-type>windows</igroup-os-type>
      <lun-mapping-initiators>
        <lun-mapping-initiator>
         <hostname>172.16.49.104</hostname>
         <initiator-id>aaaaaaaaaaaaaaaa</initiator-id>
        </lun-mapping-initiator>
      </lun-mapping-initiators>
    </dataset-lun-mapping-info>
  </dataset-export-info>
  <timezone-name>GMT</timezone-name>
</storage>


Update Storage Container Node workflow description, inputs, and outputs

This workflow updates a storage container node.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.



Sample Update Storage Container Node workflow storage input XML

<storage>
  <storage-container-name-or-id>test1</storage-container-name-or-id>
  <force>true</force>
  <dp-node-name>Primary data</dp-node-name>
  <online-migration>true</online-migration>
  <resourcepool-name-or-id>Gold</resourcepool-name-or-id>
  <provisioning-policy-name-or-id>Gold</provisioning-policy-name-or-id>
  <relinquish-vfiler>false</relinquish-vfiler>
  <vfiler-name-or-id>test</vfiler-name-or-id>
  <storage-container-access-details>
    <ip-address>172.16.49.104</ip-address>
    <netmask>255.255.255.0</netmask>
  </storage-container-access-details>
  <dataset-export-info>
    <dataset-export-protocol>nfs</dataset-export-protocol>
    <dataset-nfs-export-setting>
      <nfs-protocol-version>v3</nfs-protocol-version>
      <anonymous-access-user>65534</anonymous-access-user>
      <disable-setuid>true</disable-setuid>
      <is-readonly-for-all-hosts>false</is-readonly-for-all-hosts>
      <is-readwrite-for-all-hosts>true</is-readwrite-for-all-hosts>
      <nfs-security-flavors>
        <nfs-security-flavor>krb5</nfs-security-flavor>
        <nfs-security-flavor>krb5p</nfs-security-flavor>
        <nfs-security-flavor>sys</nfs-security-flavor>
      </nfs-security-flavors>
      <nfs-export-root-hosts>
        <nfs-export-host>
          <is-an-exception>false</is-an-exception>
          <hostname>172.16.49.104</hostname>
        </nfs-export-host>
      </nfs-export-root-hosts>
      <nfs-export-rw-hosts>
        <nfs-export-host>
          <is-an-exception>false</is-an-exception>
          <hostname>172.16.49.104</hostname>
        </nfs-export-host>
      </nfs-export-rw-hosts>
    </dataset-nfs-export-setting>
  </dataset-export-info>
  <timezone-name>GMT</timezone-name>
</storage>


Update Storage Container workflow description, inputs, and outputs

This workflow updates a storage container.

This workflow supports dynamic targeting. For details, see About dynamic target support in OA Storage Management.

Update Storage Container workflow inputs



Sample Update Storage Container workflow storage input XML

<storage>
  <storage-container-contact>test@bmc.com</storage-container-contact>
  <storage-container-description>test desc</storage-container-description>
  <storage-container-name-or-id>testds1</storage-container-name-or-id>
  <storage-container-metadata>
   <dfm-metadata-field>
    <field-name>test field</field-name>
    <field-value>test value</field-value>
   </dfm-metadata-field>
  </storage-container-metadata>
  <storage-container-owner>test</storage-container-owner>
  <protection-policy-name-or-id>Back up</protection-policy-name-or-id>
  <requires-non-disruptive-restore>false</requires-non-disruptive-restore>
  <is-suspended>false</is-suspended>
  <is-dp-ignored>true</is-dp-ignored>
  <check-protection-policy-on-commit>true</check-protection-policy-on-commit>
  <volume-qtree-name-prefix>newbmc</volume-qtree-name-prefix>
  <is-application-data>true</is-application-data>
  <application-name>Name of the application</application-name>
  <application-server-name>Name of the server</application-server-name>
  <application-version>2.1</application-version>
  <is-application-responsible-for-primary-backup>true</is-application-responsible-for-primary-backup>
  <is-application-managing-primary-backup-retention>true</is-application-managing-primary-backup-retention>
  <carry-primary-backup-retention>true</carry-primary-backup-retention>
</storage>

