
Create Dataset With Storage Service operation

The Create Dataset With Storage Service operation creates a dataset with the specified storage service.

The following table describes the elements for this request.


Adapter request elements for Create Dataset With Storage Service operation

Element

Definition

Required

<operation-name>

Specifies the operation name: create-dataset-with-storage-service

Yes

<arguments>

Specifies a list of arguments required for the operation

Yes

<targets>

Contains the parent XML element for the <target> element, which specifies the dynamic targets

Conditional; required if the adapter configuration is empty in Grid Manager

<target>

Specifies the dynamic targets; it is the child XML element of the <targets> element

Using dynamic targets, you can define connection information for a remote host in an adapter request. This capability enables you to run an adapter request without configuring the adapter in Grid Manager, by specifying the configuration information in the adapter request itself.

This XML element can have <host>, <user-name>, <password>, <protocol>, and <port> as its child elements.

Notes


  • An adapter configuration specified by using request-level dynamic targets takes precedence over Grid Manager level configuration information.
  • If you specify the request-level dynamic target by using <targets>, the request ignores the <target> element that is a child of the <arguments> element.

Conditional; required if <targets> is present in the adapter request

<host>

Specifies the host name or IP address of the server on which NetApp DataFabric Manager is running

Conditional; required if <targets> is present in the adapter request

<user-name>

Specifies the user name required to log on to the NetApp DataFabric Manager

Conditional; required if <targets> is present in the adapter request

<password>

Specifies the password that corresponds to the <user-name>

The <password> element can contain an encryption-type attribute. The encryption-type attribute indicates whether the password specified is encrypted.

Valid values for encryption-type attribute: Base64, Plain (default)

Conditional; required if <targets> is present in the adapter request

<protocol>

Specifies the communication protocol used by the adapter

Valid values: http (default), https

No

<port>

Specifies the port on which NetApp DataFabric Manager is enabled

Default values: 8088 (http), 8488 (https)

No

<target>

Specifies the Grid Manager adapter configuration to use for the request; a child XML element of the <arguments> element

You can use this element to specify the connection information for a DFM server. You can specify a comma-separated list of configuration names to execute the request simultaneously on all the DFM servers identified by those names (see the example after this table).

Valid values:

  • The values specified for the <target> XML element must be the same as the values that have been specified for the name attribute of the <config> element in the Grid Manager adapter configuration.
  • You can provide "ALL" as the value. In this case, the request is executed on all the DFM servers defined in the Grid Manager adapter configuration.
  • You can specify multiple comma-separated names to allow for simultaneous execution of requests across multiple DFM servers.
  • You can skip this element. In this case, the request is executed on the first defined Grid Manager adapter configuration.
  • You can leave this element empty. In this case, the request is executed on the first defined Grid Manager adapter configuration.

    Note

    If you specify a request-level dynamic target by using <targets>, the request ignores this element.

No

<dataset-contact>

Specifies the contact details for the data set, such as the owner's email address

No

<dataset-description>

Specifies the description of the new data set, up to 255 characters

No

<dataset-metadata>

Specifies the opaque metadata for the data set

Metadata is usually set and interpreted by an application that is using the data set. DFM does not look into the contents of the metadata.

No

<field-name>

Specifies the name of the metadata field

Field names are up to 255 characters in length and are case-insensitive.

<field-name> is required if <dataset-metadata> is present.

No

<field-value>

Specifies arbitrary, user-defined data expressed as a string

The string is opaque to the server and must not exceed 16384 (16 K) characters.

<field-value> is required if <dataset-metadata> is present.

No

<dataset-name>

Specifies the name of the new data set

Yes

<assume-confirmation>

Specifies whether confirmation is assumed for all resolvable conformance actions that require user confirmation

If the value is TRUE, all conformance actions that require user confirmation are executed as if confirmation is already granted.

If the value is FALSE, all conformance actions that require user confirmation are not executed.

Valid values: true, false (default)

No

<dataset-owner>

Specifies the owner of the data set, up to 255 characters

No

<is-suspended>

Specifies whether the dataset is suspended for all automated actions

If the value is TRUE, the data set is suspended for all automated actions (data protection and conformance checking of the data set).

Valid values: true, false (default)

No

<protection-policy-name-or-id>

Specifies the name or identifier of the protection policy to associate with this data set

The dataprotection license is required for this input.

No

<provisioning-policy-name-or-id>

Specifies the name or identifier of the provisioning policy to be associated with the primary node of the dataset

The members of the primary node are provisioned based on this policy. After the provisioning policy is associated with the data set node, the storage in the node is periodically monitored for conformance with the policy.

No

<requires-non-disruptive-restore>

Specifies whether the dataset is configured to enable non-disruptive restores from the backup destinations

Valid values: true, false (default)

No

<volume-qtree-name-prefix>

Specifies the prefix for volume and qtree names, up to 60 characters

The allowed characters are a to z, A to Z, 0 to 9, ' ' (space), . (period), _ (underscore), and - (hyphen).

If any other characters are included, the request gives an error.

No

<storage-service-name-or-id>

Specifies the name or object identifier of a storage service object

Yes

<online-migration>

Specifies whether the migration cutover must be non-disruptive

By default, the migration is assumed to be disruptive. This applies only to the vFiler unit to be created or attached to the primary node of the dataset. If this element is provided, either <dataset-access-details> or <server-name-or-id> must be provided in the <storage-set-info> element for the primary node.

Valid values: true, false (default)

No

<is-application-data>

Specifies whether the data set is an application data set managed by an external application

Since backups must be coordinated between the application and DFM, transfers move backups between nodes instead of creating new backups. Policies for which this behavior is not supported may not be assigned to application data sets.

You cannot convert a non-application data set to an application data set.

Valid values: true, false (default)

No

<application-info>

Contains information about the application that manages this dataset

This input is used only if <is-application-data> is TRUE.

No

<application-name>

Specifies the name of an application, up to 255 characters

For example: "SnapManager for Oracle"

Conditional; required if <is-application-data> is TRUE

<application-server-name>

Specifies the name of the server where the application is running, up to 255 characters

This is the name of the host server, rather than the name of the client application.

Conditional; required if <is-application-data> is TRUE

<application-version>

Specifies the version of an application, up to 255 characters

For example: "2.1"

Conditional; required if <is-application-data> is TRUE

<is-application-responsible-for-primary-backup>

Specifies whether the application is responsible for taking primary backups

The DFM creates primary backup versions only if this option is FALSE.

Valid values: true (default), false

Conditional; required if <is-application-data> is TRUE

<carry-primary-backup-retention>

Specifies whether the retention type of the primary backup is assigned to its replicas on the other nodes of the dataset

This input is used only if <is-application-data> is TRUE. If this input is TRUE, the retention type of the primary backup is assigned to its replicas on the other nodes of the dataset.

An exception is made when the replication is started by the dp-backup-start API with retention-type specified. In that case, the retention type specified in dp-backup-start is assigned to the replicas, overriding the retention type of the primary backup.

If a scheduled event starts the replication, the retention type specified in the schedule event is ignored. If this input is FALSE, the retention type of the primary backup is ignored when assigning the retention type to its replicas. Depending on how the backup transfer job was started, the retention type specified either in the dp-backup-start API or in the schedule is assigned to the replica of the primary backup.

When this input is FALSE, replicas of the primary backup may get an undesired retention type if the schedules are not configured very carefully.

Recommendation

New users of the API must always set this input to TRUE.



Note that even if the retention type of primary and secondary backups is the same, the retention duration may be different for them. The retention duration is specified for each node in the data protection policy. See dp-policy-node-info for more details.

Default value: TRUE

No

<is-application-managing-primary-backup-retention>

Specifies whether the application manages the retention of primary backups

If this input is TRUE, Protection Manager does not enforce the retention settings in the policy on the primary backups. The application is responsible for deleting primary backups, possibly by invoking the dp-backup-version-delete API.

If this input is FALSE, Protection Manager deletes the primary backups according to the retention settings specified in the policy.

Valid values: true, false (default)

No

<storage-set-details>

Specifies the configuration details for each storage set

The <storage-set-details> can contain multiple <storage-set-info> elements.

No

<storage-set-info>

Contains information about one storage set

No

<dataset-access-details>

Specifies the details of the vFiler unit to be created, through which the dataset members provisioned for this node are exported

This element can be specified only for the primary node. You cannot specify both <server-name-or-id> and <dataset-access-details> for the same node.

No

<ip-address>

Specifies the IP address in dotted decimal format (for example, 192.111.11.11)

The length of this string cannot be more than 16 characters. This element is required if <dataset-access-details> is being specified.

No

<netmask>

Specifies the netmask for the IP address, in dotted decimal notation

As an input, this element is valid only when <ip-address> is also provided in <dataset-access-details>. Provisioning Manager creates a vFiler unit whose IP address is set to the <ip-address> value and binds it to an interface with this netmask.

The <netmask> is required if the <dataset-access-details> element is provided.

No

<dp-node-name>

Specifies the name of the node in the data protection policy

The <dp-node-name> element must exactly match the name of one of the nodes in the data protection policy that is currently assigned to the storage service. The <dp-node-name> element can be absent if no protection policy is assigned to the storage service, in which case, the details are assumed for the root storage set.

No

<server-name-or-id>

Specifies the name or identifier of the vFiler unit to be attached to the node

If a vFiler unit is attached, then all members provisioned in this node are exported over the vFiler unit.

You cannot specify both the <server-name-or-id> and <dataset-access-details> for a node.

No

<timezone-name>

Specifies the timezone to assign to the node

If specified, the value must be a timezone-name returned by <timezone-list-info-iter-next>. If no timezone is assigned, then the default system timezone is used.

No
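
The preceding table shows that the <target> element can name one or more Grid Manager adapter configurations instead of using request-level dynamic targets. The following fragment is a sketch only; it assumes that two adapter configurations named DFM_Config1 and DFM_Config2 (hypothetical names) are defined in Grid Manager:

    <arguments>
      <!-- DFM_Config1 and DFM_Config2 are hypothetical configuration names -->
      <target>DFM_Config1,DFM_Config2</target>
      <dataset-name>Dataset1</dataset-name>
      <storage-service-name-or-id>QA-SS-1</storage-service-name-or-id>
    </arguments>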

A sample adapter request for this operation is given in the following figure.

Sample adapter request for Create Dataset With Storage Service operation

<netapp-storage-request>
    <operation-name>create-dataset-with-storage-service</operation-name>
    <arguments>
      <targets>
        <target>
          <host>Server181</host>
          <user-name>username</user-name>
          <password encryption-type = "Base64">cGFzc3dvcmQ=</password>
          <protocol>http</protocol>
          <port>8088</port>
        </target>
      </targets>
      <target />
      <dataset-contact />
      <dataset-description />
      <dataset-name>Dataset1</dataset-name>
      <assume-confirmation>true</assume-confirmation>
      <dataset-metadata>
        <dfm-metadata-field>
          <field-name />
          <field-value />
        </dfm-metadata-field>
      </dataset-metadata>
      <dataset-owner />
      <requires-non-disruptive-restore>false</requires-non-disruptive-restore>
      <is-suspended>false</is-suspended>
      <is-application-data>false</is-application-data>
      <application-name />
      <application-server-name />
      <application-version />
      <is-application-responsible-for-primary-backup>false</is-application-responsible-for-primary-backup>
      <is-application-managing-primary-backup-retention>false</is-application-managing-primary-backup-retention>
      <carry-primary-backup-retention>false</carry-primary-backup-retention>
      <volume-qtree-name-prefix>QA-V</volume-qtree-name-prefix>
      <group-name-or-id />
      <storage-service-name-or-id>QA-SS-1</storage-service-name-or-id>
      <storage-set-details>
        <storage-set-info>
          <dp-node-name>Primary data</dp-node-name>
          <timezone-name />
        </storage-set-info>
      </storage-set-details>
    </arguments>
</netapp-storage-request>
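
The sample request above leaves several optional elements empty. As a sketch only, a populated <dataset-metadata> block and a <storage-set-info> element that requests a new vFiler unit for the primary node might look like the following fragment (the field name, field value, IP address, and netmask are hypothetical):

      <dataset-metadata>
        <dfm-metadata-field>
          <!-- hypothetical application-defined metadata -->
          <field-name>owner-team</field-name>
          <field-value>storage-ops</field-value>
        </dfm-metadata-field>
      </dataset-metadata>
      <storage-set-details>
        <storage-set-info>
          <dp-node-name>Primary data</dp-node-name>
          <dataset-access-details>
            <!-- hypothetical IP address and netmask for the new vFiler unit -->
            <ip-address>192.0.2.25</ip-address>
            <netmask>255.255.255.0</netmask>
          </dataset-access-details>
        </storage-set-info>
      </storage-set-details>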


A sample adapter response for this operation is given in the following figure.

Sample adapter response for Create Dataset With Storage Service operation

<netapp-storage-response>
  <metadata>
    <status>success</status>
    <response-count>1</response-count>
  </metadata>
  <responses>
    <response>
      <metadata>
        <target>Server181</target>
        <status>success</status>
        <count>1</count>
      </metadata>
      <output>
        <dataset-id>255118</dataset-id>
      </output>
    </response>
  </responses>
</netapp-storage-response>
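
Based on the Required column in the preceding table, a minimal request needs only the operation name, the <arguments> element, the dataset name, and the storage service name or identifier, provided that an adapter configuration is already defined in Grid Manager (otherwise <targets> is also required). The following fragment is a sketch of such a request, reusing the names from the sample above:

<netapp-storage-request>
  <operation-name>create-dataset-with-storage-service</operation-name>
  <arguments>
    <dataset-name>Dataset1</dataset-name>
    <storage-service-name-or-id>QA-SS-1</storage-service-name-or-id>
  </arguments>
</netapp-storage-request>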