OCUMClassicAPI - Contains the definitions and descriptions of the API bindings for OnCommand Unified Manager server 5.2 or earlier
use NaServer;

my $s = NaServer->new($server, 1, 0);        # create NaServer (server context)
$s->set_admin_user('admin', 'password');     # provide username and password
$s->set_server_type('DFM');                  # set the server type to DFM for OnCommand Unified Manager 5.2 or earlier

eval {
    my $output = $s->dfm_about();            # use binding for dfm-about API
    print "OCUM server version is: $output->{version}\n";  # extract the required field from the output
};
if ($@) {                                    # check for any exception
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;  # parse out error reason and error code
    print "Error Reason: $error_reason Code: $error_code\n";
}
NetApp Manageability SDK 5.3.1 provides support for Perl API bindings for both Data ONTAP APIs and OnCommand Unified Manager APIs.
The Perl API bindings libraries contain interfaces to establish a connection with either the Data ONTAP server or the OnCommand Unified Manager server.
By using these libraries, you can create Perl applications to access and manage the Data ONTAP server or OnCommand Unified Manager server.
NetApp Manageability SDK 5.3.1 Perl API bindings provide a runtime library NaServer.pm, which is available at <installation_folder>/lib/perl/NetApp.
This library file enables you to establish a server connection, send requests and receive responses, and interpret error messages.
Each API binding can be called as a subroutine of the NaServer module, which in turn invokes the corresponding Data ONTAP or OnCommand Unified Manager API.
The aggregate-list-info-iter-* set of APIs is used to retrieve the list of aggregates. aggregate-list-info-iter-end tells the DFM station that the temporary store used to support the aggregate-list-info-iter-next API for the given tag is no longer needed.
Inputs
Outputs
For more documentation, see aggregate-list-info-iter-start. The aggregate-list-info-iter-next API iterates over the aggregates held in the temporary store created by the aggregate-list-info-iter-start API.
Inputs
- tag => string
Tag from a previous aggregate-list-info-iter-start. It's an opaque handle used by the DFM station to identify the temporary store created by aggregate-list-info-iter-start.
Outputs
The aggregate-list-info-iter-* set of APIs is used to retrieve the list of aggregates in DFM. The aggregate-list-info-iter-start API loads the list of aggregates into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can iterate over the aggregates it holds. If aggregate-list-info-iter-start is invoked twice, two distinct temporary stores are created. If neither aggregate-name-or-id nor aggr-group-name-or-id is provided, all aggregates are listed. If either one, but not both, is provided, the named aggregate or all aggregates in the named group are listed, respectively. If both aggregate-name-or-id and aggr-group-name-or-id are provided, the aggregate is listed only if it is under the specified group.
Inputs
- aggregate-type => string, optional
Filter by the type of aggregate. Possible values are:
- traditional
- aggregate
- striped
If no aggregate-type input is supplied, all types of aggregates will be listed.
- is-direct-member-only => boolean, optional
If true, only return the aggregates that are direct members of the specified resource group. Default value is false. This field is meaningful only if a resource group name or id is given for the object-name-or-id field.
- is-dp-ignored => boolean, optional
If true, only list aggregates that have been set to be ignored for purposes of data protection. If false, only list aggregates that have not been set to be ignored for purposes of data protection. If not specified, list all aggregates without taking into account whether they have been ignored or not.
- is-in-dataset => boolean, optional
If true, only list aggregates which only contain data which is protected by a dataset. If false, only list aggregates containing data which is not protected by a dataset. If not specified, list all aggregates whether they are in a dataset or not.
- is-unprotected => boolean, optional
If true, only list aggregates that are not protected, which means the aggregate is:
- 1. not in any resource pool,
- 2. not a child of a host that is a member of any resource pool, and
- 3. not a member of a node in a dataset with a protection policy assigned.
If false or not set, list all aggregates.
- object-name-or-id => string, optional
Name or identifier of an object to list aggregates for. The allowed object types for this argument are:
- Resource Group
- Resource Pool
- Dataset
- Storage Set
- Host
- Aggregate
- Volume
- Qtree
- GenericAppObject
If object-name-or-id identifies an aggregate, that single aggregate will be returned. If object-name-or-id resolves to more than one aggregate, all of them will be returned. If no object-name-or-id is provided, all aggregates will be listed.
- rbac-operation => string, optional
Name of an RBAC operation. If specified, only return aggregates for which authenticated admin has the required capability. A capability is an operation/resource pair. The resource is the volume where the aggregate lives. The possible values that can be specified for operation can be obtained by calling rbac-operation-info-list. If operation is not specified, then it defaults to DFM.Database.Read. For more information about operations, capabilities and user roles, see the RBAC APIs.
- resourcepool-filter => string, optional
Possible Value: 'in_rpool', 'not_in_rpool', 'all'. If set to 'in_rpool', only list aggregates that are in a resource pool. If set to 'not_in_rpool', only list aggregates that are not in a resource pool. If set to 'all', then list all aggregates. If a value is not specified, then 'all' will be the default. An aggregate is said to be in a resource pool if either the aggregate or the storage system containing the aggregate is a member of a resource pool.
- volumes-to-migrate => object-name-or-id[], optional
This is a filter to return a list of candidate destination aggregates where the volumes in volumes-to-migrate can be migrated to. An ordered list of candidate aggregates are returned based on free space. Each object-name-or-id in volumes-to-migrate input should be the identifier or full name of a volume and they should all belong to the same source aggregate. Returning the list of candidate aggregates for migration can potentially take some time to compute.
Outputs
- records => integer
Number of items that have been saved for future retrieval with aggregate-list-info-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to aggregate-list-info-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
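Following the binding convention from the introduction (dashes in the API name become underscores in the Perl subroutine), the three iteration calls can be combined as below. This is a sketch: the chunk size is arbitrary, the maximum input is assumed by analogy with the other iterators in this document (e.g. alarm-list-info-iter-next), and the hash-key access into the output assumes the binding returns a hash reference as in the dfm-about example.

```perl
# Sketch: list all aggregates with the iter-start/next/end APIs.
# Assumes $s is a connected NaServer with server type 'DFM'.
my $start   = $s->aggregate_list_info_iter_start();  # fill the temporary store
my $tag     = $start->{'tag'};
my $records = $start->{'records'};

while ($records > 0) {
    my $chunk = $s->aggregate_list_info_iter_next(
        'tag'     => $tag,
        'maximum' => 50,          # arbitrary chunk size
    );
    last if $chunk->{'records'} == 0;
    # ... process the aggregates returned in this chunk ...
    $records -= $chunk->{'records'};
}

$s->aggregate_list_info_iter_end('tag' => $tag);     # release the store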
Modify an aggregate's information. If the modification of any one property fails, nothing is changed.
Error Conditions:
- EACCESSDENIED - When the user does not have the DFM.Database.Write capability on the specified aggregate.
- EINVALIDINPUT - When invalid input specified.
- EOBJECTNOTFOUND - When the aggregate-name-or-id does not correspond to an aggregate.
- EDATABASEERROR - On database error.
Inputs
- is-dp-ignored => boolean, optional
True if an administrator has chosen to ignore this object for purposes of data protection.
Outputs
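The modify call above could be sketched with the binding convention from the introduction. The ZAPI name aggregate-modify and the sample aggregate name are assumptions; aggregate-name-or-id and is-dp-ignored come from the error and input descriptions above.

```perl
# Sketch: exclude an aggregate from data protection consideration.
# Assumes $s is a connected DFM NaServer and that the modify ZAPI is
# named aggregate-modify (binding: aggregate_modify).
eval {
    $s->aggregate_modify(
        'aggregate-name-or-id' => 'filer1:aggr0',  # hypothetical aggregate
        'is-dp-ignored'        => 'true',
    );
};
print "Modify failed: $@\n" if $@;  # e.g. EOBJECTNOTFOUND, EACCESSDENIED
```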
Add a space management operation as part of this space management session. The actual operation is not carried out unless aggregate-space-management-commit is called. If this operation cannot be carried out, ESPACEMGMTCONFLICTOP is returned. The following rules apply when adding an operation to the session.
- A session cannot have two operations of the same type for a given volume. For example, two volume resize operations cannot be added for the same volume in one session. If this check fails, the API returns the ESPACEMGMTCONFLICTOP error.
- If a volume migration operation is added to a session, then other space management operations cannot be added to that session. Similarly, a volume migration cannot be added to a session if another type of operation has already been added. If this check fails, the API returns the ESPACEMGMTCONFLICTOP error.
If this operation leads to errors in the returned dry-run-results, the operation is not added to the session.
Inputs
Outputs
- aggregate-diff-space-infos => aggregate-space-info[]
Returns the difference in aggregate space consumption (i.e., used space and committed space) due to this space management operation, both on the session aggregate and on any other aggregates affected by the operation (for example, volume migration affects both the session aggregate and the destination aggregate). The values returned in aggregate-space-info can be positive or negative depending on whether the operation consumes or frees space on the aggregates. For example, volume migration frees space on the source aggregate and consumes space on the destination aggregate.
Open a space management session to run space management operations on an aggregate. This allows adding a set of space management operations in a session, getting the difference in space consumption due to these set of operations and then committing all the operations. A space management session must be started before invoking the following ZAPIs:
- aggregate-space-management-add-operation
- aggregate-space-management-remove-operation
- aggregate-space-management-commit
- aggregate-space-management-rollback
Use aggregate-space-management-commit to commit the changes and to start jobs which will carry out the space management operations. Use aggregate-space-management-rollback to rollback the session. This will not submit any jobs for space management operations.
After 24 hours, another client can open a session on the same aggregate without the force option; any space management operations that were part of the original session are then discarded.
Inputs
- force => boolean, optional
By default, force is false. If true, and a space management session is already in progress on the specified aggregate, the previous space management session is rolled back and a new session is begun.
Outputs
- space-management-session-id => integer
Identifier of the space management session. Range: [1..2^31-1]
Commit the space management operations added as part of this space management session. The session that was opened on the aggregate will be released once all the space management jobs that were part of the session are queued to be executed eventually.
This will not wait for the jobs to be executed.
Use the dry-run option to test the commit. It returns a set of dry-run results for each space management operation that was added as part of this session.
dry-run-results is a set of steps that the server will take to carry out the space management operation
When dry-run is true, it also returns the projected used and committed space of the aggregate on which the space management session was opened and other dependent aggregates in case of migration operations.
If dry-run is false, then before the call returns, the system submits jobs to the provisioning engine to execute.
Inputs
- dry-run => boolean, optional
If true, return the dry-run-results list for each space management operation and aggregate-space-infos which gives the projected used and committed space of the aggregate(s) as a result of space management operations added to the session. The dry-run-results list contains actions that would be taken should the changes be committed without actually committing the changes.
The session is not released after a dry run. By default, dry-run is false.
- space-management-session-id => integer
Identifier of the space management session on an aggregate Range: [1..2^31-1]
Outputs
- aggregate-space-infos => aggregate-space-info[], optional
Returns the overall effect of the space management operations added to the session on the used and committed space of all affected aggregates (i.e., the aggregate on which the space management session is started and the aggregates affected by the operations added to the session; for example, migration affects two aggregates: the session aggregate and the destination aggregate). The space effect is calculated by obtaining the space freed or consumed by each operation and adding it to or subtracting it from the current used and committed space of the aggregate.
Returned only if dry-run is true.
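Putting the session ZAPIs together, the lifecycle could be sketched as follows. The name of the session-opening ZAPI (aggregate-space-management-start) and its aggregate input are assumptions, since only its force input and session-id output are enumerated above; the operation-specific inputs to the add-operation call are elided for the same reason.

```perl
# Sketch: open a space management session, dry-run the plan, then commit.
# Assumes $s is a connected DFM NaServer.
my $open = $s->aggregate_space_management_start(     # assumed ZAPI name
    'aggregate-name-or-id' => 'filer1:aggr0',        # hypothetical aggregate
    'force'                => 'false',
);
my $sid = $open->{'space-management-session-id'};

# ... add operations with aggregate-space-management-add-operation ...

# Dry run: returns the planned steps and projected space, keeps the session.
my $plan = $s->aggregate_space_management_commit(
    'space-management-session-id' => $sid,
    'dry-run'                     => 'true',
);

# Real commit: submits jobs to the provisioning engine, releases the session.
$s->aggregate_space_management_commit(
    'space-management-session-id' => $sid,
);
```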
Remove a space management operation that was added as part of this space management session.
Inputs
- space-management-session-id => integer
Identifier of the space management session on an aggregate from which this space management operation has to be removed. Range: [1..2^31-1]
Outputs
- aggregate-diff-space-infos => aggregate-space-info[]
Returns the difference in aggregate space consumption (i.e., used space and committed space) caused by removing this space management operation from the session. The values returned here are the same as the values returned by the aggregate-space-management-add-operation ZAPI (called when this operation was added to the session), but with the signs reversed.
Release the session opened for space management on an aggregate. All space management operations that were submitted as part of the space management session are discarded.
Inputs
- space-management-session-id => integer
Identifier of the space management session on an aggregate to be rolled back. Range: [1..2^31-1]
Outputs
Create a DFM alarm. The alarm-info element specifies all the parameters of the new alarm. Note that it is possible to specify a combination of the event-name, event-severity and event-class such that this alarm will never be triggered. It is the user's responsibility to verify these settings are useful.
Error Conditions:
- EINVALIDEMAILADDR - If the email address has spaces, semi-colons or unprintable characters.
- EINVALIDEVENTSEVERITY - If the event-severity specified is not one of 'normal', 'information', 'unknown', 'warning', 'error', 'critical', 'emergency'.
- EINVALIDEVENTCLASSEXP - If the regular expression specified for the event-class is not a valid POSIX.1 regular expression.
- EINVALIDALARMTIME - If the time specified for time-from or time-to is greater than 86399 ((24 * 60 * 60) - 1) seconds, or if only one of the time-from and time-to values is specified.
- EINVALIDTRAPADDR - If the trap host specified is not reachable, or when the port is greater than 65535 (2^16 - 1).
- ENOSUCHEVENT - If the event name specified is not a valid event-name of DFM.
- EGROUPDOESNOTEXIST - If the group specified in the alarm parameters is not a DFM resource group.
- ENOTFOUNDUSER - If the administrator name specified in the alarm parameters is not a DFM administrator.
- ERUNASUNSUPPORTED - If script-runas parameter is specified on a Windows DFM platform.
- EINVALIDINPUT - If the event-severity value is less than the severity of the event-name event.
- EDATABASEERROR - A database error occurred during processing.
- EACCESSDENIED - When the user does not have the DFM.Event.Write capability on the resource group on which the alarm is being configured. Also returned when the user sets the script-runas parameter for the alarm but does not have the global DFM.Database.Write capability.
Inputs
Outputs
Delete an alarm.
Error Conditions:
- EALARMDOESNOTEXIST - If an alarm by the specified id does not exist.
- EDATABASEERROR - A database error occurred during processing.
- EACCESSDENIED - When the user does not have DFM.Event.Write capability on the resource group on which the alarm being destroyed is configured.
Inputs
- alarm-id => integer
Identifier of the alarm. Range: [1..2^15-1]
Outputs
Get the default values of attributes defined by this ZAPI set.
Inputs
Outputs
Ends listing of alarms.
Inputs
- tag => string
Tag returned from alarm-list-info-iter-start.
Outputs
Returns items from the list generated by alarm-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
The tag returned in alarm-list-info-iter-start call.
Outputs
List all configured alarms.
Inputs
Outputs
- records => integer
Number indicating how many items are available for future retrieval with alarm-list-info-iter-next.
- tag => string
Tag to be used in subsequent calls to alarm-list-info-iter-next or alarm-list-info-iter-end.
Modify the settings of a DFM alarm.
Error Conditions:
- EALARMDOESNOTEXIST - If an alarm by the specified id does not exist.
- EINVALIDEMAILADDR - If the email address has spaces, semi-colons or unprintable characters.
- EINVALIDEVENTSEVERITY - If the event-severity specified is not one of 'normal', 'information', 'unknown', 'warning', 'error', 'critical', 'emergency'.
- EINVALIDEVENTCLASSEXP - If the regular expression specified for the event-class is not a valid POSIX.1 regular expression.
- EINVALIDALARMTIME - If the time specified for time-from or time-to is greater than 86399 ((24 * 60 * 60) - 1) seconds, or if only one of the time-from and time-to values is specified.
- EINVALIDTRAPADDR - If the trap host specified is not reachable, or when the port is greater than 65535 (2^16 - 1).
- ENOSUCHEVENT - If the event name specified is not a valid event-name of DFM.
- EGROUPDOESNOTEXIST - If the group specified in the alarm parameters is not a DFM resource group.
- ENOTFOUNDUSER - If the administrator name specified in the alarm parameters is not a DFM administrator.
- ERUNASUNSUPPORTED - If script-runas parameter is specified on a Windows DFM platform.
- EINVALIDINPUT - If the event-severity value is less than the severity of the event-name event.
- EDATABASEERROR - A database error occurred during processing.
- EACCESSDENIED - When the user does not have the DFM.Event.Write capability on the resource group on which the alarm is being configured. Also returned when the user sets the script-runas parameter for the alarm but does not have the DFM.Database.Write capability at the global level.
Inputs
- alarm-info => alarm-info
Parameters to be modified for the alarm. Any value specified in alarm-info replaces the values configured for the alarm. Specifying any optional attribute with a blank value removes that setting from the alarm. Specifying an empty array for an attribute that takes an array removes all the entries. Specifying an array element with values replaces the existing values. If an array element is not specified, no change happens. If none of the optional parameters is specified, this API does nothing. Note that it is possible to specify a combination of the event-name, event-severity and event-class such that this alarm will never be triggered. It is the user's responsibility to verify these settings are useful.
Outputs
Test an alarm by performing its trigger actions. The test is performed regardless of whether the alarm is enabled or disabled.
Error Conditions:
- EALARMDOESNOTEXIST - If an alarm by the specified id does not exist.
- EALARMTESTFAILED - If there is an error sending the test alarm event to DataFabric Manager Event service.
- EDATABASEERROR - A database error occurred during processing.
- EACCESSDENIED - When the user does not have DFM.Event.Write capability on the resource group on which the alarm is configured.
Inputs
- alarm-id => integer
Identifier of the alarm. Range: [1..2^15-1]
Outputs
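A minimal invocation of the test call could look like this. The ZAPI name alarm-test and the alarm identifier are assumptions; alarm-id is the input documented above.

```perl
# Sketch: fire an alarm's trigger actions, enabled or not.
# Assumes $s is a connected DFM NaServer and ZAPI name alarm-test.
eval {
    $s->alarm_test('alarm-id' => 7);  # hypothetical alarm identifier
};
print "Test failed: $@\n" if $@;      # e.g. EALARMDOESNOTEXIST, EALARMTESTFAILED
```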
Proxy an API request to a third party and return the API response.
Inputs
- target => string
The target host. May be a hostname (qualified or unqualified), a vfiler name or a host agent. If the target is not resolved during a capability check, EOBJECTAMBIGUOUS or EOBJECTNOTFOUND is returned.
- username => string, optional
User account to use for executing the API. If none is specified, the highest privilege available will be attempted. The proxy server may have a security policy that restricts the accepted values for this field. Invalid values will cause EACCESSDENIED.
Outputs
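Because the payload of the proxied call is not enumerated above, the sketch below builds the request by hand with NaElement and sends it with invoke_elem; the nesting of the request element and the proxied API name are assumptions.

```perl
# Sketch: proxy a Data ONTAP API call through the DFM server.
use NaElement;

my $api = NaElement->new('api-proxy');
$api->child_add_string('target', 'filer1');           # hypothetical target host

my $req = NaElement->new('request');                  # assumed wrapper element
$req->child_add_string('name', 'system-get-version'); # proxied API name (assumed)
$api->child_add($req);

my $res = $s->invoke_elem($api);                      # send the raw request
print $res->sprintf(), "\n";                          # dump the proxied response
```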
Create a new application policy by making a copy of an existing policy. The new policy created using this ZAPI has the same set of properties as the existing policy.
Error conditions:
- EACCESSDENIED - User does not have privileges to read the existing policy from the database, or create a new policy, or both.
- EOBJECTNOTFOUND - No existing policy was found that has the given name or ID.
- EOBJECTAMBIGUOUS - Multiple objects with the given name present in database.
- EPOLICYEXISTS - A policy with the given application-policy-name already exists.
- EDATABASEERROR - A database error occurred while processing the request.
- EINVALIDINPUTERROR - Invalid input was provided.
Inputs
- application-policy-description => string, optional
Description of the new policy. It may contain from 0 to 255 characters. If the length is greater than 255 characters, the ZAPI fails with error code EINVALIDINPUTERROR. The default value is the empty string "".
- group-name-or-id => obj-name-or-id, optional
Resource group to which the newly created application policy should be added. The user should have the DFM.ApplicationPolicy.Write capability on the specified group. Default value: the Global group.
Outputs
This API creates a new application policy.
Error conditions:
- EACCESSDENIED - User does not have privileges to create policies.
- EDATABASEERROR - A database error occurred while processing the request.
- EINVALIDINPUTERROR - Invalid input was provided.
- EPOLICYEXISTS - An application policy with given name already exists.
Inputs
Outputs
- application-policy-id => obj-id
Object ID of the newly created policy.
Destroy an application policy. This removes it from the database. If the policy has been applied to any dataset nodes, the destroy operation fails; the policy must first be disassociated from all the dataset nodes with which it is associated, and then destroyed.
Error conditions:
- EACCESSDENIED - User does not have DFM.Policy.Delete on the policy being destroyed.
- EOBJECTNOTFOUND - The specified application policy does not exist in the database.
- EAPPPOLICYINUSE - The policy is assigned to one or more datasets.
- EDATABASEERROR - A database error occurred while processing the request.
- EOBJECTAMBIGUOUS - Multiple objects with the given name present in database.
- EEDITSESSIONINPROGRESS - The application policy being destroyed is locked in an edit session.
Inputs
- force => boolean, optional
Force deletion even if there is an edit session in progress on the application policy.
Outputs
Create an edit session and obtain an edit lock on an application policy to begin modifying the policy. An edit lock must be obtained before invoking application-policy-modify.
Use application-policy-edit-commit to end the edit session and commit the changes to the database.
Use application-policy-edit-rollback to end the edit session and discard any changes made to the policy.
24 hours after an edit session on a policy begins, any subsequent call to application-policy-edit-begin for that same policy automatically rolls back the existing edit session and begins a new edit session, just as if the call had used the force option. If there is no such call, the existing edit session simply continues and retains the edit lock.
Error conditions:
- EEDITINPROGRESS - Another edit session already has an edit lock on the specified application policy.
- EOBJECTNOTFOUND - No application policy was found that has the given name or ID.
- EACCESSDENIED - User does not have the DFM.Policy.Write privilege needed to modify the application policy.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- application-policy-name-or-id => obj-name-or-id
Name or ID of an application policy.
- force => boolean, optional
If true, and an edit session is already in progress on the specified policy, then the previous edit is rolled back and a new edit is begun. If false, and an edit is already in progress, then the call fails with error code EEDITINPROGRESS. Default value is false.
Outputs
Commit changes made to an application policy during an edit session into the database. If all the changes to the policy are performed successfully, the entire edit is committed and the edit lock on the policy is released.
If any of the changes to the policy are not performed successfully, then the edit is rolled back (none of the changes are committed) and the edit lock on the policy is released.
Use the dry-run option to test the commit. Using this option, the changes to the policy are not committed to the database.
Error conditions:
- EEDITSESSIONNOTFOUND - No edit lock was found that has the given ID.
- EACCESSDENIED - User does not have DFM.Policy.Write on the policy.
- EPOLICYEXISTS - The policy's name is being changed, and a policy with the new name already exists.
- EDATABASEERROR - A database error occurred while processing the request.
- EAPPPOLICYNOTFOUND - An application policy was not found.
Inputs
- dry-run => boolean, optional
If true, return a list of the actions the system would take after committing the changes to the policy, but without actually committing the changes. In addition, the edit lock is not released. By default, dry-run is false.
- edit-lock-id => integer
Identifier of the edit lock on the policy. The value must be an edit lock ID that was previously returned by application-policy-edit-begin ZAPI.
Outputs
- dry-run-results => dry-run-result[], optional
Results of a dry run. Only returned if dry-run is true.
Roll back changes made to an application policy. The edit lock on the policy will be released after the rollback.
Error conditions:
- EEDITSESSIONNOTFOUND - No edit lock was found that has the given ID.
- EACCESSDENIED - User does not have privileges to modify the policy.
Inputs
- edit-lock-id => integer
Identifier of the edit lock on the policy. The value must be an edit lock ID that was previously returned by application-policy-edit-begin ZAPI.
Outputs
Terminate a list iteration that had been started by a call to application-policy-list-iter-start. This informs the server that it may now release any resources associated with the temporary store for the list iteration.
Error conditions:
- EINVALIDTAG - The specified tag does not exist.
Inputs
- tag => string
The opaque handle returned by the prior call to application-policy-list-iter-start that started this list iteration.
Outputs
Retrieve the next series of policies that are present in a list iteration created by a call to application-policy-list-iter-start. The server maintains an internal cursor pointing to the last record returned. Subsequent calls to application-policy-list-iter-next return the next maximum records after the cursor, or all the remaining records, whichever is fewer.
Error conditions:
- EINVALIDTAG - The specified tag does not exist.
Inputs
- maximum => integer
The maximum number of policies to return. Range: [1..2^31-1].
- tag => string
The opaque handle returned by the prior call to application-policy-list-iter-start that started this list iteration.
Outputs
- records => integer
Number of records actually returned in the output. Range:[0..2^31-1]
Begin a list iteration over application policies. After calling application-policy-list-iter-start, you continue the iteration by calling application-policy-list-iter-next zero or more times, followed by a call to application-policy-list-iter-end to terminate the iteration.
Error conditions:
- EACCESSDENIED - User does not have privileges to read the specified policy.
- EOBJECTNOTFOUND - No policy or resource group was found that has the given name or ID.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- object-name-or-id => obj-name-or-id, optional
Name or identifier of an application policy or resource group. If a resource group is specified, only the application policies that are members of the group are returned. If an application policy name or ID is specified, the application-policy-type filter is ignored.
Outputs
- records => integer
Number of items present in the list iteration. Range:[0..2^31-1].
- tag => string
An opaque handle used to identify the list iteration. The list content resides in a temporary store in the server.
This ZAPI modifies the application policy settings of an existing policy in the database with the new values specified in the input. Note: the type of an application policy cannot be modified after creation. Before modifying the policy, an edit lock must be obtained on the policy object.
Error conditions:
- EEDITSESSIONNOTFOUND - No edit lock was found that has the given ID.
- EEDITSESSIONCONFLICTINGOP - The current modification conflicts with a previous change made in the edit session.
- EACCESSDENIED - User does not have privileges to modify the policy.
- EOBJECTNOTFOUND - The policy was already destroyed during this edit session.
- EOBJECTAMBIGUOUS - Multiple objects with the given name present in database.
- EINVALIDINPUT - The requested modification is not applicable to the policy being modified.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- application-policy-info => application-policy-info
New values for application policy attributes. All the policy attributes are replaced by the new values. If an optional element is absent, then it gets replaced by its default value.
- edit-lock-id => integer
Identifier of the edit lock on the policy. The value must be an edit lock ID that was previously returned by application-policy-edit-begin.
Outputs
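The edit-session ZAPIs named above can be combined into the following sketch. The policy name is hypothetical, the edit-lock-id output key is assumed from the input name documented for commit, and the attributes that would be passed inside application-policy-info are elided because they are not enumerated here.

```perl
# Sketch: modify an application policy under an edit lock.
# Assumes $s is a connected DFM NaServer (see the introduction).
my $begin = $s->application_policy_edit_begin(
    'application-policy-name-or-id' => 'my_policy',  # hypothetical policy
);
my $lock = $begin->{'edit-lock-id'};

eval {
    # ... apply changes with application-policy-modify under the lock ...
    # $s->application_policy_modify('edit-lock-id' => $lock, ...);

    # Commit writes the changes to the database and releases the edit lock.
    $s->application_policy_edit_commit('edit-lock-id' => $lock);
};
if ($@) {
    # On failure, discard the pending changes and release the lock.
    $s->application_policy_edit_rollback('edit-lock-id' => $lock);
}
```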
Log an entry in the audit log file of DataFabric Manager.
Inputs
- audit-priority => string
Specifies the priority of the audit log entry. Possible values: emergency, critical, error, warning and information
- audit-type => string
Specifies the type of the event being logged. Possible values: input, error, action and output
Outputs
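An invocation of the audit logging call could be sketched as follows. The ZAPI name audit-log is an assumption, and any free-text message input is omitted because only the two inputs above are enumerated.

```perl
# Sketch: record a warning-level action in the DFM audit log.
# Assumes the ZAPI is named audit-log (binding: audit_log).
$s->audit_log(
    'audit-priority' => 'warning',  # emergency|critical|error|warning|information
    'audit-type'     => 'action',   # input|error|action|output
);
```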
Terminate a view list iteration and clean up any saved info.
Inputs
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
Returns items from a previous call to cifs-domain-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
- records => integer
The number of records actually returned.
Initiates a query for a list of CIFS domains on hosts discovered by DFM.
Inputs
Outputs
- records => integer
Number indicating how many items are available for future retrieval with cifs-domain-list-info-iter-next. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store. Used in subsequent calls to cifs-domain-list-info-iter-next or cifs-domain-list-info-iter-end.
Remove one or all name/value pairs from the persistent store.
Inputs
- match => string, optional
Specify how the API should determine which name/value pairs to remove. Use "exact", the default, to specify that the option name must exactly match "option". The API returns ENAMENOTFOUND if an option by that name does not exist. Use "all" to indicate that the API should ignore the option name and return all name/value pairs. Use "regexp" to indicate that the API should interpret the option as an extended regular expression, and return all name/value pairs where the option name matches the pattern specified in "option". The API returns EINVALIDINPUTERROR if the "option" is not a valid regular expression.
Outputs
Retrieve one or all name/value string pairs.
Inputs
- application-name => string
Unique name identifying the application requesting the data.
- match => string, optional
Indicate how the API should find matching options. Use "exact", the default, to specify that the option name must exactly match "option". The API returns ENAMENOTFOUND if no option with that name exists for this application. Use "all" to indicate that the API should ignore the option name and return all name/value pairs. Use "regexp" to indicate that the API should interpret the option as an extended regular expression, and return all name/value pairs where the option name matches the pattern specified in "option". In the case of "regexp", the API returns EINVALIDINPUTERROR if the "option" is not a valid regular expression.
- option => string, optional
Name of the registry key to fetch. See the description of the "match" parameter for more information.
Outputs
Store one or more name/value string pairs.
Inputs
- application-name => string
Unique name identifying the application requesting storage.
Outputs
Create a comment field.
Error conditions: - ECOMMENTFIELDALREADYEXISTS - A comment field by the same name already exists.
- EDATABASEERROR - A database error occurred while processing the request.
- EINVALIDINPUT - Invalid input was provided.
Inputs
Outputs
Destroy a comment field.
Error conditions: - ECOMMENTFIELDDOESNOTEXIST - A comment field by the specified id does not exist.
- ESYSTEMCOMMENTFIELD - An attempt was made to destroy a system comment field.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
Outputs
Terminate a list iteration that had been started by a call to comment-field-list-info-iter-start. This informs the server that it may now release any resources associated with the temporary store for the list iteration.
Error conditions: - EINVALIDTAG - The specified tag does not exist.
Inputs
- tag => string
The opaque handle returned by the prior call to comment-field-list-info-iter-start that started this list iteration.
Outputs
Get the next set of comment fields in the iteration started by comment-field-list-info-iter-start.
Error conditions: - EINVALIDTAG - The specified tag does not exist.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- maximum => integer
The maximum number of comment fields to return. Range: [1..2^31-1].
- tag => string
The opaque handle returned by the prior call to comment-field-list-info-iter-start that started this list iteration.
Outputs
- records => integer
The number of records actually returned.
Starts iteration to list all comment fields.
Error conditions: - ECOMMENTFIELDDOESNOTEXIST - A comment field by the specified id or name does not exist.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- comment-field-name-or-id => comment-field-name-or-id, optional
If specified, only the specified comment field is returned. Otherwise, information about all comment fields is returned.
Outputs
- records => integer
Number of items present in the list iteration. Range: [0..2^31-1].
- tag => string
An opaque handle used to identify the list iteration. The list content resides in a temporary store in the server.
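The optional comment-field-name-or-id input can be used to look up a single comment field rather than listing them all. A sketch assuming the same binding conventions as the overview example (argument-pair calling style and output hash keys are assumptions; the field name 'owner' and the server handle `$s` are hypothetical):

```perl
eval {
    # Start an iteration restricted to one comment field by name.
    my $start = $s->comment_field_list_info_iter_start(
                        'comment-field-name-or-id', 'owner');
    my $tag = $start->{tag};

    if ($start->{records} > 0) {
        # Retrieve all matching records in a single batch.
        my $next = $s->comment_field_list_info_iter_next(
                          'tag', $tag,
                          'maximum', $start->{records});
        # ... inspect the returned comment field records here ...
    }

    # Terminate the iteration even if no records were fetched.
    $s->comment_field_list_info_iter_end('tag', $tag);
};
print "Error: $@\n" if $@;   # e.g. ECOMMENTFIELDDOESNOTEXIST
```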
Modify a comment field. Currently only the name of the comment field can be modified.
Error conditions: - ECOMMENTFIELDDOESNOTEXIST - A comment field by the specified id does not exist.
- ECOMMENTFIELDALREADYEXISTS - A comment field by the same name already exists.
- ESYSTEMCOMMENTFIELD - An attempt was made to modify a system comment field.
- EDATABASEERROR - A database error occurred while processing the request.
- EINVALIDINPUT - Invalid input was provided.
Inputs
- comment-field-name-or-id => comment-field-name-or-id
Name or identifier of the comment field which has to be modified.
- comment-field-object-types => comment-field-object-type[], optional
Object type(s) of the comment field to be modified. If an object type of the comment field does not exist, then the new object type will be associated with the comment field. Any existing object type of the comment field that is not specified in this input will be disassociated. If the input is empty, then all object types of the comment field will be disassociated.
Outputs
Terminate a list iteration that had been started by a call to comment-field-values-list-info-iter-start. This informs the server that it may now release any resources associated with the temporary store for the list iteration.
Error conditions: - EINVALIDTAG - The specified tag does not exist.
Inputs
- tag => string
The opaque handle returned by the prior call to comment-field-values-list-iter-start that started this list iteration.
Outputs
Get the next set of comment field values in the iteration started by comment-field-values-list-iter-start.
Error conditions: - EINVALIDTAG - The specified tag does not exist.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- maximum => integer
The maximum number of comment field values to return. Range: [1..2^31-1].
- tag => string
The opaque handle returned by the prior call to comment-field-values-list-iter-start that started this list iteration.
Outputs
- records => integer
The number of records actually returned.
Start iteration on listing comment field values for objects.
Error conditions: - ECOMMENTFIELDDOESNOTEXIST - A comment field by the specified id or name does not exist.
- EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - The user does not have DFM.Database.Read capability on the specified object. If the object is a dataset, then the user does not have DFM.Dataset.Read on the dataset.
- EOBJECTNOTFOUND - The specified object was not found.
- EOBJECTAMBIGUOUS - The specified object name could refer to more than a single object. Use object identifiers to disambiguate.
Inputs
- comment-field-name-or-id => comment-field-name-or-id, optional
If specified, the values set for various objects for the specified comment field are returned. Otherwise, the comment field values for all comment fields are listed.
- comment-field-object-types => comment-field-object-type[], optional
Object type(s) of the comment field to be returned.
- object-name-or-id => obj-name-or-id, optional
Name or identifier of the managed object. If specified, all comment field values set for that object are returned. Otherwise, the comment field values for all objects are listed.
Outputs
- records => integer
Number of items present in the list iteration. Range: [0..2^31-1].
- tag => string
An opaque handle used to identify the list iteration. The list content resides in a temporary store in the server.
Set the value of a comment field for a specified managed object.
Error conditions: - ECOMMENTFIELDDOESNOTEXIST - A comment field by the specified id or name does not exist.
- EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - The user does not have DFM.Database.Write capability on the specified object.
- EINVALIDINPUT - The specified comment value is invalid.
- EOBJECTNOTFOUND - The specified object was not found or the managed object was not a supported type for setting comments.
- EOBJECTAMBIGUOUS - The specified object name could refer to more than a single object. Use object identifiers to disambiguate.
Inputs
- comment-field-name-or-id => comment-field-name-or-id
Name or identifier of the comment field.
- object-name-or-id => obj-name-or-id
Name or identifier of the managed object for which the value of the comment has to be set. The supported object types for which a comment value can be set are: - dataset
- host
- volume
- qtree
- lun_path
- quota_user
- resource_group
- aggregate
- srm_path
- resource_pool
- dp_policy
- dp_schedule
- dp_throttle
- prov_policy
- ossv_directory
Outputs
Add members to an existing dataset. This API is for adding direct member objects to one or more storage sets in the dataset. Each storage set is identified by the data protection policy node associated with it.
The types of storage objects allowed to be added vary:
- Volumes, qtrees and OSSV directories are allowed in the storage set attached to a primary policy node.
- Only volumes are allowed in storage sets attached to secondary/destination nodes.
It is legal to add storage objects to multiple storage sets in a single call.
Inputs
- edit-lock-id => integer
Identifier of the edit lock for a dataset obtained by calling dataset-edit-begin. Range: [1..2^31-1]
- is-existing-member-ok => boolean, optional
If specified, the call will not return an error if one or more objects being added is already a member of the specified node. If an ancestor or child of an object is already a member, an error will still be returned. By default, is-existing-member-ok is false.
Outputs
Add dynamic reference to an existing dataset. By adding a dynamic reference, the volumes, qtrees and ossv directories referred to by the dynamic reference become implicit (indirect) members of the dataset. A dynamic reference can be added to a particular storage set in the dataset by specifying the data protection policy node associated with it. If necessary a new storage set is first created automatically before adding storage objects to it.
The types of storage objects allowed to be added vary:
- Storage systems, vFiler units, aggregates, OSSV hosts, virtualization objects can be attached to a primary node.
- Storage systems, vFiler units and aggregates are allowed in storage sets attached to secondary/destination nodes.
It is legal to add storage objects to multiple storage sets in a single call.
Inputs
- edit-lock-id => integer
Identifier of the edit lock for a dataset obtained by calling dataset-edit-begin. Range: [1..2^31-1]
Outputs
Add resource pool to a single storage set that is part of a dataset. The storage set is specified implicitly by the name of the policy node that maps to it. Within the same edit session in which you call dataset-add-resourcepool, you may also add or remove members or dynamic references, or change the dataset's name, description, is-dp-ignored, is-dp-suspended, or is-suspended. You may not change the data set's protection policy or storage sets within the same edit session.
Error conditions: - EEDITSESSIONNOTFOUND - No edit lock was found that has the given identifier.
- EACCESSDENIED - User does not have permission to modify the storage set or does not have DFM.ResourcePool.Provision on the specified resource pool.
- EOBJECTAMBIGUOUS - The name given for the storage set or resource pool matches multiple storage sets or resource pools.
- EOBJECTNOTFOUND - No storage set or resource pool was found that has the given name or identifier.
- EDATABASEERROR - A database error occurred while processing the request.
- ESTORAGESETNOTINDATASET - The specified storage set is not a part of the dataset being edited.
- EDPPOLICYNODENOTFOUND - No DP policy node was found that has the given name.
- EEDITSESSIONCONFLICTINGOP - The requested modification conflicts with other changes performed during the edit session.
- EDSCONFLICTALREADYINDATASET - Object, its ancestor or descendent is already in dataset.
- EDSCONFLICTSRCDESTINSAMEAGGR - Source data and destination data are in the same aggregate.
- EDSCONFLICTALREADYINRESPOOL - Object, its ancestor or descendent is already in resource pool.
Inputs
- dp-node-name => dp-policy-node-name, optional
Name of the node in the data protection policy that maps to the storage set. dp-node-name must match exactly the name of one of the nodes in the data protection policy that is currently assigned to the data set. If dp-node-name is not specified, then the storage set associated with the root node is modified.
- edit-lock-id => integer
Identifier of the edit lock for a dataset that was obtained by an earlier call to dataset-edit-begin. Range: [1..2^31-1]
Outputs
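Attaching a resource pool to the storage set of a named policy node happens inside an edit session. A sketch under the binding conventions of the overview example; the dataset, node, and resource pool names are hypothetical, and the parameter name for the resource pool itself is not shown in this excerpt, so `resourcepool-name-or-id` below is an assumption:

```perl
eval {
    # Open an edit session on the dataset.
    my $edit = $s->dataset_edit_begin('dataset-name-or-id', 'ds_payroll');
    my $lock = $edit->{'edit-lock-id'};

    # Attach the resource pool to the storage set mapped to the
    # "Backup" node of the dataset's protection policy.
    # NOTE: 'resourcepool-name-or-id' is an assumed parameter name.
    $s->dataset_add_resourcepool('edit-lock-id', $lock,
                                 'dp-node-name', 'Backup',
                                 'resourcepool-name-or-id', 'rp_secondary');

    # Commit the change; this releases the edit lock.
    $s->dataset_edit_commit('edit-lock-id', $lock);
};
print "Error: $@\n" if $@;   # e.g. EDPPOLICYNODENOTFOUND
```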
Add datastore objects to a dataset's exclusion list. This API is for adding datastore objects to a dataset's exclusion list.
Users may want to exclude a datastore when adding a virtual machine to the dataset. A virtual machine may span multiple datastores, and one of the datastores may contain swap files that do not need to be backed up. If a datastore is excluded, relationships and backups for that datastore will not be created. If a relationship already exists, it is left alone and only the backups stop. There is no impact on suspend/resume.
Inputs
- edit-lock-id => integer
Identifier of the edit lock for a dataset obtained by calling dataset-edit-begin. Range: [1..2^31-1]
Outputs
Begin failover of a dataset. This API begins the process of failing over a dataset to its disaster recovery storage. The failover process will break any mirror relationships between the primary and DR storage objects and make the secondary storage available for use.
This API may only be invoked if the DR state of the data set is "ready".
Specifically, when the ZAPI runs, the following actions will take place:
- The DR state of the dataset will immediately change to "failing_over".
- Any pending conformance tasks will be cancelled.
- Any jobs running against this dataset will be aborted.
- A failover job will be started to break any mirrors between the primary storage set and the DR storage set.
- Finally, the DR state of the dataset will be updated to either "failed_over" or "failover_error" based on whether the failover job succeeded or not.
Inputs
Outputs
- job-id => integer
Identifier of the failover job that is failing the dataset over to its DR secondary. If there was an error, the job id will be 0. Range: [1..2^31-1]
Begin test failover of a dataset. This runs only the failover scripts and does not dequeue tasks, abort jobs, or change the dr-state.
Inputs
- dataset-name-or-id => string
Name or ID of dataset.
Outputs
- job-id => integer
Identifier of the failover job that is just running the DR scripts for the dataset. If there was an error, the job id will be 0. Range: [1..2^31-1]
Change the disaster recovery state of a dataset. This API sets the DR state of a dataset to a new value. Users may perform the following DR state transitions using this ZAPI:
- From "failover_error" to "failed_over".
- From "failover_error" to "ready".
- From "failed_over" to "ready".
If the "allow-internal-transitions" element is present and true, the caller may make additional state transitions:
- From "ready" to "failing_over"
- From "failing_over" to "failed_over"
- From "failing_over" to "failover_error"
These state transitions are intended to be used by Protection Manager server processes as part of a failover or failback sequence. Any attempt to perform other state transition will fail with an error code of EDATASETWRONGDRSTATE. The DR state of a dataset will also change in one special case:
- The dataset is in a "failed_over" DR state
- The caller changes the data protection policy
- The caller maps the storage set for the DR node of the old policy to the root node of the new policy (using dataset-set-storageset)
When the edit session with these modifications is committed, the DR state will be set back to "ready".
Inputs
Outputs
This API computes space and/or I/O usage metrics for a dataset. The dataset-space-metric-list-info-iter-* set of APIs can be used to retrieve the computed space metrics, and the dataset-io-metric-list-info-iter-* set of APIs can be used to retrieve the computed I/O metrics.
Inputs
- dataset-name-or-id => obj-name-or-id
Name or identifier of a dataset for which the usage metric has to be computed.
- timestamp => dp-timestamp
Time up to which usage metrics for the dataset need to be computed. The timestamp value must not be in the future. Range: [1..2^31-1]
Outputs
Begin a conformance run on a dataset, to attempt to bring it into conformance with its data protection policy and provisioning policy. A conformance run consists of two main steps: - Perform a conformance check to determine both the dataset's current conformance status, and the set of actions needed to bring the dataset into conformance.
- Perform the conformance actions needed to bring the dataset into conformance.
In addition, the dataset's conformance status is updated at various points during the conformance run. Whenever the conformance status is updated, an event of type "dataset.conformance" is generated. Successful completion of this ZAPI indicates that: the conformance check has completed successfully, the dataset's conformance status has been updated based on the results of the conformance check, and the system has begun to take any needed conformance actions.
After the ZAPI returns, the system continues to perform conformance actions in the background, until all actions complete. Once all actions have completed, the dataset's conformance status is again updated. Note that at present, there is no ZAPI interface for determining when all actions have completed.
If no policy has been assigned to the dataset, the conformance run completes immediately and performs no conformance actions.
Error conditions: - EACCESSDENIED - User does not have privileges to access the dataset.
- EOBJECTNOTFOUND - No dataset was found that has the given name or ID.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- assume-confirmation => boolean, optional
Value determining whether confirmation is given for all resolvable conformance actions that require user confirmation. One key and sometimes undesirable resolvable conformance action is the possible re-baseline of one or more relationships. If the value is true, all conformance actions which require user confirmation will be executed as if confirmation is already granted. If the value is false, all conformance actions which require user confirmation will not be executed. Default value is false.
- dataset-name-or-id => obj-name-or-id
Name or object ID for the dataset.
Outputs
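Starting a conformance run with confirmation pre-granted can be sketched as follows, using dataset-conform-begin in the binding style of the overview example (the dataset name and server handle `$s` are hypothetical; the argument-pair convention and the string form of the boolean are assumptions):

```perl
eval {
    # Begin a conformance run and pre-approve resolvable actions that
    # would otherwise wait for user confirmation (this can include
    # relationship re-baselines, so use with care).
    $s->dataset_conform_begin('dataset-name-or-id', 'ds_payroll',
                              'assume-confirmation', 'true');

    # Conformance actions continue in the background after this returns;
    # progress is reported through "dataset.conformance" events.
};
print "Error: $@\n" if $@;   # e.g. EACCESSDENIED, EOBJECTNOTFOUND
```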
Create a new, empty dataset.
Inputs
- application-policy-name-or-id => obj-name-or-id, optional
Name or identifier of the application policy to associate with this dataset.
An application policy can only be attached to an empty dataset or a dataset containing only virtualization objects.
Once an application policy is attached to a dataset, the user can no longer assign volumes or qtrees to the root node of that dataset.
Depending on the application policy type, 'vmware' or 'hyperv', only virtualization objects of that type can be added to the dataset. VMware objects and Hyper-V objects cannot be mixed in a single dataset.
- dataset-access-details => dataset-access-details, optional
Data access details for a dataset that needs to be configured and provisioned in a way that it is capable of transparent migration. This enables all the storage in the primary node of the dataset to be migrated to a different physical resource without having to reconfigure the clients accessing this storage.
This value will be ignored if application-policy-name-or-id is specified.
- dataset-name => obj-name
Name of the new dataset. It cannot be all numeric. The allowed characters are a-z, A-Z, 0-9, ' ' (space), . (period), _ (underscore), and - (hyphen). If any other characters are included, an error is returned.
- group-name-or-id => obj-name-or-id, optional
Resource group to which the newly created dataset should be added. The user should have DFM.Dataset.Write capability on the specified group. Default value: Global group.
- is-dp-suspended => boolean, optional
Flag indicating whether or not the dataset should be protected. This also indicates whether conformance checking of the dataset is to be done or not. Default is False. Deprecated field. Retained for backward compatibility. This field is deprecated in favour of is-suspended, which suspends the dataset for all automated actions (data protection and conformance check of the dataset).
- is-suspended => boolean, optional
True if an administrator has chosen to suspend this dataset for all automated actions (data protection and conformance check of the dataset). Default is False. If present, this field takes precedence over is-dp-suspended.
- primary-volume-name-format => primary-volume-name-format, optional
Name format string for the primary volumes created by Provisioning Manager. This element is optional. If the value of this element is the empty string "", the global naming option will be used to create names for the primary volumes. A volume name can be at most 60 characters long; names exceeding that limit are truncated. For name collisions, a numeric suffix is added at the end of the volume name. A volume name can contain ASCII letters, ASCII digits, and underscore '_', and cannot start with a number. All other characters are converted to the character 'x'. By default, the value of the option is the empty string.
- protection-policy-id => obj-id, optional
Identifier of the protection policy to associate with this dataset. This legacy parameter is only used if protection-policy-name-or-id is not supplied. The dataprotection license is required for this input.
- protection-policy-name-or-id => obj-name-or-id, optional
Name or identifier of the protection policy to associate with this dataset. This input is preferred over protection-policy-id; if it is supplied, protection-policy-id is ignored and should not be supplied. The dataprotection license is required for this input.
- provisioning-policy-name-or-id => obj-name-or-id, optional
Name or identifier of the provisioning policy to be associated with the primary node of the dataset. The members of the primary node are provisioned based on this policy. Once the provisioning policy is associated with the dataset node, the storage in the node is periodically monitored for conformance with the policy.
- requires-non-disruptive-restore => requires-non-disruptive-restore, optional
Specifies whether the dataset should be configured to enable non-disruptive restores from backup destinations.
This value will be ignored if application-policy-name-or-id is specified. Non-disruptive-restore is not supported for datasets containing virtualization objects (virtual machines and datastores).
Default value is false.
- secondary-qtree-name-format => secondary-qtree-name-format, optional
Name format for the secondary qtrees created by Protection Manager. This element is optional. If the value of this element is the empty string "", the global naming option will be used to create names for qtrees. A qtree name can be at most 60 characters long; names exceeding that limit are truncated. For name collisions, a numeric suffix is added at the end of the qtree name. A qtree name can contain ASCII letters, ASCII digits, hyphen '-', dot '.', and underscore '_'. All other characters are converted to the character 'x'. By default, the value of the option is the empty string. This option does not apply to names of primary qtrees.
- secondary-volume-name-format => secondary-volume-name-format, optional
Name format string for the secondary volumes created by Protection Manager. This element is optional. If the value of this element is the empty string "", the global naming option will be used to create names for the secondary volumes. A volume name can be at most 60 characters long; names exceeding that limit are truncated. For name collisions, a numeric suffix is added at the end of the volume name. A volume name can contain ASCII letters, ASCII digits, and underscore '_', and cannot start with a number. All other characters are converted to the character 'x'. By default, the value of the option is the empty string.
- snapshot-name-format => snapshot-name-format, optional
Name format string for snapshots created by Protection Manager and Host Service's plug-ins. This element is optional. If the value of this element is the empty string "", the global naming option will be used to create names for snapshots. If %A is not specified as one of the substitution tokens in the format, plug-ins creating snapshots for application datasets will implicitly add %A at the end of the format string, and the snapshot name will include application fields. A snapshot name can be at most 124 characters long; names exceeding that limit are truncated. For name collisions, a numeric suffix is added at the end of the snapshot name. A snapshot name can contain ASCII letters, ASCII digits, underscore '_', hyphen '-', plus sign '+', and dot '.'. All other characters are converted to the character 'x'. By default, the value of the option is the empty string.
- timezone-name => string, optional
Timezone to assign to the root node. If specified, the value must be a timezone-name returned by timezone-list-info-iter-next. If no timezone is assigned, then the default system timezone will be used.
- vfiler-name-or-id => obj-name-or-id, optional
Name or identifier of the vFiler unit to be attached to the primary node of the dataset. If a vFiler unit is attached, then all members provisioned into this node will be exported over this vFiler unit. If an application policy is assigned to this dataset, this parameter will be ignored.
This value will be ignored if application-policy-name-or-id is specified.
- volume-qtree-name-prefix => string, optional
Prefix for volume and qtree names, up to 60 characters long. The allowed characters are a-z, A-Z, 0-9, ' ' (space), . (period), _ (underscore), and - (hyphen). If any other characters are included, an error is returned.
Outputs
Destroy a dataset. The dataset must be empty unless the "force" option is used.
Inputs
- cancel-edit-sessions => boolean, optional
Specifies if edit operations in progress on the dataset should be cancelled. If true, any edit operations in progress on the dataset are rolled back before destroying the dataset.
If cancel-edit-sessions is false, a dataset with pending edit operations cannot be destroyed.
Default is false.
- dataset-name-or-id => obj-name-or-id
Name or identifier of the dataset to destroy.
- force => boolean, optional
If true, allows destroying a dataset that has members or dynamic references. By default, only empty datasets can be destroyed.
Outputs
Ends iteration of the dynamic references in the dataset.
Inputs
- tag => string
Tag from a previous dataset-dynamic-reference-list-info-iter-start.
Outputs
Get the next set of records in the iteration started by dataset-dynamic-reference-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of records to retrieve.
- tag => string
Tag from a previous dataset-dynamic-reference-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned.
Starts iteration to list the dynamic references in the dataset. Volumes are not considered dynamic references because they are members (even though they can contain qtrees).
Inputs
- dataset-name-or-id => obj-name-or-id
Name or identifier of dataset whose dynamic references will be listed.
- dp-node-name => dp-policy-node-name, optional
List only the members in this policy node. If none is specified, then list members in all policy nodes. If both dp-node-name and storageset-name-or-id are specified, then the value of dp-node-name is ignored.
- dynamic-reference-type => string, optional
Type of dynamic reference. Possible values are 'filer', 'vfiler', 'aggregate', 'ossv_host' or 'app_object'. If present, only dynamic references of the specified type are returned.
- include-deleted => boolean, optional
If present and true, dynamic references which are marked as deleted in the database are also returned. Otherwise, deleted dynamic references are not returned.
- include-is-available => boolean, optional
If true, the is-available status is calculated for each member, which may make the call to this ZAPI take much longer. Default is false.
- storageset-name-or-id => obj-name-or-id, optional
List only the members in this storage set. If none is specified, then list members in all storagesets mapped to this dataset's nodes. If both dp-node-name and storageset-name-or-id are specified, then the value of dp-node-name is ignored.
Outputs
- records => integer
Number of items that have been saved for future retrieval with dataset-dynamic-reference-list-info-iter-next.
- tag => string
Tag to be used in subsequent calls to dataset-dynamic-reference-list-info-iter-next.
Obtain an edit lock to start modifying a dataset. Besides locking the dataset itself, all storage sets in the dataset are locked, as well as the data protection policy if one is assigned.
An edit lock must be obtained before invoking the following ZAPIs:
- dataset-add-member
- dataset-remove-member
- dataset-add-member-by-dynamic-reference
- dataset-remove-member-by-dynamic-reference
- dataset-set-storageset
- dataset-modify
- dataset-modify-node
- dataset-add-resourcepool
- dataset-remove-resourcepool
- dataset-provision-member
- dataset-resize-member
- dataset-member-delete-snapshots
- dataset-add-to-exclusion-list
- dataset-remove-from-exclusion-list
Use dataset-edit-commit to commit the changes to the database. Use dataset-edit-rollback to undo the changes made to the dataset.
After 24 hours, the lock can be taken by another client without the force option. This will cause any edits pending on the aborted session to be lost.
Inputs
- dataset-name-or-id => obj-name-or-id
Name or identifier of the dataset to edit.
- force => boolean, optional
By default, force is false. If true, and an edit is already in progress on the specified dataset or an object the dataset is dependent on (such as a data protection policy), the previous edit session is rolled back and a new edit session is begun.
Outputs
- edit-lock-id => integer
Identifier of the edit lock on the dataset. Range: [1..2^31-1]
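The begin/commit/rollback life cycle can be sketched as follows, using only APIs named in this reference and the binding conventions of the overview example. The dataset and member names are hypothetical, and the member parameter name for dataset-add-member is not shown in this excerpt, so `object-name-or-id` below is an assumption:

```perl
my $lock;
eval {
    # Obtain an edit lock on the dataset.
    my $edit = $s->dataset_edit_begin('dataset-name-or-id', 'ds_payroll');
    $lock = $edit->{'edit-lock-id'};

    # Add a direct member to the storage set of the primary node.
    # NOTE: 'object-name-or-id' is an assumed parameter name.
    $s->dataset_add_member('edit-lock-id', $lock,
                           'object-name-or-id', 'filer1:/vol_payroll',
                           'is-existing-member-ok', 'true');

    # Persist the change; this releases the edit lock and starts
    # a conformance run on the dataset.
    $s->dataset_edit_commit('edit-lock-id', $lock);
};
if ($@) {
    # Undo any pending changes so the edit lock is not left held.
    eval { $s->dataset_edit_rollback('edit-lock-id', $lock) } if $lock;
    print "Error: $@\n";
}
```

Rolling back on failure keeps a later session from needing the force option to take over the lock.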
Commit changes made to a dataset into the database. The edit lock on the dataset will be released after the changes have been successfully committed.
Use the dry-run option to test the commit. It invokes the conformance checker to return a list of actions that would be taken should the changes be actually committed. The dry-run option also returns a list of high level alerts to notify the user of rebaseline operations or system level issues related to successful conformance.
If dry-run is false, then before the call returns, the system begins a conformance run on the dataset. (See dataset-conform-begin for a description of conformance runs.) If the system is to perform a conformance run on the dataset it will be done with the current dataset edit session value for assume-confirmation. The default value for assume-confirmation is initially true when the edit session begins, but may be altered by certain changes to the dataset made through the use of dataset-modify. The optional assume-confirmation option may be used to specify if user confirmation is to be assumed or not for this dataset-edit-commit. One key, and sometimes undesirable, resolvable action that requires user confirmation is the possible re-baseline of a relationship.
Inputs
- assume-confirmation => boolean, optional
Value determining whether confirmation is given for all resolvable conformance actions that require user confirmation. If the value is true, all conformance actions which require user confirmation will be executed as if confirmation is already granted. If the value is false, all conformance actions which require user confirmation will not be executed. The default value is initially true when the edit session begins, but may be altered by certain changes to the dataset made through the use of dataset-modify. One key, and sometimes undesirable, resolvable action that requires user confirmation is the possible re-baseline of a relationship.
- dry-run => boolean, optional
If true, return the dry-run-results list as well as the conformance-alerts list. The dry-run-results list contains actions that would be taken should the changes be committed, without actually committing the changes. The conformance-alerts list contains high level alerts to notify a user of conditions that will impact any attempt to commit the changes. A conformance alert may warn that if the changes are committed, one or more rebaseline operations may be done. The conformance alerts may also warn of conditions that exist that may prevent the successful conformance of datasets. The edit lock is not released after a dry run. By default, dry-run is false.
- edit-lock-id => integer
Identifier of the edit lock on the dataset. Range: [1..2^31-1]
Outputs
- dry-run-results => dry-run-result[], optional
Results of a dry run. Each result describes one action the system would take and the predicted effects of that action. Only returned if dry-run is true.
- is-provisioning-failure => boolean, optional
This element is returned only if dry-run is true and only when there was a provisioning request issued in the edit session. If the dry run shows errors for the provisioning request its value is TRUE, else FALSE. This element can be used by clients to distinguish the case where provisioning job will succeed, but the dry run contains dataset conformance related errors.
- job-ids => job-info[], optional
Job identifiers of provisioning requests or dataset reexport jobs. This output element is present only if:
- 1. Provisioning requests were issued in the edit session.
- 2. The dataset is reexported due to a change in provisioning policy.
This is returned only if dry-run is false.
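The dry-run flow above can be sketched in Perl: call dataset-edit-commit once with dry-run set, inspect the conformance alerts, and commit for real only if the preview is clean. Binding names follow NaServer's dash-to-underscore convention; DemoServer is an illustration-only stand-in for a live OCUM/DFM connection, and the output hash keys are assumptions.

```perl
use strict;
use warnings;

package DemoServer;   # stand-in for an NaServer connection (illustration only)
sub new {
    my ($class, $alerts) = @_;
    return bless { alerts => $alerts || [] }, $class;
}
sub dataset_edit_commit {
    my ($self, %args) = @_;
    if (($args{'dry-run'} || 'false') eq 'true') {
        # Dry run: report alerts, do not commit, keep the edit lock.
        return { 'conformance-alerts' => $self->{alerts} };
    }
    # Real commit: pretend a provisioning job was started.
    return { 'job-ids' => [ { 'job-id' => 101 } ] };
}

package main;

sub commit_if_clean {
    my ($server, $edit_lock_id) = @_;

    # Pass 1: preview only; the edit lock is not released after a dry run.
    my $preview = $server->dataset_edit_commit(
        'edit-lock-id' => $edit_lock_id,
        'dry-run'      => 'true',
    );
    my @alerts = @{ $preview->{'conformance-alerts'} || [] };
    return { committed => 0, alerts => \@alerts } if @alerts;

    # Pass 2: commit for real (dry-run defaults to false).
    my $result = $server->dataset_edit_commit('edit-lock-id' => $edit_lock_id);
    return { committed => 1, jobs => $result->{'job-ids'} || [] };
}
```

A caller would inspect the alerts (for example, a warning about a possible relationship re-baseline) before deciding to commit without the dry run.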
Roll back changes made to a dataset. The edit lock on the dataset will be released after the rollback.
Inputs
- edit-lock-id => integer
Identifier of the edit lock on the dataset. Range: [1..2^31-1]
Outputs
Ends iteration of the dataset exclusion list.
Inputs
- tag => string
Tag from a previous dataset-exclusion-list-info-iter-start.
Outputs
Gets the next records in the iteration started by dataset-exclusion-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of records to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous dataset-exclusion-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Starts iteration to retrieve the backup exclusion list of the dataset.
Inputs
- dataset-name-or-id => obj-name-or-id
Name or identifier of the dataset whose members will be listed.
- include-deleted => boolean, optional
If present and true, members which are marked as deleted in the database are also returned. Otherwise, deleted members are not returned.
- object-name-or-id => obj-name-or-id, optional
Name or id in the DFM database, of a specific member of the dataset's exclusion list. If specified, details of only this object will be returned.
Outputs
- records => integer
Number of records that have been saved for future retrieval with dataset-exclusion-list-info-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to dataset-exclusion-list-info-iter-next.
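The start/next/end flow above (shared by the other *-iter-* families in this section) can be sketched as: start the iteration, drain it in batches, then always call the -end API so the server can discard the temporary store for the tag. Binding names follow NaServer's dash-to-underscore convention; FakeIter is an illustration-only stand-in for an NaServer connection, and the name of the output list element ('dataset-members') is an assumption.

```perl
use strict;
use warnings;

package FakeIter;   # stand-in for an NaServer connection (illustration only)
sub new { my ($class, @items) = @_; return bless { items => \@items }, $class }
sub dataset_exclusion_list_info_iter_start {
    my ($self) = @_;
    return { tag => 'tag-1', records => scalar @{ $self->{items} } };
}
sub dataset_exclusion_list_info_iter_next {
    my ($self, %args) = @_;
    my @chunk = splice @{ $self->{items} }, 0, $args{maximum};
    return { records => scalar @chunk, 'dataset-members' => \@chunk };
}
sub dataset_exclusion_list_info_iter_end { return {} }

package main;

sub collect_exclusion_list {
    my ($server, $dataset, $batch) = @_;
    my $start = $server->dataset_exclusion_list_info_iter_start(
        'dataset-name-or-id' => $dataset,
    );
    my ($tag, $remaining) = @{$start}{qw(tag records)};

    my @members;
    while ($remaining > 0) {
        my $next = $server->dataset_exclusion_list_info_iter_next(
            tag     => $tag,
            maximum => $batch,
        );
        last if $next->{records} == 0;          # server has nothing left
        push @members, @{ $next->{'dataset-members'} || [] };
        $remaining -= $next->{records};
    }

    # Release the server-side temporary store for this tag.
    $server->dataset_exclusion_list_info_iter_end(tag => $tag);
    return \@members;
}
```

Calling -end even on early exit matters: the tag's temporary store otherwise lingers on the DFM server.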
Ends iteration to list dataset I/O usage metrics.
Inputs
- tag => string
Tag from a previous dataset-io-metric-list-info-iter-start.
Outputs
Gets the next records in the iteration started by dataset-io-metric-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous dataset-io-metric-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1]
Starts iteration to list a dataset's I/O usage measurements.
Inputs
- day => integer, optional
The day for which the usage metric is required. The month element is required along with this input. When present, the usage metrics for the day are returned. Range: [1..31]
- month => month, optional
The month for which the usage metric is required. If month is not specified, it defaults to the current month. If the month element is provided without the day element, the usage metrics available for that month are returned. If neither the month nor the year element is provided, the usage measurements for the current month are returned. The start of the current month is determined by the chargebackDayOfMonth global option. Range: [1..12]
- object-name-or-id => obj-name-or-id, optional
Name or identifier of a dataset, resource group, or storage service. If a resource group is given, usage measurements of the datasets which are direct members of the group are returned. If a storage service is given, usage measurements of the datasets associated with the storage service are returned. If no input is given, usage measurements of all datasets are returned.
- year => integer, optional
The year for which the usage metric is required. If the year element is not provided, the current year is selected. Range: [1900..2^31-1]
Outputs
- records => integer
Number of records that have been saved for future retrieval with dataset-io-metric-list-info-iter-next. Range: [0..2^31-1]
- tag => string
Tag to be used in subsequent calls to dataset-io-metric-list-info-iter-next.
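A hedged helper can make the input rules above explicit: day requires month, while month and year simply default server-side to the current month/year when omitted. The helper name and its option keys are illustrative, not part of the bindings.

```perl
use strict;
use warnings;

# Assemble inputs for dataset-io-metric-list-info-iter-start according to
# the rules documented above. Dies if day is given without month.
sub io_metric_iter_args {
    my (%opt) = @_;
    die "the month element is required along with day\n"
        if defined $opt{day} && !defined $opt{month};
    my %args;
    $args{day}                 = $opt{day}    if defined $opt{day};
    $args{month}               = $opt{month}  if defined $opt{month};
    $args{year}                = $opt{year}   if defined $opt{year};
    $args{'object-name-or-id'} = $opt{object} if defined $opt{object};
    # e.g. $server->dataset_io_metric_list_info_iter_start(%args)
    return \%args;
}
```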
Ends iteration to list datasets.
Inputs
- tag => string
Tag from a previous dataset-list-info-iter-start.
Outputs
Gets the next records in the iteration started by dataset-list-info-iter-start. If a dataset has a data protection policy assigned to it, the has-protection-policy field will be true. If the client has suspended the dataset, has-protection-policy is still true, but the is-dp-suspended and is-suspended fields are also set to true to reflect this. When the client sets is-dp-ignored to true, nothing else changes, except that ignored datasets are not returned when the client requests the list of datasets which are not ignored.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
Tag from a previous dataset-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned.
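The record semantics described above can be sketched in Perl: a suspended dataset still reports has-protection-policy as true (with is-suspended and is-dp-suspended also true), while datasets marked is-dp-ignored are simply filtered out when the caller asks only for datasets that are not ignored. The hash keys mirror the field names in this section; the records themselves are illustrative.

```perl
use strict;
use warnings;

# Client-side view of the "not ignored" filtering described above.
sub not_ignored_datasets {
    my (@records) = @_;
    return grep { !$_->{'is-dp-ignored'} } @records;
}

my @records = (
    { name => 'ds-a', 'has-protection-policy' => 1 },
    { name => 'ds-b', 'has-protection-policy' => 1,
      'is-suspended' => 1, 'is-dp-suspended' => 1 },   # suspended, still has a policy
    { name => 'ds-c', 'is-dp-ignored' => 1 },          # dropped from "not ignored" listings
);
my @visible = not_ignored_datasets(@records);
```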
Starts iteration to list datasets.
Inputs
- application-resource-namespace => application-resource-namespace, optional
Namespace of the application resources in the dataset. If specified, only datasets containing application resources that belong to this namespace are returned. Ignored if include-can-contain-application-resources-only is false. If not specified when include-can-contain-application-resources-only is true, all datasets which can contain application resources are returned.
- has-application-policy => boolean, optional
If true, only list datasets which have an application policy assigned to them. If false, only list datasets which do not have any application policy assigned. If not set, list all datasets.
- has-protection => boolean, optional
If true, only list datasets which have a data protection policy and at least one relationship associated with them, or whose policy has only one node. If false, only list datasets which do not have any associated policy, or datasets which have a policy with more than one node but no associated relationship. If not set, list all datasets.
- has-protection-policy => boolean, optional
If true, only list datasets which have a data protection policy assigned to them. If false, only list datasets which do not have any data protection policy assigned. If not set, list all datasets.
- include-conformance-run-reason-details => boolean, optional
If true or omitted, and include-conformance-check-results is also true, include any available conformance-run-reason-details along with the associated conformance-run-result element. Default value is true. If false, the conformance-run-reason-details will not be returned.
- include-protection-status-problems => boolean, optional
If true, return the detailed protection-status-problems if the dataset does not have a protection status of "protected". Setting include-protection-status-problems to true will generate an error unless the other input parameters select exactly one dataset. If false or omitted, protection-status-problems, which can be expensive to retrieve, will not be returned.
- is-application-data => boolean, optional
If true, return only datasets managed by an application. If false, return only datasets which are not managed by an application. By default, return all datasets.
- is-dp-ignored => boolean, optional
If true, only list datasets that have been set to be ignored for purposes of data protection. If false, only list datasets that have not been set to be ignored for purposes of data protection. If not specified, list all datasets without taking into account whether they have been ignored or not.
- is-dr-capable => boolean, optional
If true, return only datasets with dp policies that are disaster recovery capable. If false, return only datasets with policies that are not disaster recovery capable. By default, return all datasets.
- is-protected => boolean, optional
If true, only list datasets which have a data protection policy assigned to them. If false, only list datasets which do not have any data protection policy assigned. If not set, list all datasets. Deprecated field, retained for backward compatibility; it is replaced by has-protection-policy.
- object-name-or-id => obj-name-or-id, optional
Name or identifier of a dataset, resource group, provisioning policy, or storage service. If a resource group is given, only the datasets which are direct members of the group are returned. If a provisioning policy is given, only those datasets where one or more of its nodes is associated with the provisioning policy are returned. If a storage service is given, only those datasets which are associated with the storage service are returned.
- requires-non-disruptive-restore => requires-non-disruptive-restore, optional
If true, return only datasets requiring non-disruptive restore. If false, return only datasets which do not require non-disruptive restore. By default, return all datasets.
- suppress-status-refresh => boolean, optional
If true, do not refresh the dataset status. If false or omitted, the status of all queried datasets will be re-calculated before being returned.
- volume-qtree-name-prefix => string, optional
Return only datasets with the given custom name prefix. Up to 60 characters long.
Outputs
- records => integer
Number of records that have been saved for future retrieval with dataset-list-info-iter-next.
- tag => string
Tag to be used in subsequent calls to dataset-list-info-iter-next.
Aborts an in-progress deduplication operation on the given volume member of the dataset.
Inputs
- dataset-name-or-id => obj-name-or-id
Name or identifier of dataset.
Outputs
- job-id => integer, optional
Id of the job that handles the deduplication operation. This is returned only if there is an on-demand deduplication job running and will not be present if the deduplication operation was triggered by the storage system.
Range: [1..2^31-1]
Starts a deduplication operation on the specified volume member of the given dataset.
Inputs
- dataset-name-or-id => obj-name-or-id
Name or Identifier of the dataset.
Outputs
- job-id => integer
Id of the job that handles the deduplication operation.
Range: [1..2^31-1]
Delete Snapshot copies of a volume which is a member of the effective primary node of the dataset. The effective primary node for a non disaster recovery capable dataset or a disaster recovery capable dataset not in the DR state of "failed_over" is the root node of the dataset. The effective primary of a disaster recovery dataset in the DR state of "failed_over" is the disaster recovery capable node of the dataset.
The deletion of Snapshot copies happens in the background.
A provisioning job id is returned by the dataset-edit-commit ZAPI; it represents the job which will delete the specified Snapshot copies. The status of the job can be checked using the dp-job-list-iter ZAPIs with the given job-id. Error conditions:
- EACCESSDENIED - User does not have privileges to delete Snapshot copies of the member.
- EDATABASEERROR - A database error occurred while processing the request.
- EOBJECTNOTFOUND - No volume by the given name or identifier was found.
- ENOTINDATASET - Member is not in the primary node of the specified dataset.
- EOBJECTAMBIGUOUS - The specified member name could refer to two or more objects. Try again with the object identifier or object full name.
- EINVALIDINPUT - No Snapshot copies are specified for deletion or both volume-id and volume-name are not given in input or at least one of the Snapshot copies specified for deletion is marked busy.
- ESNAPSHOTNOTFOUND - At least one of the Snapshot copies specified is not found on the volume.
- EEDITSESSIONNOTFOUND - The edit session specified in input is not found.
- EEDITSESSIONOBJECTALREADYLOCKED - The object this operation attempts to lock is already locked by another edit session.
- EEDITSESSIONCONFLICTINGOP - This Snapshot deletion operation conflicts with another edit operation in the same edit session.
- EDATASETNOTINSTABLEDRSTATE - The dataset is not in a stable disaster recovery state.
Inputs
- edit-lock-id => integer
Identifier of the edit lock for a dataset obtained by calling dataset-edit-begin. Range: [1..2^31-1]
Outputs
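Because the deletion happens in the background, a client typically polls the job returned by dataset-edit-commit with the dp-job-list-iter family, as described above. A minimal sketch of that polling loop follows; JobServer is an illustration-only stand-in for an NaServer connection, and the 'job-state' field name is an assumption.

```perl
use strict;
use warnings;

package JobServer;   # stand-in for an NaServer connection (illustration only)
sub new { my ($class, @states) = @_; return bless { states => \@states }, $class }
sub dp_job_list_iter_start {
    my ($self, %args) = @_;
    return { tag => 'job-tag', records => 1 };
}
sub dp_job_list_iter_next {
    my ($self, %args) = @_;
    my $state = @{ $self->{states} } ? shift @{ $self->{states} } : 'completed';
    return { records => 1, 'job-state' => $state };
}
sub dp_job_list_iter_end { return {} }

package main;

# Poll the job's state until it leaves 'running' or we give up.
sub wait_for_job {
    my ($server, $job_id, $max_polls) = @_;
    for (1 .. $max_polls) {
        my $start = $server->dp_job_list_iter_start('job-id' => $job_id);
        my $next  = $server->dp_job_list_iter_next(tag => $start->{tag}, maximum => 1);
        $server->dp_job_list_iter_end(tag => $start->{tag});
        return $next->{'job-state'} if $next->{'job-state'} ne 'running';
        # A real client would sleep between polls instead of spinning.
    }
    return 'timeout';
}
```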
Ends iteration of dataset members.
Inputs
- tag => string
Tag from a previous dataset-member-list-info-iter-start.
Outputs
Gets the next records in the iteration started by dataset-member-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of records to retrieve.
- tag => string
Tag from a previous dataset-member-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned.
Starts iteration to list members of the dataset. Dynamic references are not returned by this API, nor are objects only associated with the dataset by their inclusion in a dynamic reference.
Inputs
- dataset-name-or-id => obj-name-or-id
Name or identifier of the dataset whose members will be listed.
- dp-node-name => dp-policy-node-name, optional
List only the members in this policy node. If none is specified, then list members in all policy nodes. If both dp-node-name and storageset-name-or-id are specified, then the value of dp-node-name is ignored.
- include-deleted => boolean, optional
If present and true, members which are marked as deleted in the database are also returned. Otherwise, deleted members are not returned.
- include-exports-info => boolean, optional
If true, the export information of members is included in the output. If the member is exported over NFS, the NFS export name is returned in the nfs-export-name element in dataset-member-info. If the member is exported over CIFS, the share names are returned in the cifs-share-names array in dataset-member-info. If the members are accessed using both NFS and CIFS (i.e., mixed mode), both the NFS export name and the CIFS share names are returned. Default value is false.
- include-indirect => boolean, optional
If true, indirect members are included. By default they are not included. An example of an indirect member is a qtree in a volume which is a direct member of the dataset.
- include-is-available => boolean, optional
If true, the is-available status is calculated for each member, which may make the call to this ZAPI take much longer. Default is false.
- include-space-info => boolean, optional
If true, the space status is computed for all members; the space status and detailed space conditions of every dataset member are returned if the space status of the member is "warning" or "error". Default value is false.
- member-id => obj-id, optional
Id, in the DFM database, of a specific member of the dataset. If specified, details of only this object will be returned.
- member-name-or-id => obj-name-or-id, optional
Name or id, in the DFM database, of a specific member of the dataset. If specified, details of only this object will be returned. If both member-id and member-name-or-id are specified, member-id takes precedence over member-name-or-id.
- member-type => string, optional
Type of the member. Possible values for direct members are: "volume", "qtree" or "ossv_directory". When include-exports-info is true in dataset-member-list-info-iter-start, this can also be "lun_path". If present, only members of the specified type are returned.
- storageset-name-or-id => obj-name-or-id, optional
List only the members in this storage set. If none is specified, then list members in all storagesets mapped to this dataset's nodes. If both dp-node-name and storageset-name-or-id are specified, then the value of dp-node-name is ignored.
- suppress-status-refresh => boolean, optional
If true, do not refresh the member status. If false or omitted, the status of all members will be refreshed before being returned.
Outputs
- records => integer
Number of records that have been saved for future retrieval with dataset-member-list-info-iter-next.
- tag => string
Tag to be used in subsequent calls to dataset-member-list-info-iter-next.
Starts an undeduplication operation on the specified volume member of the given dataset.
Inputs
- dataset-name-or-id => obj-name-or-id
Name or Identifier of the dataset.
Outputs
- job-id => integer
Id of the job that handles the undeduplication operation.
Range: [1..2^31-1]
Ends iteration of missing dataset members.
Inputs
- tag => string
Tag from a previous dataset-missing-member-list-info-iter-start.
Outputs
Gets the next records in the iteration started by dataset-missing-member-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of records to retrieve.
- tag => string
Tag from a previous dataset-missing-member-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned.
Starts iteration to list members of the dataset that have gone missing. The server determines that an object was not intentionally removed from the dataset when: - The object is still a dataset member.
- The object's objDeleted flag is set.
Inputs
- dataset-name-or-id => obj-name-or-id
Name or identifier of the dataset whose members will be listed.
Outputs
- records => integer
Number of records that have been saved for future retrieval with dataset-missing-member-list-info-iter-next.
- tag => string
Tag to be used in subsequent calls to dataset-missing-member-list-info-iter-next.
Modify attributes for a dataset.
Inputs
- application-info => application-info, optional
This input is used only if is-application-data is true. It contains information about the application which manages this dataset.
- application-policy-name-or-id => obj-name-or-id, optional
Name or identifier of the application policy to associate with this dataset.
An application policy can only be attached to an empty dataset or a dataset containing only virtualization objects.
Once an application policy is attached to a dataset, the user can no longer assign volumes or qtrees to the root node of that dataset.
Depending on the application policy type, 'vmware' or 'hyperv', only virtualization objects of that type can be added to the dataset. VMware objects and Hyper-V objects cannot be mixed in a single dataset.
- check-protection-policy-on-commit => boolean, optional
In the default case for a protection policy change, the dataset's membership configuration is checked immediately. However, when changing the policy and the storage set assignments in the same edit session, the caller can set this flag to true to request that the server postpone the check of the protection policy configuration and root node membership until the call to edit-commit. Default value is false.
- dataset-contact => email-address-list, optional
Contact for the dataset, such as the owner's e-mail address.
- dataset-description => string, optional
Description of the dataset, up to 255 characters long.
- dataset-metadata => dfm-metadata-field[], optional
Opaque metadata for dataset. Metadata is usually set and interpreted by an application that is using the dataset. DFM does not look into the contents of the metadata.
- dataset-name => obj-name, optional
Name of the dataset. It cannot be all numeric. The allowed characters are: a to z, A to Z, 0 to 9, ' ' (space), . (period), _ (underscore), and - (hyphen). If any other character is included, an error is returned.
- dataset-owner => string, optional
Name of the owner of the dataset, up to 255 characters long.
- edit-lock-id => integer
Identifier of the edit lock for a dataset obtained by calling dataset-edit-begin. Range: [1..2^31-1]
- is-application-data => boolean, optional
If true, the dataset is an application dataset managed by an external application. Conversion of a non-application dataset to an application dataset is not allowed. The default value is false.
- is-dp-ignored => boolean, optional
True if an administrator has chosen to ignore this dataset for purposes of data protection. Data sets with this attribute set to true are not returned when the client requests datasets which are not ignored. This attribute has no other significance to the system.
- is-dp-suspended => boolean, optional
Flag indicating whether or not the dataset should be protected. This also indicates whether conformance checking of the dataset is to be done. Default is False. Should is-dp-suspended go from TRUE to FALSE, the edit session assume-confirmation default setting will be set to FALSE for this dataset. (See dataset-edit-commit for details)
Deprecated field, retained for backward compatibility. This field is deprecated in favour of is-suspended, which suspends the dataset for all automated actions (data protection and conformance checking of the dataset).
- is-suspended => boolean, optional
True if an administrator has chosen to suspend this dataset for all automated actions (data protection and conformance checking of the dataset). Default is False. If present, this field takes precedence over is-dp-suspended. Should is-suspended go from TRUE to FALSE, the edit session assume-confirmation default setting will be set to FALSE for this dataset. (See dataset-edit-commit for details)
- primary-volume-name-format => primary-volume-name-format, optional
Name format string for the primary volumes created by Provisioning Manager. This element is optional. If the value of this element is the empty string "", the global naming option will be used to create names for the primary volumes. A volume name can only be 60 characters long; the name will be truncated if it exceeds that limit. For name collisions, a numeric suffix is added at the end of the volume name. A volume name can contain ASCII letters, ASCII digits, and the underscore '_', and cannot start with a digit. All other characters will be converted to the character 'x'. By default, the value of the option is the empty string.
- protection-policy-id => obj-id, optional
Identifier of the protection policy to associate with this dataset. This legacy parameter is only used if protection-policy-name-or-id is not supplied. The dataprotection license is required for this input.
- protection-policy-name-or-id => obj-name-or-id, optional
Name or identifier of the protection policy to associate with this dataset. This input is preferred over protection-policy-id; if both are supplied, protection-policy-id is ignored. The dataprotection license is required for this input.
- requires-non-disruptive-restore => requires-non-disruptive-restore, optional
Specifies whether the dataset requires a non-disruptive restore of LUNs so that the host need not get detached from the LUN.
This value will be ignored if application-policy-name-or-id is specified. Non-disruptive-restore is not supported for datasets containing virtualization objects.
Default value is false.
- secondary-qtree-name-format => secondary-qtree-name-format, optional
Name format for the secondary qtrees created by Protection Manager. This element is optional. If the value of this element is the empty string "", the global naming option will be used to create names for the qtrees. A qtree name can only be 60 characters long; the name will be truncated if it exceeds that limit. For name collisions, a numeric suffix is added at the end of the qtree name. A qtree name can contain ASCII letters, ASCII digits, hyphen '-', dot '.', and underscore '_'. All other characters will be converted to the character 'x'. By default, the value of the option is the empty string. This option does not apply to names of primary qtrees.
- secondary-volume-name-format => secondary-volume-name-format, optional
Name format string for the secondary volumes created by Protection Manager. This element is optional. If the value of this element is the empty string "", the global naming option will be used to create names for the secondary volumes. A volume name can only be 60 characters long; the name will be truncated if it exceeds that limit. For name collisions, a numeric suffix is added at the end of the volume name. A volume name can contain ASCII letters, ASCII digits, and the underscore '_', and cannot start with a digit. All other characters will be converted to the character 'x'. By default, the value of the option is the empty string.
- snapshot-name-format => snapshot-name-format, optional
Name format string for snapshots created by Protection Manager and Host Service's plug-ins. This element is optional. If the value of this element is the empty string "", the global naming option will be used to create names for snapshots. If %A is not specified as one of the substitution tokens in the format, plug-ins creating snapshots for application datasets will implicitly add %A at the end of the format string, and the snapshot names will have application fields in them. A snapshot name can only be 124 characters long; the name will be truncated if it exceeds that limit. For name collisions, a numeric suffix is added at the end of the snapshot name. A snapshot name can contain ASCII letters, ASCII digits, underscore '_', hyphen '-', plus sign '+', and dot '.'. All other characters will be converted to the character 'x'. By default, the value of the option is an empty string.
- volume-qtree-name-prefix => string, optional
Prefix for volume and qtree names, up to 60 characters long. The allowed characters are: a to z, A to Z, 0 to 9, ' ' (space), . (period), _ (underscore), and - (hyphen). If any other character is included, an error is returned.
Outputs
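The edit-session pattern for dataset-modify described above can be sketched as: begin an edit session, apply the change under the edit lock, commit, and roll back on failure (which, per the rollback description earlier, also releases the lock). Binding names follow NaServer's dash-to-underscore convention; the name dataset_edit_roll_back and the EditServer stand-in are assumptions for illustration.

```perl
use strict;
use warnings;

package EditServer;   # stand-in for an NaServer connection (illustration only)
sub new { return bless { lock => 0 }, $_[0] }
sub dataset_edit_begin     { $_[0]{lock} = 1; return { 'edit-lock-id' => 1 } }
sub dataset_edit_commit    { $_[0]{lock} = 0; return {} }
sub dataset_edit_roll_back { $_[0]{lock} = 0; return {} }
sub dataset_modify {
    my ($self, %args) = @_;
    # Enforce the naming rule above: letters, digits, space, '.', '_', '-',
    # and not all numeric.
    my $name = $args{'dataset-name'};
    die "invalid dataset name\n"
        if defined $name && ($name !~ /^[A-Za-z0-9 ._-]+$/ || $name =~ /^\d+$/);
    return {};
}

package main;

sub rename_dataset {
    my ($server, $dataset, $new_name) = @_;
    my $lock = $server->dataset_edit_begin('dataset-name-or-id' => $dataset)
                      ->{'edit-lock-id'};
    my $ok = eval {
        $server->dataset_modify('edit-lock-id' => $lock,
                                'dataset-name' => $new_name);
        $server->dataset_edit_commit('edit-lock-id' => $lock);
        1;
    };
    # Discard pending changes and release the lock on any failure.
    $server->dataset_edit_roll_back('edit-lock-id' => $lock) unless $ok;
    return $ok ? 1 : 0;
}
```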
Modify attributes of a single storage set that is part of a dataset. The storage set is specified implicitly by the name of the data protection policy node that maps to it. You may change the storage set's resource pool and timezone using this call, but not its name or description. Within the same edit session in which you call dataset-modify-node, you may also add or remove members or dynamic references, or change the dataset's name, description, is-dp-ignored, is-dp-suspended, or is-suspended. You may not change the dataset's policy or storage sets within the same edit session.
Error conditions: - EEDITSESSIONNOTFOUND - No edit lock was found that has the given identifier.
- EACCESSDENIED - User does not have permission to modify the storage set.
- EOBJECTAMBIGUOUS - The name given for the storage set or resource pool matches multiple storage sets or resource pools.
- EOBJECTNOTFOUND - No storage set or resource pool was found that has the given name or identifier.
- EDATABASEERROR - A database error occurred while processing the request.
- ESTORAGESETNOTINDATASET - The specified storage set is not a part of the dataset being edited.
- EDPPOLICYNODENOTFOUND - No DP policy node was found that has the given name.
- EEDITSESSIONCONFLICTINGOP - The requested modification conflicts with other changes performed during the edit session.
- EINVALIDTIMEZONE - The given timezone name is not valid.
- EDSCONFLICTALREADYINDATASET - Object, its ancestor or descendant is already in dataset.
- EDSCONFLICTSRCDESTINSAMEAGGR - Source data and destination data are in the same aggregate.
- EDSCONFLICTALREADYINRESPOOL - Object, its ancestor or descendant is already in resource pool.
- EDSCONFLICTCANTMIRROROSSV - OSSV is not allowed in mirror source node.
- EDSCONFLICTCANTSNAPSHOTOSSV - Cannot apply snapshot schedule to OSSV.
- EDSCONFLICTINCOMPATIBLEPOLICY - Policy is not compatible with application dataset.
- EDSCONFLICTOSSVNOTALLOWED - OSSV is not allowed in the dataset.
- EDSCONFLICTNOSMLICENSE - There is no SnapMirror license installed.
- EDSCONFLICTNOSMSVLICENSE - There is no SnapMirror or SnapVault license installed. Need to have at least one of them installed.
- EDSCONFLICTROOTNODE - Operation cannot be performed on root node of dataset.
- EDSCONFLICTNOTROOTNODE - Operation cannot be performed on non-root node of dataset.
- EDSCONFLICTNOTLEAFNODE - Operation cannot be performed on non-leaf node of dataset.
- EDSCONFLICTINVALIDVFILER - Dataset node has members that do not belong to the vFiler being attached.
- EDSCONFLICTDEDUPLICATION - A deduplication schedule enabled provisioning policy cannot be associated with SnapVault destinations.
Inputs
- dataset-access-details => dataset-access-details, optional
Data access details for a dataset that needs to be configured and provisioned in a way that it is capable of transparent migration. This enables all the storage in the primary node of the dataset to be migrated to a different physical resource without having to reconfigure the clients accessing this storage. This input is valid only when:
- There are no members in the dataset.
- There is no vFiler unit attached to the dataset.
- The node being edited is the primary node of the dataset.
- dp-node-name => dp-policy-node-name, optional
Name of the node in the data protection policy that maps to the storage set. dp-node-name must match exactly the name of one of the nodes in the data protection policy that is currently assigned to the data set. If dp-node-name is not specified, then the storage set associated with the root node is modified.
- edit-lock-id => integer
Identifier of the edit lock for a dataset that was obtained by an earlier call to dataset-edit-begin.
- online-migration => boolean, optional
Indicates that this dataset is capable of non-disruptive migration. This field is valid only when either dataset-access-details or vfiler-name-or-id is not empty. By default the migration is assumed to be disruptive. Default: false.
- provisioning-policy-name-or-id => obj-name-or-id, optional
Name or object identifier of the provisioning policy to be attached to the node. The implications of this are: - Members of the storage set associated with the node are checked for conformance with the provisioning policy.
- Any new members provisioned into the storage set will be based on this provisioning policy.
- relinquish-vfiler => boolean, optional
If true, relinquish the vFiler unit associated with the dataset. This will destroy the vFiler unit, move all the storage owned by the vFiler unit to the hosting storage system and re-export the dataset over the storage system. This option can be specified only when: - The vFiler unit associated with the node of the dataset has been created by Provisioning Manager. In this case is-vfiler-created-for-migration will be true in dataset-info returned by dataset-list-info-iter APIs.
- All storage of the vFiler unit belongs to the node of the dataset, except for the root storage.
Default value is false.
- resourcepool-name-or-id => obj-name-or-id, optional
Name or object identifier of a resource pool to assign to the storage set. If other resource pools are already assigned to the storage set, this one will replace them: all the attached resource pools are deleted and the new one is added. If the input value is the empty string "", no resource pool will be assigned to the storage set; that is, all the attached resource pools will be removed. If this parameter is not supplied, the storage set's resource pool assignment will not be changed. This is a legacy parameter; dataset-add-resourcepool and dataset-remove-resourcepool are the preferred APIs for adding and removing resource pools to/from dataset nodes.
- timezone-name => string, optional
Timezone to assign to the storage set. The value must be either a timezone-name returned by timezone-list-info-iter-next, or the empty string "". If the value is "", no timezone is assigned to the storage set. Storage sets with no timezone assignment use the timezone of the resource pool assigned to them, or the system default if no timezone is assigned to the resource pool.
- vfiler-name-or-id => obj-name-or-id, optional
Name or object identifier of the vFiler unit to be attached to the node. If there are any members currently in the node, they should belong to this vFiler unit. Any new member provisioned for this node will be exported over this vFiler unit.
Outputs
Provision a new member into the effective primary node of a dataset. The effective primary node for a non disaster recovery capable dataset or a disaster recovery capable dataset not in the DR state of "failed_over" is the dataset root node. The effective primary node of a disaster recovery dataset in the DR state of "failed_over" is the disaster recovery capable node of the dataset.
Error conditions:
- EINVALIDINPUT - If the inputs specified are not valid.
- EEDITSESSIONNOTFOUND - No edit lock was found that has the given identifier.
- EACCESSDENIED - User does not have permission to provision a member.
- EDATABASEERROR - A database error occurred while processing the request.
- EEDITSESSIONCONFLICTINGOP - The requested modification conflicts with other changes performed during the edit session.
- EDATASETNOTINSTABLEDRSTATE - The dataset is not in a stable disaster recovery state.
Inputs
- edit-lock-id => integer
Identifier of the edit lock for a dataset that was obtained by an earlier call to dataset-edit-begin. Range: [1..2^31-1]
Outputs
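The edit-session flow around this API can be sketched with the generic NaServer invoke() interface described earlier in this guide. The host name, credentials, and dataset name below are placeholders, and the member-provisioning call itself (whose inputs are documented above) is elided:

```perl
use lib '<installation_folder>/lib/perl/NetApp';  # SDK library path from this guide
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder DFM host
$s->set_admin_user('admin', 'password');          # placeholder credentials
$s->set_server_type('DFM');

# Obtain an edit lock for the dataset.
my $begin = $s->invoke('dataset-edit-begin', 'dataset-name-or-id', 'ds1');
die $begin->results_reason() if $begin->results_status() eq 'failed';
my $lock = $begin->child_get_string('edit-lock-id');

# ... the member-provisioning call, with its documented inputs and
#     'edit-lock-id' => $lock, goes here ...

# Commit the session; the provisioning job runs in the background.
my $commit = $s->invoke('dataset-edit-commit', 'edit-lock-id', $lock);
die $commit->results_reason() if $commit->results_status() eq 'failed';
```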
Remove datastore objects from a dataset's exclusion list.
Inputs
- edit-lock-id => integer
Identifier of the edit lock for a dataset obtained by calling dataset-edit-begin. Range: [1..2^31-1]
Outputs
Remove a member from a dataset. Only members explicitly added to the dataset (direct members) can be removed. If "destroy" is true, even indirect members can be destroyed on the storage system and removed from the dataset.
If "destroy" is true, then it is applicable only on members of the effective primary node of the dataset.
The effective primary node for a non-disaster-recovery-capable dataset, or for a disaster-recovery-capable dataset not in the DR state of "failed_over", is the dataset root node. The effective primary node of a disaster-recovery-capable dataset in the DR state of "failed_over" is the disaster-recovery-capable node of the dataset.
The destroy operation happens in the background.
A provisioning job id is returned by the dataset-edit-commit API; it represents the job that will destroy the member as specified. The status of the job can be checked using the dp-job-list-iter ZAPIs with the given job-id.
Inputs
- destroy => boolean, optional
Specifies whether the member should be destroyed on the filer. If not specified or false, the member is not destroyed on the filer. The "destroy" flag can be true only when the members given in the input are of type volume, qtree, or lun. Also, when "destroy" is true, the volume, qtree, or lun given can be an indirect member of the dataset.
- edit-lock-id => integer
Identifier of the edit lock for a dataset obtained by calling dataset-edit-begin. Range: [1..2^31-1]
Outputs
Remove dynamic references from a dataset. Only dynamic references explicitly added to the dataset can be removed.
Inputs
- edit-lock-id => integer
Identifier of the edit lock for a dataset obtained by calling dataset-edit-begin. Range: [1..2^31-1]
Outputs
Remove a resource pool from a single storage set that is part of a dataset. The storage set is specified implicitly by the name of the policy node that maps to it. Within the same edit session in which you call dataset-remove-resourcepool, you may also add or remove members or dynamic references, or change the dataset's name, description, is-dp-ignored, is-dp-suspended, or is-suspended. You may not change the dataset's protection policy or storage sets within the same edit session.
Error conditions:
- EEDITSESSIONNOTFOUND - No edit lock was found that has the given identifier.
- EACCESSDENIED - User does not have permission to modify the storage set.
- EOBJECTAMBIGUOUS - The name given for the storage set or resource pool matches multiple storage sets or resource pools.
- EOBJECTNOTFOUND - No storage set or resource pool was found that has the given name or identifier.
- EDATABASEERROR - A database error occurred while processing the request.
- ESTORAGESETNOTINDATASET - The specified storage set is not a part of the dataset being edited.
- EDPPOLICYNODENOTFOUND - No DP policy node was found that has the given name.
- EEDITSESSIONCONFLICTINGOP - The requested modification conflicts with other changes performed during the edit session.
- EDSCONFLICTALREADYINDATASET - The object, its ancestor, or its descendant is already in the dataset.
- EDSCONFLICTSRCDESTINSAMEAGGR - Source data and destination data are in the same aggregate.
- EDSCONFLICTALREADYINRESPOOL - The object, its ancestor, or its descendant is already in the resource pool.
Inputs
- dp-node-name => dp-policy-node-name, optional
Name of the node in the data protection policy that maps to the storage set. dp-node-name must exactly match the name of one of the nodes in the data protection policy that is currently assigned to the dataset. If dp-node-name is not specified, the storage set associated with the root node is modified.
- edit-lock-id => integer
Identifier of the edit lock for a dataset that was obtained by an earlier call to dataset-edit-begin. Range: [1..2^31-1]
- resourcepool-name-or-id => obj-name-or-id
Name or object identifier of a resource pool to remove from the storage set.
Outputs
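The detach operation above must run inside an edit session. A minimal sketch using the generic NaServer invoke() interface follows; the host, credentials, dataset name 'ds1', policy node name 'Backup', and resource pool name 'rp1' are all illustrative placeholders:

```perl
use lib '<installation_folder>/lib/perl/NetApp';  # SDK library path from this guide
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder DFM host
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

# Begin an edit session on the dataset.
my $begin = $s->invoke('dataset-edit-begin', 'dataset-name-or-id', 'ds1');
die $begin->results_reason() if $begin->results_status() eq 'failed';
my $lock = $begin->child_get_string('edit-lock-id');

# Detach the resource pool from the storage set mapped to the 'Backup' node.
my $out = $s->invoke('dataset-remove-resourcepool',
                     'edit-lock-id',            $lock,
                     'dp-node-name',            'Backup',
                     'resourcepool-name-or-id', 'rp1');
die $out->results_reason() if $out->results_status() eq 'failed';

# Commit the change.
$s->invoke('dataset-edit-commit', 'edit-lock-id', $lock);
```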
Replace primary members and relationships after a failover. If a secondary volume or qtree is specified, it replaces the primary member for just that secondary volume or qtree. If neither are specified, it replaces all primary members that need to be replaced.
If dry-run is specified, it returns the results of the operation that would be taken should the operation be committed.
Inputs
- cancel-edit-sessions => boolean, optional
If true, any edit operations in progress on the dataset are rolled back before replacing any primary members. If it is false and there are edit operations in progress on the dataset, EEDITSESSIONINPROGRESS is returned and no primary members are replaced. By default, cancel-edit-sessions is false. If dry-run is true, cancel-edit-sessions is ignored.
- dataset-name-or-id => obj-name-or-id
Name or identifier of the dataset whose primary members need to be replaced.
- dry-run => boolean, optional
If true, return a list of actions that would be taken should the operation be committed without actually committing them. Dry run does not affect any edit session in progress. By default, dry-run is false.
- object-name-or-id => obj-name-or-id, optional
Name or identifier of the secondary volume or qtree whose primary needs to be replaced. If object-name-or-id is not specified, it replaces all primary members in the dataset that need to be replaced.
Outputs
Resize, change the maximum capacity, and change the snap reserve for a dataset member on the effective primary node of the dataset. The effective primary node for a non-disaster-recovery-capable dataset, or for a disaster-recovery-capable dataset not in the DR state of "failed_over", is the dataset root node. The effective primary node of a disaster-recovery-capable dataset in the DR state of "failed_over" is the disaster-recovery-capable node of the dataset.
The resize operation happens in the background.
A provisioning job id is returned by the dataset-edit-commit API; it represents the job that will resize the member as specified. The status of the job can be checked using the dp-job-list-iter ZAPIs with the given job-id.
Error conditions:
- EACCESSDENIED - User does not have the capability DFM.ResourcePool.Provision on the containing resource pool for a volume that is being resized.
- EDATABASEERROR - A database error occurred while processing the request.
- EOBJECTNOTFOUND - No volume or qtree member by the given name or identifier was found.
- ENOTINDATASET - Member is not in the primary node of the specified dataset.
- EOBJECTAMBIGUOUS - The specified member name could refer to two or more objects. Try again with the object identifier or object full name.
- EINVALIDINPUT - Neither member-id nor member-name is specified, inputs other than new-size are specified for a qtree member, or size parameters are being changed for a traditional volume.
- EEDITSESSIONNOTFOUND - The edit session specified in the input was not found.
- EEDITSESSIONOBJECTALREADYLOCKED - The object to be locked in this operation is already locked by another edit session.
- EEDITSESSIONCONFLICTINGOP - This resize operation conflicts with another edit operation in the same edit session.
- EDATASETNOTINSTABLEDRSTATE - The dataset is not in a stable disaster recovery state.
Inputs
- edit-lock-id => integer
Identifier of the edit lock for a dataset obtained by calling dataset-edit-begin. Range: [1..2^31-1]
Outputs
Set dataset options.
Inputs
- dataset-name-or-id => obj-name-or-id
Name or object identifier for the dataset whose options need to be set.
- is-allow-custom-settings => boolean, optional
If true, conformance checks of some volume options are disabled for the dataset. These volume options include the fractional-reserve, autodelete commitment, and autodelete trigger attributes only. This option is applicable only for datasets that have a SAN thin provisioning policy associated. For other datasets the option is ignored. Default value is false.
- is-enable-write-guarantee-checks => boolean, optional
If true, periodic write guarantee checks are enabled for the dataset. This option is applicable only for datasets that have a SAN thin provisioning policy associated. The presence of lun-clones, flex-clones, or SFSR operations in the volume may affect write guarantees in SAN thin provisioning configurations. Periodic write guarantee checks detect such conditions and generate alerts. For other datasets the option is ignored. Default value is true.
- is-none-guaranteed-provision => boolean, optional
If true, the volume provisioned for this dataset is created with the none guarantee, overriding the default behavior provided by a NAS provisioning policy. This option is applicable only for datasets that have a NAS thin provisioning policy associated. Default value is false.
- is-skip-autosize-conformance => boolean, optional
If true, conformance checks of autosize volume options are disabled for the dataset. This option is applicable only for datasets that have a thin provisioning policy associated. For other datasets the option is ignored. Default value is false.
- maximum-luns-per-volume => integer, optional
Maximum number of LUNs to be provisioned from a volume in this dataset. If the value is empty, the dataset-level value of the global option maxLUNsPerVolume applies to the dataset.
- maximum-qtrees-per-volume => integer, optional
Maximum number of qtrees to be provisioned from a volume in this dataset. If the value is empty, the dataset-level value of the global option maxQtreesPerVolume applies to the dataset.
Outputs
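As a minimal sketch of setting the provisioning-density options described above: the binding name dataset-set-options is an assumption inferred from the naming pattern of the other dataset APIs (it is not stated in this section), and the host, dataset name, and limit values are illustrative:

```perl
use lib '<installation_folder>/lib/perl/NetApp';  # SDK library path from this guide
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder DFM host
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

# Cap provisioning density for the dataset (values are illustrative;
# the API name 'dataset-set-options' is assumed, not confirmed here).
my $out = $s->invoke('dataset-set-options',
                     'dataset-name-or-id',        'ds1',
                     'maximum-luns-per-volume',   '25',
                     'maximum-qtrees-per-volume', '50');
die $out->results_reason() if $out->results_status() eq 'failed';
```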
Change the storage sets associated with policy nodes. It is legal to change many storage set/node mappings in an edit session.
Inputs
- dp-node-name => dp-policy-node-name, optional
Name of the data protection policy node to associate the storage set with. If this input is not present, the root node is used as the default. If dp-node-name is the root node, then:
- dataset must have a disaster recovery capable protection policy
- dataset must be in a "failed_over" DR state
- object-name-or-id specified must be the storage set mapped to the DR-capable node of the old protection policy.
Mapping the DR storage set to the root node of a new protection policy will have the side effect of changing the DR state to "ready" when the edit session is committed.
- edit-lock-id => integer
Identifier of the edit lock for a dataset obtained by calling dataset-edit-begin. Range: [1..2^31-1]
- object-name-or-id => obj-name-or-id
Name or identifier of the storage set to add.
Outputs
Ends iteration to list dataset space metrics.
Inputs
- tag => string
Tag from a previous dataset-space-metric-list-info-iter-start.
Outputs
Get the next few records in the iteration started by dataset-space-metric-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous dataset-space-metric-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1]
Starts iteration to list a dataset's space usage measurements.
Inputs
- day => integer, optional
The day for which the usage metric is required. The month element is required along with this input. When present, the usage metrics for the day are returned. Range: [1..31]
- month => month, optional
The month for which the usage metric is required. If month is not specified, it defaults to the current month. If the month element is provided without the day element, the usage metrics available for the month are returned. If neither the month nor the year element is provided, the usage measurements for the current month are returned. The start of the current month is determined by the chargebackDayOfMonth global option. Range: [1..12]
- object-name-or-id => obj-name-or-id, optional
Name or identifier of a dataset, resource group, or storage service. If a resource group is given, usage measurements of the datasets that are direct members of the group are returned. If a storage service is given, usage measurements of the datasets associated with the storage service are returned. If this input is not given, usage measurements of all the datasets are returned.
- year => integer, optional
The year for which the usage metric is required. If year element is not provided in input, then the current year is selected. Range: [1900..2^31-1]
Outputs
- records => integer
Number of records that have been saved for future retrieval with dataset-space-metric-list-info-iter-next. Range: [0..2^31-1]
- tag => string
Tag to be used in subsequent calls to dataset-space-metric-list-info-iter-next.
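The start/next/end pattern above follows the same iterator convention as the aggregate-list-info-iter-* APIs shown earlier. A sketch with the generic NaServer invoke() interface follows; the host and credentials are placeholders, the batch size of 50 is arbitrary, and the -end binding name is assumed from the family's naming pattern:

```perl
use lib '<installation_folder>/lib/perl/NetApp';  # SDK library path from this guide
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder DFM host
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

# Start the iteration (all datasets, current month by default).
my $start = $s->invoke('dataset-space-metric-list-info-iter-start');
die $start->results_reason() if $start->results_status() eq 'failed';
my $tag     = $start->child_get_string('tag');
my $records = $start->child_get_int('records');

# Fetch the saved records in batches of up to 50.
while ($records > 0) {
    my $next = $s->invoke('dataset-space-metric-list-info-iter-next',
                          'tag', $tag, 'maximum', 50);
    last if $next->results_status() eq 'failed';
    my $got = $next->child_get_int('records');
    last if $got == 0;
    $records -= $got;
    # ... process the metric records carried in $next here ...
}

# Release the temporary store on the DFM station.
$s->invoke('dataset-space-metric-list-info-iter-end', 'tag', $tag);
```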
Update disaster recovery status for a dataset.
Inputs
- dataset-name-or-id => obj-name-or-id
Name or object identifier for the dataset.
Outputs
Update protection status for a dataset.
Inputs
- dataset-name-or-id => obj-name-or-id
Name or object identifier for the dataset.
Outputs
Retrieve information currently provided by the 'dfm about' command.
Inputs
Outputs
- edition => string
Edition of this product. Possible values are 'Express edition' and 'Standard edition', with the product name appended.
- mode => string
DFM operating mode. Possible values are:
- 7_mode: DFM operating for Data ONTAP 7-Mode storage systems only.
- cluster_mode: DFM operating for Data ONTAP C-Mode storage systems only.
- version => string
A string that adheres to the following regular expression: [1-9][0-9]+\.[1-9][0-9]+.* The first number is the DFM major version. The second number is the DFM minor version. DFM APIs don't change if the major.minor doesn't change. Example: 3.5.0.4726
Disable an existing backup schedule.
Inputs
Outputs
Enable an existing backup schedule.
Inputs
Outputs
Get the information about a DFM database backup schedule.
Inputs
Outputs
- day => integer, optional
The day of the week on which the backup schedule has to run at the specified hour:minute time. This is optional when the schedule-type is 'weekly'. The day starts with Sunday. The days are enumerated with Sunday=0, Monday=1 and so on. Range: [0..6]
- hour => integer, optional
The hour of a day at which the backup schedule runs. This is optional only for the 'snapshot' type backups set at 'hourly' period. Range: [0..23]
- minute => integer
The minute of an hour at which the backup schedule runs. Range: [0..59]
- repeat-interval => integer, optional
The hourly interval at which 'snapshot' based backups recur. This is applicable for 'snapshot' type backups having 'daily' schedule-type. Range: [1..24]
- schedule-type => string
The period at which the backup schedule has to run. Possible values are 'hourly', 'daily', or 'weekly'.
API to configure DFM database backup schedules. A schedule can be of 'snapshot' or 'archive' backup type. Archive backups can be scheduled at a 'daily' or 'weekly' period. Snapshot backups can be scheduled at an 'hourly', 'daily', or 'weekly' period, or every day at regular intervals starting from a specified time. Hourly backups require the 'minute' at which the backup schedule is to run every hour. A daily backup schedule requires the 'hour' and 'minute' at which the backup schedule is to run. A weekly backup schedule requires the day of the week and the time (hh:mm) at which the backup schedule is to run. A backup schedule can also be set to run every day at regular hourly repeat intervals starting from a specified hour:minute. Only one schedule can be set for creating DFM backups.
Inputs
- day => integer, optional
The day of the week on which the backup schedule has to run at the specified hour:minute time. Required for weekly schedule type. Not allowed for hourly and daily schedule type. The day starts with Sunday. The days are enumerated with Sunday=0, Monday=1 and so on. Range: [0..6]
- hour => integer, optional
The hour of a day at which the backup schedule has to run. Required for daily or weekly schedule type. Not allowed for hourly schedule type. Range: [0..23]
- repeat-interval => integer, optional
The hourly interval at which 'snapshot' based backups have to recur starting at hour:minute. This is applicable only for 'snapshot' type backups. Range: [1..24]
- schedule-type => string
The period at which the backup schedule has to run. Possible values are 'hourly', 'daily', 'weekly'.
Outputs
API to start a database backup immediately.
Inputs
- backup-type => backup-type
Type of DFM database backup to initiate.
Outputs
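Triggering an immediate backup can be sketched with the generic NaServer invoke() interface; the host and credentials are placeholders, and the 'snapshot' value for backup-type is an assumption taken from the backup-type descriptions in the schedule documentation above:

```perl
use lib '<installation_folder>/lib/perl/NetApp';  # SDK library path from this guide
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder DFM host
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

# Kick off an immediate DFM database backup of the 'snapshot' type
# (the value is illustrative; 'archive' is the other documented type).
my $out = $s->invoke('dfm-backup-start', 'backup-type', 'snapshot');
die $out->results_reason() if $out->results_status() eq 'failed';
```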
Get information about a DFM database backup.
Inputs
Outputs
- backup-status => string
Status of the database backup. Possible values are:
- 'running': the backup job is running; applicable for both immediate and scheduled backups.
- 'pending': the backup job is scheduled but has yet to start; applicable only for an immediate backup started with dfm-backup-start.
- 'schedule_active': a backup schedule is set and enabled; applicable for scheduled backup only.
- 'schedule_inactive': a backup schedule is set and disabled; applicable for scheduled backup only.
- 'not_scheduled': a schedule is not set; applicable for scheduled backup only.
Retrieve information about the API call frequencies on the DFM server.
Inputs
Outputs
Gets the list of resource properties and the values that can be set as filters for thresholds. The list of resource properties is pre-defined, but the values are obtained from the current set of values in the database.
Inputs
- resource-property => string, optional
Specifies the property for which the values are to be returned. If not present, resource values for all properties are returned. Maximum length of 255 characters.
Outputs
Returns the monitoring timestamps of a host.
Inputs
- host-name-or-id => obj-name-or-id
The name or id of the host for which the monitoring timestamp is required. It should be the name or id of a Host Agent, vFiler Unit, Storage System, Cluster, or OSSV Host.
Outputs
Request that monitors be scheduled to run to refresh the information of the specified object. The monitors to be scheduled can be specified implicitly using child-type or explicitly by providing monitor-names. If both child-type and monitor-names are specified, it is treated as an error.
Inputs
- child-type => string, optional
If specified, schedule only those monitors affecting the specified type. Otherwise, all monitors affecting the object-name-or-id will be scheduled. Valid only if object-name-or-id points to a Storage System or a vFiler unit. Possible values: "aggregate", "volume", "qtree" and "lun_path".
- monitor-names => monitor-name[], optional
Specifies one or more monitors to be scheduled to run. If this input is not provided, all monitors will be scheduled. Valid only if object-name-or-id points to a Host Agent, Storage System, vFiler unit, OSSV Host or a NetCache.
- object-name-or-id => obj-name-or-id
The name or id of the object to be refreshed. Should be name or id of a Host Agent, Storage System, vFiler unit, OSSV Host, Aggregate, Volume, Qtree, LUN or a NetCache. If child-type is specified, this should be name or id of a Storage System or a vFiler unit. If monitor-names are specified, this should be name or id of a Host Agent, Storage System, vFiler unit, OSSV Host or a NetCache.
Outputs
Get status for DFM objects. This API always returns true. It returns the status for all the objects that are passed in. An object-name-or-id of "0" indicates the "global group". The privilege required is read.
Inputs
Outputs
Retrieve information about objects related to a DFM object. This API takes an object as input and returns information about the parent objects of that object; the resource groups, datasets, and resource pools the object belongs to; and the objects that belong to the specified object. The privilege required is DFM.Database.Read on the specified object. Parent output objects are returned only if the authenticated user has the DFM.Database.Read privilege on that parent object. For example, the group to which an object belongs is returned only if the authenticated user has the DFM.Database.Read privilege on that group.
Inputs
- include-indirect => boolean, optional
If true, resource groups, datasets, and resource pools in which the input object is an indirect member are also returned. Default value is false, i.e., only direct memberships are returned.
- object-name-or-id => obj-name-or-id
Name or identifier of an object to list related objects for. The allowed object types for this argument are:
- Host
- Aggregate
- Volume
- Qtree
- Lun
Outputs
Retrieve server diagnostic information.
Inputs
Outputs
Add an SNMP credential setting for a host or network in DFM. This credential is used when discovering networks in DFM.
Inputs
- host-name => string, optional
The name of the host. This can be either a short or a fully qualified host name. Either this attribute or network-address must be provided; an error is returned if both are provided.
- prefix-length => prefix-length
The prefix length, used for subnet mask calculation. This should be 32 for IPv4 and 128 for IPv6 if the network-address is a host address.
Outputs
Delete networks from DFM so that discovery does not happen. Either snmp-id, or both network-address and prefix-length, must be provided. If both or neither are provided, the input is considered invalid.
Inputs
- host-name => string, optional
Name of the host. It can be either a short or a fully qualified host name.
- network-address => network-address, optional
IP address of the network or host.
- snmp-id => snmp-id, optional
Unique identifier of the SNMP credential setting in DFM. This id is generated when the SNMP credential setting is added.
Outputs
Modify an SNMP credential in DFM so that network discovery happens using the new SNMP credential.
Inputs
- prefix-length => prefix-length, optional
The prefix length of the network or host, used for subnet mask calculation. This cannot be modified for hosts.
- privacy-password => string, optional, encrypted
Privacy password used for encrypting SNMPv3 communication. This element is optional for the SNMP versions other than snmp_v3.
- snmp-authentication-login => string, optional
The name of the user to be used for SNMPv3 discovery and monitoring. This element is optional for the SNMP versions other than snmp_v3.
- snmp-authentication-password => string, optional, encrypted
The authentication password to be used for SNMPv3 communication. Value must be a string greater than or equal to 8 characters. This element is optional for the SNMP versions other than snmp_v3.
- snmp-authentication-protocol => snmp-authentication-protocol, optional
Authentication protocol for use with SNMPv3. This element is optional for the SNMP versions other than snmp_v3.
- snmp-community-name => string, optional
Name of the SNMP community used for SNMPv1 communication. This element is optional for SNMP versions other than snmp_v1.
- snmp-id => snmp-id
Unique identifier of the SNMP credential setting in DFM.
- snmp-version => snmp-version, optional
The SNMP version that will be used for discovering the network.
Outputs
Returns the list of SNMP credential settings available in the DFM database. Either snmp-id, or both network-address and prefix-length, can be provided. If both are provided, the input is considered invalid. If neither is provided, all the SNMP credentials are returned.
Inputs
- host-name => string, optional
The name of the host. This can be either a short or a fully qualified host name. This attribute and network-address must not be provided simultaneously; an error is returned if both are provided.
- prefix-length => integer, optional
The prefix length, used for subnet mask calculation. This should be 32 for IPv4 and 128 for IPv6 if the network-address is a host address.
- snmp-id => snmp-id, optional
A unique id representing the SNMP credential setting.
Outputs
Retrieve the current user's global privilege. This API is no longer the preferred way to get user privileges; use rbac-access-check.
Inputs
Outputs
- privilege => string
User's privilege(s), comma separated if there are multiple. Possible privileges are: FULL, DELETE, WRITE, READ, BACKUP, RESTORE, SAN, SRM, MIRROR.
Get the content of a given schedule.
Inputs
Outputs
Create a new schedule with the given name. The schedule type may be daily, weekly, or monthly.
Inputs
Outputs
Create a single schedule within a daily schedule.
Inputs
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
Delete a single schedule within a daily schedule.
Inputs
- daily-schedule-name-or-id => obj-name-or-id
Name or ID of a daily schedule
- item-id => integer
An ID of the daily item within the schedule. Range: [1..(2^31)-1]
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
Modify a single schedule within a daily schedule. Sample schedules cannot be modified.
Inputs
- daily-content => daily-info
Content of the daily schedule.
- daily-schedule-name-or-id => obj-name-or-id
Name or ID of a daily schedule
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
Return a list of other DP policies, report schedules, and schedules using the specified schedule.
Inputs
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
- schedule-name-or-id => obj-name-or-id
Name or identifier of a schedule.
Outputs
- schedule-assignees => schedule-assignee[]
List of other DP policies, report schedules, and schedules using the specified schedule. The list returned depends on the schedule category. For schedule category 'dfm_schedule', only report schedules and the DFM schedules using this schedule are returned. For schedule category 'dp_schedule', only DP policies and the DP schedules using this schedule are returned. The list excludes DP policies, report schedules, or schedules that the caller has no permission to read.
- schedule-in-use => boolean
For schedule category 'dfm_schedule' this is true if it is used by report schedules or DFM schedules. For schedule category 'dp_schedule' this is true if it is used by DP policies or DP schedules.
Delete a schedule with the given name or ID. A schedule that is used by other schedules may not be deleted, and an error will be returned. Sample schedules cannot be destroyed.
Inputs
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
- schedule-name-or-id => obj-name-or-id
Name or ID of the schedule
Outputs
Create an hourly schedule within a daily schedule. An hourly schedule specifies the frequency of schedules to be run between the start time and the end time.
Inputs
- daily-schedule-name-or-id => obj-name-or-id
Name or ID of a daily schedule
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
- item-id => integer
An ID of the hourly item within the schedule
Delete an hourly schedule within a daily schedule.
Inputs
- daily-schedule-name-or-id => obj-name-or-id
Name or ID of a daily schedule
- item-id => integer
An ID of the hourly item within the schedule. Range: [1..(2^31)-1]
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
Modify an hourly schedule within a daily schedule. Sample schedules cannot be modified.
Inputs
- daily-schedule-name-or-id => obj-name-or-id
Name or ID of a daily schedule
- hourly-content => hourly-info
Content of the hourly schedule.
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
List all existing schedule IDs and types.
Inputs
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
Tell the DFM station that the temporary store associated with the specified tag is no longer necessary.
Inputs
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
Iterate over the list of schedules stored in the temporary store. The DFM station internally maintains a cursor pointing to the last record returned. Subsequent calls to this API return the records after the cursor, up to the specified "maximum" or the number of records actually left.
Inputs
- maximum => integer
Maximum number of schedules to retrieve
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
- records => integer
The number of records actually returned.
The dfm-schedule-list-info-iter-* set of APIs is used to retrieve a list of schedule contents.
Inputs
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
- schedule-name-or-id => obj-name-or-id, optional
Name or ID of the schedule. If specified, only this schedule is listed.
Outputs
- records => integer
Number of items saved for future retrieval
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Modify a schedule's details in the database. When this API is called, all details within the schedule are removed and replaced by the new details specified in schedule-content. Sample schedules cannot be modified. schedule-id and schedule-type cannot be modified.
Inputs
Outputs
Specify a single schedule within a monthly schedule. Either day-of-month, or both week-of-month and day-of-week, must be specified.
Inputs
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
- item-id => integer
An ID of the monthly item within the schedule
Delete a single schedule within a monthly schedule.
Inputs
- item-id => integer
An ID of the monthly item within the schedule. Range: [1..(2^31)-1]
- monthly-schedule-name-or-id => obj-name-or-id
Name or ID of a monthly schedule
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
Modify a single schedule within a monthly schedule. Sample schedules cannot be modified.
Inputs
- monthly-content => monthly-info
Content of the monthly schedule.
- monthly-schedule-name-or-id => obj-name-or-id
Name or ID of a monthly schedule
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
Specify a sub-schedule to be used by a monthly schedule. In addition to the individual monthly events, a monthly schedule may have only one daily sub-schedule or one weekly sub-schedule. If the monthly schedule already has a daily or weekly sub-schedule, this command replaces the old one.
Inputs
- monthly-schedule-name-or-id => obj-name-or-id
Name or ID of the monthly schedule
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
Unset a sub-schedule used by a monthly schedule.
Inputs
- monthly-schedule-name-or-id => obj-name-or-id
Name or ID of the monthly schedule
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
Rename a schedule. Sample schedules cannot be renamed.
Inputs
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
- schedule-name-or-id => obj-name-or-id
Name or ID of the schedule
Outputs
Specify a single schedule within a weekly schedule.Inputs
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
Outputs
- item-id => integer
An ID of the weekly item within the schedule
Delete a single schedule within a weekly schedule.Inputs
- item-id => integer
An ID of the weekly item within the schedule. Range: [1..(2^31)-1]
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
- weekly-schedule-name-or-id => obj-name-or-id
Name or ID of a weekly schedule
Outputs
Modify a single schedule within a weekly schedule. Sample schedules cannot be modified.Inputs
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
- weekly-content => weekly-info
Content of the weekly schedule.
- weekly-schedule-name-or-id => obj-name-or-id
Name or ID of a weekly schedule
Outputs
Specify which daily schedule will be used on a certain range of days within a weekly schedule.Inputs
- daily-schedule-name-or-id => obj-name-or-id
Name or ID of a daily schedule
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
- weekly-schedule-name-or-id => obj-name-or-id
Name or ID of a weekly schedule
Outputs
- item-id => integer
An ID of the weekly use item within the schedule. Range: [1..(2^31)-1]
Specify which daily schedule is to be deleted within a weekly schedule.Inputs
- item-id => integer
An ID of the weekly use item within the schedule. Range: [1..(2^31)-1]
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
- weekly-schedule-name-or-id => obj-name-or-id
Name or ID of a weekly schedule
Outputs
Specify which daily schedule will be used on a certain range of days within a weekly schedule. Permanent or sample schedules cannot be modified.Inputs
- daily-schedule-name-or-id => obj-name-or-id
Name or ID of a daily schedule
- end-day-of-week => integer
Day of week for the schedule. Range: [0..6] (0 = "Sun")
- item-id => integer
An ID of the weekly use item within the schedule. Range: [1..(2^31)-1]
- schedule-category => string
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'.
- start-day-of-week => integer
Day of week for the schedule. Range: [0..6] (0 = "Sun")
- weekly-schedule-name-or-id => obj-name-or-id
Name or ID of a weekly schedule
Outputs
The disk-list-info-iter-* set of APIs are used to retrieve the list of disks. disk-list-info-iter-end is used to tell the DFM station that the temporary store used by DFM to support the disk-list-info-iter-next API for the particular tag is no longer necessary.Inputs
- tag => string
An internal opaque handle used by the DFM station
Outputs
For more documentation please check disk-list-info-iter-start. The disk-list-info-iter-next API is used to iterate over the members of the disks stored in the temporary store created by the disk-list-info-iter-start API.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous disk-list-info-iter-start. It's an opaque handle used by the DFM station to identify the temporary store created by disk-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
The disk-list-info-iter-* set of APIs are used to retrieve the list of disks in DFM. disk-list-info-iter-start returns the disk belonging to objects specified. It loads the list of disks into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the disks in the temporary store. If disk-list-info-iter-start is invoked twice, then two distinct temporary stores are created.Inputs
- object-management-filter => object-management-interface, optional
Filter the objects based on the Data ONTAP interface that provides complete management for the object (for example, Data ONTAP CLI, SNMP, or ONTAPI). If no filter is supplied, all objects are considered.
- object-name-or-id => obj-name-or-id, optional
Name or identifier of an object to list disks for. The allowed object types for this argument are:
- Resource Group
- Host
- Aggregate
Disks are not objects in DFM, so a single disk cannot be listed. If object-name-or-id is specified, all disks that belong to the specified object are listed. If no object-name-or-id is provided, all disks are listed.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with disk-list-info-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to disk-list-info-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
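Following the NaServer usage shown earlier, the disk-list-info-iter-* sequence can be sketched in Perl. This assumes the generated bindings accept the API input names and values as key/value pairs and return a hash reference, as in the dfm-about example; adjust to your SDK version if the calling convention differs.

```perl
# Sketch: iterate over all disks via disk-list-info-iter-*. Assumes $s is the
# NaServer DFM handle created earlier.
eval {
    my $start = $s->disk_list_info_iter_start();  # create the temporary store
    my $tag   = $start->{tag};                    # opaque handle for this store
    my $left  = $start->{records};                # total items saved in the store
    while ($left > 0) {
        my $max  = $left < 50 ? $left : 50;       # fetch at most 50 per call
        my $next = $s->disk_list_info_iter_next('tag', $tag, 'maximum', $max);
        # ... process the disk entries returned in $next here ...
        $left -= $next->{records};
    }
    $s->disk_list_info_iter_end('tag', $tag);     # release the temporary store
};
print "Error: $@\n" if $@;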
Ends iteration to list file contents of a backup.Inputs
- tag => string
Tag from a previous dp-backup-content-list-iter-start.
Outputs
Get next few records in the iteration started by dp-backup-content-list-iter-start.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1].
- tag => string
Tag from a previous dp-backup-content-list-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
Starts iteration to list contents of a backup. Files and directories directly under specified path are listed. In order to list all files recursively, multiple invocations of this API are necessary.Inputs
- backup-id => integer
Id of backup instance whose file contents are to be listed. Use dp-backup-list-iter-start to retrieve valid backup ids. Range: [1..2^31-1]
- primary-snapshot-name => string, optional
Name of the Snapshot copy on the primary where the data originated. This field is ignored if the root-object-name-or-id is omitted. It is also ignored, if primary-snapshot-unique-id is specified.
- primary-snapshot-unique-id => string, optional
Unique id of the Snapshot copy on the primary where the data originated. Currently, this is the Snapshot copy's creation time. This field is ignored if the root-object-name-or-id is omitted.
- root-object-name-or-id => string, optional
Name or id of a dataset member that is at the root of the file tree. This may be a qtree, ossv directory, or volume that is the source of the physical data protection relationship. Indirect members may also be specified. If the sub-path input is given, then this input is required. If specified, backup contents of this object are listed. Otherwise, backup contents directly under the dataset are listed.
- sub-path => string, optional
Local subpath within the dataset with regard to root-object-name-or-id. If specified, backup contents directly under this sub-path are listed. Otherwise, backup contents directly under the specified root object are listed. Ignored if the root-object-name-or-id input is not present. If the root-object-name-or-id is a volume, qtree, or ossv unix directory, then this sub-path is a unix-like path, for example 'dir1/dir2/'. If the root-object-name-or-id is an ossv windows directory, then this sub-path is a windows path, for example 'dir1\\dir2\\'. The maximum length of sub-path is 32767 characters.
- suppress-child-count => boolean, optional
This flag is set to FALSE by default. When TRUE, the child count for a file is not calculated and a zero is returned in its place in the file_info structure.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with dp-backup-content-list-iter-next. Range: [0..2^31-1]
- tag => string
Tag to be used in subsequent calls to dp-backup-content-list-iter-next.
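The dp-backup-content-list-iter-* sequence follows the same pattern. A minimal sketch, again assuming the key/value calling convention of the generated bindings; the backup-id value 42 is illustrative (real ids come from dp-backup-list-iter-start):

```perl
# Sketch: list the files and directories directly under the root of one backup.
eval {
    my $start = $s->dp_backup_content_list_iter_start('backup-id', 42);
    if ($start->{records} > 0) {
        my $next = $s->dp_backup_content_list_iter_next(
            'tag', $start->{tag}, 'maximum', $start->{records});
        # ... each returned entry describes one file or directory ...
    }
    $s->dp_backup_content_list_iter_end('tag', $start->{tag});
};
print "Error: $@\n" if $@;
```

To walk the tree recursively, repeat the call with root-object-name-or-id and sub-path set to each directory found.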
Given a backup-id, or the backup-version and a list of member-ids or member-names, returns the snapshot name, the volume it exists on, and the secondary qtree associated with the member id. If more than one snapshot matches the backup-id or backup-version, only one, the first item in the resulting list, is returned. Multiple matches occur when the same backup version exists on multiple nodes. In the case of multiple matches, which snapshot gets picked is unspecified.Inputs
- dataset-name-or-id => obj-name-or-id, optional
Name or ID of the dataset to which this backup version belongs to. This parameter is ignored in case of backup-id parameter in the input.
Outputs
Ends iteration to list backups.Inputs
- tag => string
Tag from a previous dp-backup-list-iter-start.
Outputs
Get next few records in the iteration started by dp-backup-list-iter-start.Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
Tag from a previous dp-backup-list-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1]
Starts iteration to list backups of a dataset or a path within dataset.Inputs
- application-resource-namespace => application-resource-namespace, optional
Name of the application resource namespace. If has-application-resources-only is true, then (has-application-resources-only, application-resource-namespace) is used to load backups specific to a resource namespace. Ignored if has-application-resources-only is false. If not specified when has-application-resources-only is true, all backups containing application resources are returned.
- backup-id => integer, optional
Id of backup instance whose information is to be listed. If this parameter is specified backup-version, volume-id, snapshot-unique-id, root-object-name-or-id, sub-path, application-resource-namespace and search-keys parameters are ignored. Range: [1..2^31-1]
- backup-version => dp-timestamp, optional
Timestamp when the backup was taken. Backups of the dataset that were taken at this time are listed. If this parameter is specified, volume-id, snapshot-unique-id, root-object-name-or-id, sub-path, application-resource-namespace and search-keys are ignored.
- dataset-name-or-id => obj-name-or-id, optional
Name or id of dataset. Backups for this dataset are listed. If this is not given, then all backups for all datasets are returned, and the backup-id, backup-version, and volume-id fields are ignored.
- include-is-available => boolean, optional
If true, the is-available status is calculated for each member which may make the call to this zapi take much longer. Default is false.
- include-metadata => boolean, optional
If true, returns metadata for the backup. If false, metadata, which can be large in size, is not returned. Default is false. Ignored if include-metadata-host-service is specified.
- is-mounted => boolean, optional
If true, return backups that are either mounted, being mounted or being unmounted. If false, this input will not be used as filtering criterion while listing backups. Ignored if either backup-id, backup-version, or search-keys are specified. Default is false.
- is-root-object-restorable => boolean, optional
If root-object-name-or-id is given, and this is TRUE, only the backups from which the root-object is restorable will be returned. Default is FALSE. This is ignored if root-object-name-or-id is not given.
- root-object-name-or-id => obj-name-or-id, optional
Name or id of a dataset member. This may either be an application object such as a VM or a datastore, or it can be a qtree, ossv directory or volume. If the sub-path input is given, then this input is required. However, the sub-path input cannot be specified if this input represents an application object. If specified, backups containing this object are listed. Otherwise, all backups of this dataset are listed. If dataset-name-or-id is not specified, all backups containing this root object across all datasets are listed.
- search-keys => search-key[], optional
Keys to search the backups by. If any of the given search keys matches a backup, the backup is returned. If backup-id, backup-version, volume-id, snapshot-unique-id, root-object-name-or-id, or sub-path is specified, this field is ignored. If dataset-name-or-id is not specified, backups are searched across all the datasets; otherwise the search is limited to the specified dataset.
- sub-path => string, optional
Local subpath within the dataset with regard to root-object-name-or-id. If specified, backups containing this sub-path are listed. Ignored if the root-object-name-or-id input is not present or if root-object-name-or-id represents an application object such as a VM or a datastore. If the root-object-name-or-id is a volume, qtree, or ossv unix directory, then this sub-path is a unix-like path, for example 'dir1/dir2/'. If the root-object-name-or-id is an ossv windows directory, then this sub-path is a windows path, for example 'dir1\dir2\'. The maximum length of sub-path is 32767 characters.
- volume-id => obj-id, optional
Id of volume that is member of a dataset node. If this input is present then snapshot-unique-id must be present. backup-version containing this (volume-id, snapshot-unique-id) is returned. If this parameter is specified, root-object-name-or-id, sub-path, application-resource-namespace and search-keys are ignored.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with dp-backup-list-iter-next. Range: [0..2^31-1]
- tag => string
Tag to be used in subsequent calls to dp-backup-list-iter-next.
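Putting the dp-backup-list-iter-* calls together, a dataset's backups can be enumerated as sketched below. The dataset name 'dataset1' is illustrative, and the key/value binding convention is assumed as in the earlier examples:

```perl
# Sketch: list all backups of one dataset, including backup metadata.
eval {
    my $start = $s->dp_backup_list_iter_start(
        'dataset-name-or-id', 'dataset1', 'include-metadata', 'true');
    if ($start->{records} > 0) {
        my $next = $s->dp_backup_list_iter_next(
            'tag', $start->{tag}, 'maximum', $start->{records});
        # ... inspect the returned backup records ...
    }
    $s->dp_backup_list_iter_end('tag', $start->{tag});
};
print "Error: $@\n" if $@;
```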
Mount a backup containing virtual infrastructure objects such as VMs or datastores. A background job will be started for this operation.Inputs
- backup-id => integer
Identifier of the backup that is to be mounted. Range: [1..2^31-1]
- host-name-or-id => obj-name-or-id
Name of the host on which the backup is mounted.
Outputs
- job-id => integer
Id of the job that handles the mounting operation. All host services progress/status information will be logged in this job. Range: [1..2^31-1]
- mount-id => integer
Id of the mount session. This can be used to unmount the backup. Range: [1..2^31-1]
Start an unscheduled backup of a dataset. All members or a subset of the dataset will be backed up. A background job will be spawned to backup the dataset.Inputs
- backup-description => string, optional
Description for the backup. Can be arbitrary string meaningful to the user. If provided, the job spawned will have this description. The length of this string cannot be more than 255 characters.
- backup-destination => string, optional
This field is deprecated in favor of backup-destination-name-or-id. Destination node name of the backup. This legacy parameter is ignored if backup-destination-name-or-id is supplied. Please see input backup-destination-name-or-id for the description of how this node name will be used.
- backup-destination-name-or-id => string, optional
Destination of the backup. For a dataset with an associated data protection policy, this may be the name or id of one of the nodes in the data protection policy associated with the dataset directly or through the storage service.
If the name or id of the primary node is specified, the management station takes local snapshot only. Otherwise, the management station transfers the backup over intervening policy nodes until it reaches the destination with this node name or id.
If no node name or id is provided, and there is an associated data protection policy for the dataset, the management station takes local snapshots on the primary node as well as backups on all connections in the policy.
A value of 1 is used for the root node of the dataset.
If both backup-destination and backup-destination-name-or-id are specified, then backup-destination-name-or-id will take precedence over backup-destination.
- backup-script => string, optional
Name of the script to invoke on the Host Services station both before and after the backup. This script is only used for datasets containing application objects such as Hyper-V VMs or VMware VMs and if supplied on other datasets will generate an error of EUNUSEDAPPLICATIONOBJECTSETTING. The backup-script consists of 0 to 255 characters. An empty string value "" indicates no script is invoked. The default value of this property is the empty string "".
The system does not check whether a non-empty path string actually refers to an executable script prior to attempting to run the script. For example, possible values are %env%\scripts\backup.ps1 OR c:\program..\HS\scripts\backup.ps1 OR k:\program..\HS\scripts\backup.ps1 [k is a network share] OR UNC path \\SCRIPTSSVR\SHARE\SCRIPTS\BACKUP.PS
- dataset-name-or-id => obj-name-or-id
Name or id of the dataset to backup.
- retention-duration => integer, optional
The age, in seconds, after which this backup expires. When this input is not present for datasets that are associated with a data protection policy, the retention-duration for this one backup will be determined by the data protection policy.
When retention-type is omitted, the value of retention-duration determines the retention-type as shown below.
- 'hourly' if less than 24 hours
- 'daily' if 1 day up to and not including 1 week
- 'weekly' if 1 week up to and not including 31 days
- 'monthly' if 31 days or longer.
At least one of retention-type or retention-duration is required unless the dataset is an application dataset with is-application-responsible-for-primary-backup set to true. In the latter case, both can be omitted.
If present, it should be a non-zero value. Range: [1..2^31 - 1].
- retention-type => dp-backup-retention-type, optional
Retention type to which the backup version should be archived. At least one of retention-type or retention-duration is required unless the dataset is an application dataset with is-application-responsible-for-primary-backup set to true. In the latter case, both can be omitted.
If this input is specified, the specified retention type will be assigned to the backups created by this operation. This applies to all types of datasets including the application datasets with is-application-responsible-for-primary-backup set to true.
If this input is omitted for an application dataset and if is-application-responsible-for-primary-backup is true, the retention type of the primary backup is assigned to the replicated backups.
If this input is omitted in any of the other cases, the retention type is determined from retention-duration as described in the retention-duration definition.
Outputs
- job-id => integer
Id of the job that handles the unscheduled backup of the dataset. dp-job-progress-event-list-iter-* apis can be used to track the progress of the backup job.
Range: [1..2^31-1]
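An on-demand backup can be started and its job id captured as sketched below. The binding name dp_backup_start is an assumption inferred from the dp-backup-* naming used in this section; 'dataset1' and the retention value are illustrative.

```perl
# Sketch: start an unscheduled backup of a dataset and capture the job id.
eval {
    my $out = $s->dp_backup_start(
        'dataset-name-or-id', 'dataset1',
        'retention-duration', 86400);           # 1 day, so retention-type 'daily'
    # Track progress with the dp-job-progress-event-list-iter-* APIs.
    print "Backup job id: $out->{'job-id'}\n";
};
print "Error: $@\n" if $@;
```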
Unmount a mounted backup. A background job is started for this operation.Inputs
- mount-id => integer
Id of the backup mount session. Range: [1..2^31-1]
Outputs
- job-id => integer
Id of the job that handles the backup unmounting operation. All the progress/status information will be logged in this job. Range: [1..2^31-1]
Creates a backup version. Backup version is collection of volume snapshots and denotes a single backed up image of a dataset. The management station keeps track of actual volumes that hold the dataset backup using backup versions.Inputs
- backup-description => string, optional
Description for backup version. Can be arbitrary string meaningful to the user. If provided, the new version will have this description. The length of this string cannot be more than 255 characters.
- dataset-name-or-id => obj-name-or-id, optional
Name or identifier of the dataset for the backup version.
- is-adding-members => boolean, optional
If this input is true, Protection Manager expects more members to be added to the version-members element of this backup. The job that is transferring this backup periodically checks to see if the new members have been added and starts transferring them. The transfer job does not exit until this input is set to false by calling the dp-backup-version-modify API (or until the timeout occurs). If this input is false, the job exits after transferring any members that it could find in this backup. Any new members that got added to the backup will be transferred by the next job.
This input can be used when creating a local backup potentially takes a very long time and you want the Protection Manager to start the transfers without waiting for the local backup to complete.
The default value is false.
- is-for-propagation => boolean, optional
Indicates whether or not this backup version should be propagated according to the data protection policy. If false, the backup version will not be propagated to other nodes. Once a backup version is created, this property can not be modified. Default value is true.
- is-native => boolean, optional
Indicates whether or not this backup version would have been created on the same storage set if dataset had not failed over. If is-native is true, this backup version is retained according to retention settings on the node on which the backup version exists.
If is-native is false, this backup version is retained according to retention settings on the DR primary node.
Default value is true.
When dataset has failed over to the DR secondary node, local backups are created on the DR secondary node. But these backups are retained using retention settings on the DR primary node. Server uses is-native in conjunction with retention-type to determine which retention settings to use.
Examples:
- If is-native is true and retention-type is hourly, hourly-retention-duration and hourly-retention-count on the same node, where the backup exists, are used.
- If is-native is false and retention-type is daily, daily-retention-count and daily-retention-duration on the DR primary node are used.
- job-id => integer, optional
Id of the data protection job that is invoking this API. This parameter is intended for internal use only.
- retention-type => string, optional
Retention type that will be applied to the backup being created on the requested node. If it is not provided, the retention-type for the backup being created on the requested node will be set to 'unlimited'. Possible values: 'hourly','daily', 'weekly', 'monthly' and 'unlimited'.
- storageset-id => obj-id, optional
Identifier of storage set on which this backup version exists. If the storage set is not specified, then the default is the primary storage set.
- version-timestamp => dp-timestamp
Timestamp of the backup version. This corresponds to the time at which backup for the dataset started. This timestamp will be used as the identifier of this backup version and will be accepted and returned in the "backup-version" element of various APIs.
Outputs
- backup-id => integer
Integer identifier of the new backup instance. Range: [1..2^31-1] This is different from backup-version.
Delete an existing backup version from the database and delete the snapshots from the storage systems for this backup version. The backup version must currently exist and must not be mounted. This API does not provide transaction semantics. When API returns an error, the backup version may have been partially deleted.
Error conditions: - EBACKUPDOESNOTEXIST - The backup version was not found.
- EBACKUPBUSY - The backup version is busy. This can happen if one of the Snapshots in the backup is needed for future transfers or if transfer from the backup is in progress.
- EBACKUP_ALREADY_MOUNTED - The backup version is either being mounted or has already been successfully mounted.
- EINVALIDINPUT - Not enough inputs to determine the version properly. This can happen if fewer than three of dataset, backup-version, and node-name are used in the case where backup-id is not supplied.
- EACCESSDENIED - Access was denied on the requested backup version.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EOBJECTNOTFOUND - Dataset not found.
Inputs
- allow-deferred-delete => boolean, optional
If this input is true and if the backup cannot be deleted immediately, it will be marked for deletion at a later time and this API will return success. The backup will be deleted when the conditions that were preventing the backup from deletion no longer exist. For example, if the backup contains Snapshot copies needed by SnapVault or SnapMirror for future transfers, the backup will be deleted when those Snapshot copies are no longer needed. If this input is true and if the backup is mounted, it will not be marked for deletion and an error EBACKUP_ALREADY_MOUNTED will be returned. The backups which are marked for deletion will not be returned via any of the backup listing APIs.
If this input is false and if the backup cannot be deleted immediately, an error will be returned.
Default is false.
- backup-id => integer, optional
Identifier of the backup instance to be deleted. Range: [1..2^31-1] This input is required unless both dataset-name-or-id and backup-version inputs are supplied.
- backup-version => dp-timestamp, optional
Timestamp when the backup was taken. This input is required if backup-id is not present. This input should be omitted and is ignored if backup-id is present.
- dataset-name-or-id => obj-name-or-id, optional
Name or identifier of the dataset for the backup version to be deleted. This input is required if backup-id is not present. This input should be omitted and is ignored if backup-id is present.
- delete-multiple-backups => boolean, optional
This input is ignored if backup-id input is present or if all three of the dataset-name-or-id, backup-version and node-name are present since in both cases a single backup is identified for deletion. When both backup-id and node-name are omitted and both dataset-name-or-id and backup-version are supplied, multiple backups may produce a match. In that case, this input controls whether multiple backups that match the specified dataset-name-or-id and backup-version inputs will be deleted.
If this input is true, all the matching backups will be deleted. The new callers of this API who omit both node-name and backup-id inputs are expected to use this value.
If this input is false only a single backup among several matches will be deleted. API does not specify which particular backup will be picked for deletion. This behavior exists for compatibility reasons only. New callers of this API should not specify 'false' value for this input.
Default is false.
- node-name => string, optional
Name of policy node that uniquely defines the backup-id for backup version to be deleted. This input should be omitted and is ignored if backup-id is present.
Outputs
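A deletion with error handling in the style of the dfm-about example can be sketched as follows. The binding name dp_backup_version_delete is an assumption inferred from the surrounding dp-backup-version-* APIs; backup-id 42 is illustrative.

```perl
# Sketch: delete a backup version by id, deferring deletion if it is busy.
eval {
    $s->dp_backup_version_delete('backup-id', 42,
                                 'allow-deferred-delete', 'true');
};
if ($@) {                                           # e.g. EBACKUPBUSY cases
    my ($reason, $code) = $@ =~ /(.+)\s\((\d+)\)/;  # parse reason and code
    print "Delete failed: $reason (code $code)\n";
}
```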
Ends iteration to list backup versions.Inputs
- tag => string
Tag from a previous dp-backup-version-list-iter-start.
Outputs
Get next few records in the iteration started by dp-backup-version-list-iter-start.Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
Tag from a previous dp-backup-version-list-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1]
Starts iteration to list all backup versions for a given dataset. Information returned includes the IDs of each instance, the propagation status for each version, and job Id responsible for the backup. Clients should use this API if they want a list of backup versions rather than backup instances.Inputs
- backup-version => dp-timestamp, optional
Timestamp when the backup was taken. Backups of the dataset that were taken at this time are listed. If this parameter is specified, volume-id, snapshot-unique-id, root-object-name-or-id and sub-path are ignored.
- dataset-name-or-id => obj-name-or-id
Name or id of dataset. Backups for this dataset are listed.
- include-is-available => boolean, optional
If true, the is-available status is calculated for each member which may make the call to this zapi take much longer. Default is false.
- include-metadata => boolean, optional
If true, returns metadata for the backup. If false, metadata, which can be large in size, is not returned. Default is false.
- root-object-name-or-id => obj-name-or-id, optional
Name or id of a dataset member that is at the root of the file tree. This may be a qtree, ossv directory or volume that is the source of the physical data protection relationship. If sub path input is given then this input is required. If specified, only backups containing this object are listed. Otherwise, all backups of this dataset are listed.
- snapshot-unique-id => string, optional
Unique id of a snapshot. This is currently the snapshot create time. If volume-id is present, then this input must be given. Ignored if volume-id is not present.
- sub-path => string, optional
Local subpath within the dataset with regard to root-object-name-or-id. If specified, backups containing this sub-path are listed. Ignored if the root-object-name-or-id input is not present. If the root-object-name-or-id is a volume, qtree, or ossv unix directory, then this sub-path is a unix-like path, for example 'dir1/dir2/'. If the root-object-name-or-id is an ossv windows directory, then this sub-path is a windows path, for example 'dir1\dir2\'. The maximum length of sub-path is 32767 characters.
- volume-id => obj-id, optional
Id of volume that is a member of a dataset node. If this input is present then snapshot-unique-id must be present. backup-version containing this (volume-id, snapshot-unique-id) is returned. If this parameter is specified, root-object-name-or-id, sub-path are ignored.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with dp-backup-version-list-iter-next. Range: [0..2^31-1]
- tag => string
Tag to be used in subsequent calls to dp-backup-version-list-iter-next.
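The dp-backup-version-list-iter-* sequence mirrors the backup listing above. A minimal sketch, with 'dataset1' as an illustrative dataset name and the same assumed binding convention:

```perl
# Sketch: list all backup versions of a dataset, then release the iterator.
eval {
    my $start = $s->dp_backup_version_list_iter_start(
        'dataset-name-or-id', 'dataset1');
    if ($start->{records} > 0) {
        my $next = $s->dp_backup_version_list_iter_next(
            'tag', $start->{tag}, 'maximum', $start->{records});
        # ... each record describes one backup version ...
    }
    $s->dp_backup_version_list_iter_end('tag', $start->{tag});
};
print "Error: $@\n" if $@;
```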
Modifies a backup version. Either the backup-id or some combination of the dataset-name-or-id, node-name-or-id and backup-version are used to specify an individual backup instance or a set of backup instances which represent the same backup version. Certain inputs, such as is-adding-members, dp-backup-transfer-info and version-members, can only be applied to a single backup instance. When a backup instance is transferred from one node to another, the attributes of the backup instance, backup-description and backup-metadata, are copied at the beginning of the transfer. Any changes to these fields made after a backup instance has been copied will not be propagated automatically. Specify only the dataset-name-or-id and backup-version to update the fields of all the backup instances with the same backup-version.Inputs
- backup-description => string, optional
Description for backup version. Can be arbitrary string meaningful to the user. The length of this string cannot be more than 255 characters.
- backup-id => integer, optional
Identifier of the backup instance to be modified. Range: [1..2^31-1] Either the backup-id or both the dataset-name-or-id and backup-version must be specified.
- backup-metadata => dfm-metadata[], optional
Opaque metadata for this backup. Metadata is usually set and interpreted by an application that is using the dataset. DFM does not look into the contents of the metadata.
- backup-version => dp-timestamp, optional
Timestamp when the backup was taken. Backups of same dataset at different locations have same version if their contents are identical. The management station keeps track of which backups have identical contents and assigns same version to them. Either the backup-id or both the dataset-name-or-id and backup-version must be specified. Ignored if backup-id is specified.
- dataset-name-or-id => obj-name-or-id, optional
Name or identifier of the dataset for the backup version. Either the backup-id or both the dataset-name-or-id and backup-version must be specified. Ignored if backup-id is specified.
- dp-backup-transfer-info => dp-backup-transfer-info, optional
The transfer status of the backup to a destination node. If this input is specified, either the backup-id or all three of the dataset-name-or-id, node-name-or-id, and backup-version must be specified. This input is for the internal use of the Protection Manager. Protection Manager job can modify backup-transfer-status, backup-transfer-needed or job-id elements of dp-backup-transfer-info.
- is-adding-members => boolean, optional
Indicates whether more members are being added to the version-members element of this backup. If this input is true, Protection Manager expects more members to be added to the version-members element of this backup. The job that is transferring this backup periodically checks to see whether new members have been added and starts transferring them. The transfer job does not exit until this input is set to false by calling the dp-backup-version-modify API (or until a timeout occurs).
If this input is false, the job exits after transferring any members that it could find in this backup. Any new members that got added to the backup will be transferred by the next job.
This input can be used when creating a local backup potentially takes a very long time and you want the Protection Manager to start the transfers without waiting for the local backup to complete.
- is-complete => boolean, optional
Indicates whether or not this backup version is a consistent backup of the dataset. If not specified, the value of the attribute remains unchanged.
- node-name-or-id => string, optional
Name or ID of the policy node that, together with dataset-name-or-id and backup-version, uniquely identifies the backup instance to be modified. If the version-members or dp-backup-transfer-info input is specified, either the backup-id or all three of the dataset-name-or-id, node-name-or-id, and backup-version must be specified.
If the version-members and dp-backup-transfer-info inputs are not specified, omitting the node-name-or-id and backup-id inputs is acceptable as long as both dataset-name-or-id and backup-version are specified. In that case, the setting change is applied to all backups with the specified backup-version.
If node-name-or-id is 1, it is interpreted as the root node. This is true even when there is no data protection policy attached to the dataset.
This input should be omitted and is ignored if backup-id is present.
- retention-type => string, optional
Retention type of this version. Possible values: 'hourly', 'daily', 'weekly', 'monthly', and 'unlimited'.
- version-members => version-member-info[], optional
Snapshots to add to the backup. If a snapshot is already a member of the backup, it is left unchanged. Existing members of the backup are unaffected. If this input is specified, either the backup-id or all three of the dataset-name-or-id, node-name-or-id, and backup-version must be specified.
Outputs
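Following the pattern of the dfm-about example above, a call to this binding might look like the following minimal sketch. It assumes $s is an NaServer handle already connected to the DFM server; the dataset name and version timestamp are placeholders.

```perl
# Sketch only: update the description of every backup instance that
# shares one backup version. 'ds1' and the dp-timestamp are placeholders.
eval {
    $s->dp_backup_version_modify(
        'dataset-name-or-id', 'ds1',
        'backup-version',     '1146646800',
        'backup-description', 'Quarterly archive, verified'
    );
};
if ($@) {                       # check for any exception
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```

Because only dataset-name-or-id and backup-version are supplied (no backup-id or node-name-or-id), the new description is applied to all backup instances with that version, as described above.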
Returns a set of jobs to be spawned to back up the dataset. This ZAPI is used to obtain data that is later used to start an on-demand backup of a dataset.
Inputs
- backup-destination => string, optional
Destination of the dataset backup. This should be the name of one of the nodes in the data protection policy attached to the dataset. If the primary node is specified, only a local snapshot is taken. Otherwise, the backup is transferred over intervening policy nodes until it reaches the specified node. If this element is not present, a local snapshot is taken on the primary node and backup and mirror transfers are performed on all connections of the policy.
- dataset-name-or-id => obj-name-or-id
Name or id of the dataset to be backed up.
- retention-type => dp-backup-retention-type, optional
Retention type to which the backup version should be archived. This input may be required depending on the type of the dataset. See retention-type field in dp-backup-start for more information.
Outputs
Returns the number of disaster-recovery-enabled datasets in all dr-state and dr-status combinations. The number of datasets in each distinct dr-state/dr-status combination is returned.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EDATABASEERROR - A database-related error occurred while processing the request. Try again later.
Inputs
Outputs
Get a list of the most lagged datasets. This API returns a list of some or all datasets, sorted by lag time. Only the dataset name, ID, and worst relationship lag time are returned.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EDATABASEERROR - A database-related error occurred while processing the request. Try again later.
Inputs
- group-name-or-id => group-name-or-id, optional
Name or identifier of a group. Only relationships inside the specified group will be returned. If no value is specified, then relationships are returned regardless of group.
Outputs
Get a list of the most lagged relationships. This API returns a list of some or all relationships, sorted by lag time. Only the relationship name, ID, and lag time are returned.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EDATABASEERROR - A database-related error occurred while processing the request. Try again later.
Inputs
- group-name-or-id => group-name-or-id, optional
Name or identifier of a group. Only relationships inside the specified group will be returned. If no value is specified, then relationships are returned regardless of group.
Outputs
This ZAPI call has been deprecated in the Juno (4.0) release, because the definitions for "protected" and "unprotected" objects changed. Please use dp-dashboard-get-protected-data-counts-2 instead. Get counts for certain types of objects for displaying on the Protection Manager Dashboard. The types of objects are: datasets, volumes, qtrees, and OSSV directories. For each object type, the number of protected objects, unprotected objects, and ignored objects is returned. An object is considered to be protected if it is a member of a dataset, and a dataset is considered to be protected if it has a protection policy. Objects are unprotected if they are neither protected nor ignored.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EDATABASEERROR - A database-related error occurred while processing the request. Try again later.
Inputs
- group-name-or-id => group-name-or-id, optional
Name or identifier of a group. Only objects inside the specified group will be counted. If no value is specified, then all objects are counted, regardless of group.
Outputs
Get counts for certain types of objects for displaying on the Protection Manager Dashboard. The types of objects are: datasets, volumes, qtrees, OSSV directories, and application resources. For each object type, the numbers of protected, unprotected, and ignored objects are returned.
An object is considered to be protected if it is: - 1. A dataset with a single-node protection policy attached to it; a dataset with no protection policy attached but with an application policy attached to it; or a dataset that has a multi-node protection policy attached to it and has conformed at least once.
- 2. Any volume or qtree in a Protection Manager managed relationship.
- 3. Any host or aggregate in a dataset with a protection policy assigned.
- 4. Any host in any resource pool (it does not have to be associated with any dataset).
- 5. Any aggregate that is either in a resource pool itself, or is a child of a host that is a member of a resource pool.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EDATABASEERROR - A database-related error occurred while processing the request. Try again later.
Inputs
- group-name-or-id => group-name-or-id, optional
Name or identifier of a group. Only objects inside the specified group will be counted. If no value is specified, then all objects are counted, regardless of group.
Outputs
- ignored-dataset-count => integer
Number of ignored datasets.
Range: [0..2^31-1]
- ignored-ossv-directory-count => integer
Number of ignored OSSV directories.
Range: [0..2^31-1]
- ignored-qtree-count => integer
Number of ignored qtrees.
Range: [0..2^31-1]
- ignored-volume-count => integer
Number of ignored volumes.
Range: [0..2^31-1]
- protected-dataset-count => integer
Number of protected datasets.
Range: [0..2^31-1]
- protected-ossv-directory-count => integer
Number of protected OSSV directories.
Range: [0..2^31-1]
- protected-qtree-count => integer
Number of protected qtrees.
Range: [0..2^31-1]
- protected-volume-count => integer
Number of protected volumes.
Range: [0..2^31-1]
- unprotected-dataset-count => integer
Number of unprotected datasets.
Range: [0..2^31-1]
- unprotected-qtree-count => integer
Number of unprotected qtrees.
Range: [0..2^31-1]
- unprotected-volume-count => integer
Number of unprotected volumes.
Range: [0..2^31-1]
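The output elements above can be read directly from the returned hash, as in this sketch (assumes a connected NaServer handle $s, as in the dfm-about example):

```perl
# Sketch only: fetch dashboard protection counts with no group filter.
eval {
    my $out = $s->dp_dashboard_get_protected_data_counts_2();
    print "Protected volumes:   $out->{'protected-volume-count'}\n";
    print "Unprotected volumes: $out->{'unprotected-volume-count'}\n";
    print "Ignored volumes:     $out->{'ignored-volume-count'}\n";
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```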
Abort a job. A request is sent to abort the job. The job goes into an aborting state and is aborted after some time.
Inputs
- job-id => integer
Identifier of job to abort. Range: [1..2^31-1].
Outputs
Purge a specified job from the database. A running job cannot be purged. When a job is purged from the database, all of its information is lost.
Inputs
- job-id => integer
Identifier of the job to purge. Range: [1..2^31-1].
Outputs
Ends iteration to list jobs.
Inputs
- tag => string
Tag from a previous dp-job-list-iter-start.
Outputs
Get the next few records in the iteration started by dp-job-list-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1].
- tag => string
Tag from a previous dp-job-list-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1].
Starts iteration to list jobs. Jobs that match all the specified filters are returned.
Inputs
- job-context-obj-name-or-id => string, optional
Name or identifier of the job context object. The object can be of type host, aggregate, or dataset only. If specified, only those jobs that ran in the context of the specified object are returned. If not specified, all jobs are listed. This field is ignored if either object-name-or-id or job-id is specified.
- job-id => integer, optional
Identifier of a job to list. If unspecified, all jobs are listed. Range: [1..2^31-1].
- job-type => dp-job-type, optional
If specified, only jobs of specified type are listed. This field is deprecated in favor of job-types. Only one of job-type or job-types can be specified.
- max-jobs => integer, optional
If specified, this is the maximum number of jobs that the client wishes to receive at once. If set to zero, return all jobs. The default value of this parameter is 50,000. Range: [0..2^31-1]
- object-name-or-id => obj-name-or-id, optional
Name or ID of a dataset, resource group, or vFiler unit. For datasets and vFiler units, jobs carried out on them are listed. For resource groups, jobs carried out on datasets or vFiler units that are members of the resource group are listed.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with dp-job-list-iter-next. Range: [1..2^31-1].
- tag => string
Tag to be used in subsequent calls to dp-job-list-iter-next.
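The start/next/end trio above follows the same iterator pattern as the aggregate-list-info-iter-* APIs. A minimal sketch of the full loop (assumes a connected NaServer handle $s; the name of the job-list element in each -next reply is not shown in this excerpt, so the raw reply is dumped):

```perl
use Data::Dumper;

# Sketch only: page through all jobs 20 at a time, then release the
# server-side temporary store with the -end call.
eval {
    my $start = $s->dp_job_list_iter_start();
    my ($tag, $remaining) = ($start->{tag}, $start->{records});
    while ($remaining > 0) {
        my $chunk = $s->dp_job_list_iter_next('tag', $tag, 'maximum', 20);
        last if $chunk->{records} == 0;
        print Dumper($chunk);              # inspect the returned job records
        $remaining -= $chunk->{records};
    }
    $s->dp_job_list_iter_end('tag', $tag); # tell the server the tag is done
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```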
Ends iteration to list progress of a job.
Inputs
- tag => string
Tag from a previous dp-job-progress-event-list-iter-start.
Outputs
Get the next few records in the iteration started by dp-job-progress-event-list-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1].
- tag => string
Tag from a previous dp-job-progress-event-list-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1].
Starts iteration to list job progress events. The event can be one of the following types: - 'job-start'
- 'job-progress'
- 'job-abort'
- 'job-end'
- 'job-retry'
- 'rel-create-start'
- 'rel-create-progress'
- 'rel-create-end'
- 'rel-destroy-start'
- 'rel-destroy-progress'
- 'rel-destroy-end'
- 'snapshot-create'
- 'snapshot-delete'
- 'backup-create'
- 'backup-delete'
- 'snapvault-start'
- 'snapvault-progress'
- 'snapvault-end'
- 'snapmirror-start'
- 'snapmirror-progress'
- 'snapmirror-end'
- 'restore-start'
- 'restore-progress'
- 'restore-end'
- 'migrate-start'
- 'migrate-progress'
- 'migrate-end'
- 'mirror-break-script-start'
- 'mirror-break-script-end'
- 'mirror-break-quiesce-start'
- 'mirror-break-quiesce-end'
- 'mirror-break-start'
- 'mirror-break-end'
- 'volume-create'
- 'volume-resize'
- 'volume-option-set'
- 'snapshot-reserve-resize'
- 'volume-autosize'
- 'snapshot-autodelete'
- 'lun-create'
- 'lun-destroy'
- 'lun-map'
- 'lun-unmap'
- 'igroup-create'
- 'igroup-destroy'
- 'igroup-add'
- 'igroup-remove'
- 'qtree-create'
- 'quota-set'
- 'nfsexport-create'
- 'cifs-share-create'
- 'cifs-share-modify'
- 'cifs-share-delete'
- 'volume-destroy'
- 'qtree-destroy'
- 'vfiler-storage-add'
- 'script-run'
- 'volume-dedupe'
- 'volume-dedupe-enable'
- 'volume-dedupe-schedule-set'
- 'vfiler-create'
- 'vfiler-setup'
- 'aggregate-space'
- 'storage-system-discover'
- 'resource-pool-create'
- 'resource-pool-members-add'
- 'storage-service-configure'
- 'host-service-operation-start'
- 'host-service-operation-end'
- 'host-service-operation-abort'
- 'host-service-operation-progress'
Inputs
- connection-id => integer, optional
Identifier of the connection in the policy being used by the job. If not specified, then progress events for all jobs are returned. Range: [1..2^31-1].
- dataset-name-or-id => string, optional
Identifier of the dataset to list progress events for. If not specified, progress events for all jobs are returned. Only one of dataset-name-or-id or job-id may be supplied, but not both. This field is deprecated in favor of object-name-or-id.
- history => boolean, optional
If FALSE, only the most recent progress events for the job are returned. The most recent progress events define the current progress of the job. For example, a backup job generates a job-progress event that says "Retrieving preferred interfaces"; when the interfaces are retrieved, it generates the event "Retrieved preferred interfaces". The earlier event then moves to history and the latter becomes the current event. If TRUE, all the events are returned. The default value is FALSE.
- job-id => integer, optional
Identifier of the job to list progress events for. Only dataset-name-or-id or job-id may be supplied, but not both. If neither is supplied, all progress events are returned. Range: [1..2^31-1].
- job-type => dp-job-type, optional
Type of the job associated with the event. If not specified, progress events for all job types are returned. This field is deprecated in favor of job-types.
- job-types => job-type[], optional
List of job types associated with the progress events. Only progress events of jobs of these job types are listed. If not specified, progress events for all job types are returned. Only one of job-type or job-types can be specified.
- object-name-or-id => obj-name-or-id, optional
Identifier of the object to list progress events for. If not specified, progress events for all jobs are returned. Only object-name-or-id or job-id may be supplied, but not both. Only one of dataset-name-or-id or object-name-or-id can be specified.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with dp-job-progress-event-list-iter-next. Range: [1..2^31-1].
- tag => string
Tag to be used in subsequent calls to dp-job-progress-event-list-iter-next.
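For example, to follow one job's progress including its event history, a client might do the following. This is a sketch: it assumes a connected handle $s and a $job_id obtained from an earlier listing, and since the shape of each event record is not shown in this excerpt, the reply is simply dumped.

```perl
use Data::Dumper;

# Sketch only: list all progress events, past and current, for one job.
eval {
    my $start = $s->dp_job_progress_event_list_iter_start(
        'job-id',  $job_id,
        'history', 'true'     # include events that have moved to history
    );
    my $tag = $start->{tag};
    my $events = $s->dp_job_progress_event_list_iter_next(
        'tag', $tag, 'maximum', 50
    );
    print Dumper($events);
    $s->dp_job_progress_event_list_iter_end('tag', $tag);
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```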
Purge all completed jobs from the database. Purged jobs are removed from the database; all information is lost.
Inputs
- job-id => integer, optional
Identifier of a job to purge. If specified, only this job is purged. Range: [1..2^31-1].
Outputs
Gets the time when the job schedule last changed. This is used by the scheduler service to reload the list of jobs that need to run in the future. Jobs that are already running are not affected.
Inputs
Outputs
Ends iteration to list scheduled jobs.
Inputs
- tag => string
Tag from a previous dp-job-schedule-list-iter-start.
Outputs
Get the next few records in the iteration started by dp-job-schedule-list-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [0..2^31-1]
- tag => string
Tag from a previous dp-job-schedule-list-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1]
Starts iteration to list jobs that need to be started by the scheduler in the specified time range. The scheduler service periodically requests a list of scheduled data protection jobs that will need to start within the next few hours. Each item in this list is a description of what needs to be done and the time when it needs to be done.
For example, a scheduled job might be: take a local snapshot on node 1 of dataset ds1 at 05/10/2006 9:00 AM UTC using "hourly" retention.
The scheduler service is responsible for storing the job in the database and starting the job at its start time.
Inputs
- dataset-name-or-id => string, optional
If specified, list of jobs is filtered by this dataset. Range: [1..2^31-1]
- from-time => integer
Jobs to be started from this time will be listed. Value is the time in seconds since 00:00:00 Jan 1, 1970, UTC.
- to-time => integer
Jobs to be started until this time (inclusive) will be listed. Must be equal to or greater than from-time. Value is the time in seconds since 00:00:00 Jan 1, 1970, UTC.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with dp-job-schedule-list-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to dp-job-schedule-list-iter-next.
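As a sketch of how a scheduler-like client might use this, the following asks for everything due in the next four hours (assumes a connected NaServer handle $s):

```perl
use Data::Dumper;

# Sketch only: list scheduled jobs whose start time falls in the next
# four hours. Times are seconds since 00:00:00 Jan 1, 1970, UTC.
my $now = time();
eval {
    my $start = $s->dp_job_schedule_list_iter_start(
        'from-time', $now,
        'to-time',   $now + 4 * 3600
    );
    my $tag = $start->{tag};
    my $batch = $s->dp_job_schedule_list_iter_next('tag', $tag, 'maximum', 100);
    print Dumper($batch);    # each item: what to do, and when
    $s->dp_job_schedule_list_iter_end('tag', $tag);
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```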
Terminate an iteration started by dp-ossv-application-list-info-iter-start and clean up any saved info.
Error conditions: - EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDTAG - The iterator tag specified is not valid. Restart the iteration operation to obtain a new valid tag.
Inputs
- tag => string
The tag returned in a previous call to dp-ossv-application-list-info-iter-start.
Outputs
Returns items from a previous call to dp-ossv-application-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
The tag returned in a previous dp-ossv-application-list-info-iter-start call.
Outputs
- records => integer
The number of records actually returned.
Browse the application components supported by an OSSV host.
Inputs
- ossv-application-path => string, optional
The path of the application to browse. If specified, it must be an application path returned by dp-ossv-application-list-info-iter-start. If not specified, it is treated as an empty string and all the Virtual Machines reported by the OSSV host are returned. Length: [1..255]
Outputs
- records => integer
Number indicating how many items are available for future retrieval with dp-ossv-application-list-info-iter-next.
Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to dp-ossv-application-list-info-iter-next or dp-ossv-application-list-info-iter-end
Terminate an iteration started by dp-ossv-application-restore-destination-list-info-iter-start and clean up any saved info.
Error conditions: - EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDTAG - The iterator tag specified is not valid. Restart the iteration operation to obtain a new valid tag.
Inputs
- tag => string
The tag returned in a previous call to dp-ossv-application-restore-destination-list-info-iter-start.
Outputs
Returns items from a previous call to dp-ossv-application-restore-destination-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
The tag returned in a previous dp-ossv-application-restore-destination-list-info-iter-start call.
Outputs
- records => integer
The number of records actually returned.
List all the OSSV hosts through which a restore to the original location is possible for a given application.
Inputs
Outputs
- records => integer
Number indicating how many items are available for future retrieval with dp-ossv-application-restore-destination-list-info-iter-next.
Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to dp-ossv-application-restore-destination-list-info-iter-next or dp-ossv-application-restore-destination-list-info-iter-end
Add a new OSSV directory to the list of discovered directories. If the directory already exists, return its object ID.
Error conditions: - EOBJECTAMBIGUOUS - The specified host name could refer to 2 or more hosts. Try again with a host ID or a fully qualified host name or IP address. There can be two applications with the same name. Try specifying the application path.
- EOBJECTNOTFOUND - The host name, IP address, or host ID was not found.
- EACCESSDENIED - Access was denied on the requested host(s).
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EDIRDOESNOTEXIST - The specified directory was not found.
Inputs
- directory-path => string
Directory path to add. This path must be an absolute path on the given host. When adding an application component, this should be the ossv-application-path returned by dp-ossv-application-list-info-iter-next.
- ossv-host-name-or-id => ossv-host-name-or-id
Name or ID of an OSSV host.
Outputs
Terminate an iteration started by dp-ossv-directory-browse-iter-start and clean up any saved info.
Error conditions: - EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDTAG - The iterator tag specified is not valid. Restart the iteration operation to obtain a new valid tag.
Inputs
- tag => string
The tag returned in a previous call to dp-ossv-directory-browse-iter-start.
Outputs
Returns items from a previous call to dp-ossv-directory-browse-iter-start.
Error conditions: - EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDTAG - The iterator tag specified is not valid. Restart the iteration operation to obtain a new valid tag.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
Range: [1..2^31-1]
- tag => string
The tag returned in a previous dp-ossv-directory-browse-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
Get the list of subdirectories beneath a given directory on an OSSV host.
Error conditions: - EOBJECTAMBIGUOUS - The specified host name could refer to 2 or more hosts. Try again with a host ID or a fully qualified host name or IP address.
- EOBJECTNOTFOUND - The host name, IP address, or host ID was not found.
- EACCESSDENIED - Access was denied on the requested host(s).
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EDIRDOESNOTEXIST - The specified directory was not found.
Inputs
- directory-path => string, optional
Directory name to browse. Must be an absolute path, or an empty string. If directory-path is omitted, it is the same as an empty string.
If this value is omitted, or an empty string, then a list of all the roots of filesystems that are suitable for backup will be returned. On some systems, special backup sources will be returned as well. These special backup sources will not have a trailing slash when returned. For example, on Windows, "SystemState" is a valid backup source returned by this API, but it cannot be browsed by this API.
- ossv-host-name-or-id => ossv-host-name-or-id
Name or ID of an OSSV host.
Outputs
- records => integer
Number indicating how many items are available for future retrieval with dp-ossv-directory-browse-iter-next.
Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to dp-ossv-directory-browse-iter-next or dp-ossv-directory-browse-iter-end.
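Omitting directory-path lists the filesystem roots suitable for backup, as described above. A sketch (assumes a connected NaServer handle $s; 'ossv1' is a placeholder host name):

```perl
use Data::Dumper;

# Sketch only: list backup-suitable filesystem roots on one OSSV host
# by omitting directory-path.
eval {
    my $start = $s->dp_ossv_directory_browse_iter_start(
        'ossv-host-name-or-id', 'ossv1'
    );
    my $tag = $start->{tag};
    my $roots = $s->dp_ossv_directory_browse_iter_next('tag', $tag, 'maximum', 50);
    print Dumper($roots);    # roots plus special sources such as "SystemState"
    $s->dp_ossv_directory_browse_iter_end('tag', $tag);
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```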
Terminate an iteration started by dp-ossv-directory-discovered-iter-start and clean up any saved info.
Error conditions: - EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDTAG - The iterator tag specified is not valid. Restart the iteration operation to obtain a new valid tag.
Inputs
- tag => string
The tag returned in a previous call to dp-ossv-directory-discovered-iter-start.
Outputs
Returns items from a previous call to dp-ossv-directory-discovered-iter-start.
Error conditions: - EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDTAG - The iterator tag specified is not valid. Restart the iteration operation to obtain a new valid tag.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
Range: [1..2^31-1]
- tag => string
The tag returned in a previous dp-ossv-directory-discovered-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
List the OSSV directories which have been discovered by the monitor. This list includes OSSV roots, and siblings of directories in backup relationships. The list can be filtered to exclude directories which are marked as "ignored" and to exclude directories which are protected.
Error conditions: - EOBJECTAMBIGUOUS - The specified host name could refer to 2 or more hosts. Try again with a host ID or a fully qualified host name or IP address. There can be two applications with the same name. Try specifying the ID of the application.
- EOBJECTNOTFOUND - The lookup object was not found.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EACCESSDENIED - Access was denied on the requested directory object(s).
Inputs
- include-is-available => boolean, optional
If true, the is-available status is calculated for each directory which may make the call to this zapi take much longer. Default is false.
- is-dp-ignored => boolean, optional
If the value is true, only list directories which are marked as ignored by DFM for the purpose of data protection. If the value is false, only list directories which are not marked as ignored. If the value is not specified, then list all directories.
- is-in-backup-relationship => boolean, optional
If this value is true, only list directories which are in a backup relationship. If this value is false, only list directories which are not in a backup relationship. If this value is unspecified, then list all directories.
- is-in-dataset => boolean, optional
If the value is true, only list directories which are members of a dataset. If the value is false, only list directories which are not members of datasets. If the value is not specified, list all directories.
- object-name-or-id => string, optional
Name or identifier of an object to list OSSV directories for. The allowed object types for this argument are: - Resource Group
- Dataset
- OSSV Host
- OSSV Directory
If object-name-or-id identifies an OSSV directory, that single OSSV directory will be returned. If object-name-or-id resolves to more than one OSSV directory, all of them will be returned. If no object-name-or-id is provided, all OSSV directories will be listed. The same applies to OSSV applications as well.
Outputs
- records => integer
Number indicating how many items are available for future retrieval with dp-ossv-directory-discovered-iter-next.
Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to dp-ossv-directory-discovered-iter-next or dp-ossv-directory-discovered-iter-end.
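The boolean filters above combine; for instance, this sketch lists only directories that are neither ignored nor already members of a dataset (assumes a connected NaServer handle $s):

```perl
use Data::Dumper;

# Sketch only: discovered OSSV directories that are candidates for
# protection -- not marked ignored and not yet in any dataset.
eval {
    my $start = $s->dp_ossv_directory_discovered_iter_start(
        'is-dp-ignored', 'false',
        'is-in-dataset', 'false'
    );
    my $tag = $start->{tag};
    my $dirs = $s->dp_ossv_directory_discovered_iter_next('tag', $tag, 'maximum', 50);
    print Dumper($dirs);
    $s->dp_ossv_directory_discovered_iter_end('tag', $tag);
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```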
Modify a directory's information. If modification of any property fails, nothing is changed.
Error conditions: - EACCESSDENIED - When the user does not have DFM.Database.Write capability on the specified directory.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EOBJECTNOTFOUND - When the dp-ossv-directory-name-or-id does not correspond to an OSSV directory.
- EDATABASEERROR - On database error.
Inputs
- is-dp-ignored => boolean, optional
True if an administrator has chosen to ignore this object for purposes of data protection.
Outputs
Terminate an iteration started by dp-ossv-directory-roots-iter-start and clean up any saved info.
Error conditions: - EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDTAG - The iterator tag specified is not valid. Restart the iteration operation to obtain a new valid tag.
Inputs
- tag => string
The tag returned in a previous call to dp-ossv-directory-roots-iter-start.
Outputs
Returns items from a previous call to dp-ossv-directory-roots-iter-start.
Error conditions: - EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDTAG - The iterator tag specified is not valid. Restart the iteration operation to obtain a new valid tag.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
Range: [1..2^31-1]
- tag => string
The tag returned in a previous dp-ossv-directory-roots-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
Get the list of filesystem roots from one or more OSSV hosts.
Error conditions: - EOBJECTAMBIGUOUS - The specified host name could refer to 2 or more hosts. Try again with a host ID or a fully qualified host name or IP address.
- EOBJECTNOTFOUND - The host name, IP address, or host ID was not found.
- EACCESSDENIED - Access was denied on the requested host(s).
- EINTERNALERROR - An error occurred while processing the request. Try again later.
Inputs
- fs-info => boolean, optional
If fs-info is specified and true, a list of all filesystems is returned instead of a list of filesystem roots suitable for backup. For example, CD-ROM drives and NFS mounts are returned if fs-info is true. Additional information is included in the output, specifically the filesystem device, type, and state, as reported by NDMP. While this flag causes this API to return more data, the recommended way to find filesystems suitable for backup is to set this flag to false.
- ossv-host-name-or-id => ossv-host-name-or-id, optional
Name or ID of an OSSV host. If no OSSV host is specified, roots are returned for all OSSV hosts.
Outputs
- records => integer
Number indicating how many items are available for future retrieval with dp-ossv-directory-roots-iter-next.
Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to dp-ossv-directory-roots-iter-next or dp-ossv-directory-roots-iter-end.
Create a new DP policy by copying from an existing "template" policy. The new policy created using this ZAPI has the same set of nodes and connections as the template policy, and the same property values for each node and connection.
Error conditions: - EACCESSDENIED - User does not have privileges to read the template policy from the database, or create a new policy, or both.
- EOBJECTNOTFOUND - No template policy was found that has the given name or ID.
- EPOLICYEXISTS - A policy with the given dp-policy-name already exists.
- EDATABASEERROR - A database error occurred while processing the request.
- EAPIMISSINGARGUMENT - No template policy name or id was supplied as an input.
- EINVALIDINPUTERROR - Policy description string was too long.
Inputs
- dp-policy-description => string, optional
Description of the new policy. It may contain from 0 to 255 characters. If the length is greater than 255 characters, the ZAPI fails with error code EINVALIDINPUTERROR. The default value is the empty string "".
- template-dp-policy-id => obj-id, optional
An object ID for the template policy that is copied to create the new policy. Any policy may serve as a template; there is no special type of policy for templates. This legacy parameter is only used if template-dp-policy-name-or-id is not supplied.
- template-dp-policy-name-or-id => obj-name-or-id, optional
Name or identifier of the template policy that is copied to create the new policy. Any policy may serve as a template; there is no special type of policy for templates. This input is preferred over template-dp-policy-id; if both are supplied, template-dp-policy-id is ignored.
Outputs
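A minimal Perl sketch of this call, in the style of the dfm-about example from the introduction. The section above does not name the ZAPI, so the API name dp-policy-create is an assumption (check your SDK's API list), as are both policy names; dp-policy-name is taken from the EPOLICYEXISTS description above, since it is not listed among the inputs.

```perl
use NaServer;

my $s = NaServer->new($server, 1, 0);     # DFM server context, as in the introduction
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

# Create a new policy from an existing template policy.
my $out = $s->invoke('dp-policy-create',                        # assumed API name
                     'dp-policy-name',                'Nightly backup',
                     'dp-policy-description',         'Copied from the sample policy',
                     'template-dp-policy-name-or-id', 'Back up');
die 'policy create failed: ' . $out->results_reason() . "\n"
    if $out->results_status() eq 'failed';
```

The generated subroutine bindings described in the introduction can be used in place of the generic invoke call.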
Destroy a DP policy. This removes it from the database. If the policy has been applied to any datasets, then the destroy operation fails; you must first un-map the policy from any datasets to which it had been applied before you may destroy the policy.
If this ZAPI fails for any reason, the DP policy edit of which it was a part is not rolled back. Instead, the edit is restored to the state it was in prior to invoking this ZAPI.
After this ZAPI successfully completes, any subsequent calls in the same edit session to dp-policy-destroy or dp-policy-modify fail with error code EOBJECTNOTFOUND.
Error conditions: - EEDITSESSIONNOTFOUND - No edit lock was found that has the given ID.
- EACCESSDENIED - User does not have privileges to modify the policy.
- EOBJECTNOTFOUND - The policy was already destroyed during this edit session.
- EPOLICYNOTMODIFIABLE - The policy is one of the samples created during installation, and therefore cannot be deleted.
Inputs
- edit-lock-id => integer
Identifier of the edit lock on the policy. The value must be an edit lock ID that was previously returned by dp-policy-edit-begin.
Outputs
Create an edit session and obtain an edit lock on a DP policy to begin modifying the policy. An edit lock must be obtained before invoking the following APIs:
- dp-policy-modify
- dp-policy-destroy
Use dp-policy-edit-commit to end the edit session and commit the changes to the database. Use dp-policy-edit-rollback to end the edit session and discard any changes made to the policy.
24 hours after an edit session on a policy begins, any subsequent call to dp-policy-edit-begin for that same policy automatically rolls back the existing edit session and begins a new edit session, just as if the call had used the force option. If there is no such call, the existing edit session simply continues and retains the edit lock.
Error conditions: - EEDITINPROGRESS - Another edit session already has an edit lock on the specified policy.
- EOBJECTNOTFOUND - No policy was found that has the given name or ID.
- EACCESSDENIED - User does not have privileges to modify the policy.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- force => boolean, optional
If true, and an edit is already in progress on the specified policy, then the previous edit is rolled back and a new edit is begun. If false, and an edit is already in progress, then the call fails with error code EEDITINPROGRESS. Default value is false.
Outputs
- edit-lock-id => integer
Identifier of the edit lock on the policy. The value is an unsigned 32-bit integer.
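A sketch of the edit-session life cycle under these rules, using the generic NaServer invoke method. The policy name is illustrative, and a connection set up as in the introduction is assumed.

```perl
# Obtain an edit lock on the policy.
my $begin = $s->invoke('dp-policy-edit-begin',
                       'dp-policy-name-or-id', 'Nightly backup');
die 'edit-begin failed: ' . $begin->results_reason() . "\n"
    if $begin->results_status() eq 'failed';
my $lock = $begin->child_get_string('edit-lock-id');

# ... change the policy here with dp-policy-modify or dp-policy-destroy,
#     passing 'edit-lock-id' => $lock with each call ...

# Either commit the changes (add 'dry-run', 'true' to test the commit first) ...
my $commit = $s->invoke('dp-policy-edit-commit', 'edit-lock-id', $lock);
warn 'commit failed: ' . $commit->results_reason() . "\n"
    if $commit->results_status() eq 'failed';

# ... or discard them and release the lock instead:
# $s->invoke('dp-policy-edit-rollback', 'edit-lock-id', $lock);
```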
Commit changes made to a DP policy during an edit session into the database. If all the changes to the policy are performed successfully, the entire edit is committed and the edit lock on the policy is released.
If any of the changes to the policy are not performed successfully, then the edit is rolled back (none of the changes are committed) and the edit lock on the policy is released.
Use the dry-run option to test the commit. Using this option, the changes to the policy are not committed to the database. Instead, a conformance check is performed to return a list of actions that the system would take if the changes were committed by calling this ZAPI without the dry-run option.
If dry-run is false, and all changes are successfully committed, then before the call returns, the system begins a conformance run on all datasets to which the policy has been applied. (See dataset-conform-begin for a description of conformance runs.) If any needed conformance actions require user confirmation, it is assumed that such confirmation has been given, and the actions are performed.
Error conditions: - EEDITSESSIONNOTFOUND - No edit lock was found that has the given ID.
- EACCESSDENIED - User does not have privileges to modify the policy.
- EPOLICYEXISTS - The policy's name is being changed, and a policy with the new name already exists.
- EDPPOLICYINUSE - The policy is being deleted, but deletion has failed because the policy is in use by one or more datasets.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- dry-run => boolean, optional
If true, return a list of the actions the system would take after committing the changes to the policy, but without actually committing the changes. In addition, the edit lock is not released. By default, dry-run is false.
- edit-lock-id => integer
Identifier of the edit lock on the policy. The value must be an edit lock ID that was previously returned by dp-policy-edit-begin.
Outputs
- dry-run-results => dry-run-result[], optional
Results of a dry run. Only returned if dry-run is true.
Roll back changes made to a DP policy. The edit lock on the policy will be released after the rollback.
Error conditions: - EEDITSESSIONNOTFOUND - No edit lock was found that has the given ID.
- EACCESSDENIED - User does not have privileges to modify the policy.
Inputs
- edit-lock-id => integer
Identifier of the edit lock on the policy. The value must be an edit lock ID that was previously returned by dp-policy-edit-begin.
Outputs
Returns default values for node and connection properties. These default values are used in calls to dp-policy-modify, in cases where the property is present for a node or connection, but the corresponding optional XML element does not appear in the dp-policy-node-info or dp-policy-connection-info structure. Note that default values may change from release to release. This ZAPI provides a convenient way to determine the default values for the current release.
Error conditions: None.
Inputs
Outputs
Terminate a list iteration that had been started by a call to dp-policy-list-iter-start. This informs the server that it may now release any resources associated with the temporary store for the list iteration.
Error conditions: - EINVALIDTAG - The specified tag does not exist.
Inputs
- tag => string
The opaque handle returned by the prior call to dp-policy-list-iter-start that started this list iteration.
Outputs
Retrieve the next series of policies that are present in a list iteration created by a call to dp-policy-list-iter-start. The server maintains an internal cursor pointing to the last record returned. Subsequent calls to dp-policy-list-iter-next return the next maximum records after the cursor, or all the remaining records, whichever is fewer. If a property is present for a particular node or connection (the presence or absence of each property is defined in its description), then it always appears in the output element for that node or connection from a call to dp-policy-list-iter-next. For example, the output dp-policy-connection-info element for a backup connection always contains a backup-schedule-name.
If a property is absent for a particular node or connection, then it never appears in the output element for that node or connection. For example, the output dp-policy-connection-info element for a mirror connection never contains a backup-schedule-name.
Error conditions: - EINVALIDTAG - The specified tag does not exist.
- EOBJECTNOTFOUND - A schedule or throttle referenced by the policy could not be found.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- maximum => integer
The maximum number of policies to return. Range: [1..2^31-1].
- tag => string
The opaque handle returned by the prior call to dp-policy-list-iter-start that started this list iteration.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
Begin a list iteration over all content in all DP policies in the system. Optionally, you may iterate over the content of just a single policy. After calling dp-policy-list-iter-start, you continue the iteration by calling dp-policy-list-iter-next zero or more times, followed by a call to dp-policy-list-iter-end to terminate the iteration.
Error conditions: - EACCESSDENIED - User does not have privileges to read the specified policy.
- EOBJECTNOTFOUND - No policy was found that has the given name or ID.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- dp-policy-name-or-id => obj-name-or-id, optional
Name or ID of a DP policy. If specified, only this policy is listed. If not specified, then by default, all policies are listed.
- is-dr-capable => boolean, optional
If present, the returned policies are filtered. If true, only policies that are disaster recovery capable are listed. If false, only policies that are not disaster recovery capable are listed.
Outputs
- records => integer
Number of items present in the list iteration. Range: [0..2^31-1].
- tag => string
An opaque handle used to identify the list iteration. The list content resides in a temporary store in the server.
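The start/next/end protocol can be sketched as follows with the generic NaServer invoke method; the batch size of 20 is arbitrary, and a connection set up as in the introduction is assumed.

```perl
# Begin iterating over all policies (add 'dp-policy-name-or-id' to
# restrict the listing to a single policy).
my $start = $s->invoke('dp-policy-list-iter-start');
die $start->results_reason() . "\n" if $start->results_status() eq 'failed';
my $tag  = $start->child_get_string('tag');
my $left = $start->child_get_int('records');

# Drain the temporary store in batches of up to 20 records.
while ($left > 0) {
    my $next = $s->invoke('dp-policy-list-iter-next',
                          'tag', $tag, 'maximum', 20);
    last if $next->results_status() eq 'failed';
    my $got = $next->child_get_int('records') or last;
    $left -= $got;
    # ... process the policy elements returned in $next here ...
}

# Tell the server to release the temporary store.
$s->invoke('dp-policy-list-iter-end', 'tag', $tag);
```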
This ZAPI modifies a DP policy by completely replacing the policy's old content with the new content specified by the input element dp-policy-content. This ZAPI can change only a policy's name, description, the properties of its nodes, and most of the properties of its connections. The is-dr-capable connection property cannot be changed once a policy is created. This ZAPI also cannot change the topology (set of nodes and connections between nodes) of a policy; instead, the topology specified in dp-policy-content must match the policy's current topology. At present, there is no way to change the topology of a policy. If a property is absent for a particular node or connection (the presence or absence of each property is defined in its description), then it is illegal for that property to appear in the input element for that node or connection in a call to dp-policy-modify. For example, it is illegal to specify a backup-schedule-name in a dp-policy-connection element for a mirror connection.
Error conditions: - EEDITSESSIONNOTFOUND - No edit lock was found that has the given ID.
- EACCESSDENIED - User does not have privileges to modify the policy.
- EOBJECTNOTFOUND - The policy was already destroyed during this edit session.
- EPOLICYNOTMODIFIABLE - The policy is one of the samples created during installation, and therefore cannot be modified.
- EPOLICYTOPOLOGYCHANGED - The requested modification changes the topology of the policy.
- EINVALIDPOLICYPROPERTY - The requested modification would set a node or connection property to an invalid value. This includes the case when a property value was specified, but the property is not present for that node or connection.
- EOSSVCANTTAKESNAPSHOTS - A snapshot schedule is being set for the policy's root node, but the policy has been applied to a dataset whose root storage set contains OSSV directories.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- dp-policy-content => dp-policy-content
New content for the policy. The topology (set of nodes and connections between nodes) specified by this parameter must be the same as the current topology of the policy; otherwise, it is an error. In particular, you may not change the id of any connection or node. In addition, you may not change the from-node-id or to-node-id of any connection.
All properties of the nodes and connections are set precisely as specified by the dp-policy-connections and dp-policy-nodes elements of dp-policy-content. To leave the value of a property unchanged, its old value must be specified explicitly. The only exception is that if the element that specifies the property's value is optional, and the old value of the property happens to be the same as the default value, then you may omit the specification of the old value to keep the old value.
- edit-lock-id => integer
Identifier of the edit lock on the policy. The value must be an edit lock ID that was previously returned by dp-policy-edit-begin.
Outputs
Ends iteration to list data protection relationships.
Inputs
- tag => string
Tag from a previous dp-relationship-list-info-iter-start.
Outputs
Get the next few records in the iteration started by dp-relationship-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [0..2^31-1]
- tag => string
Tag from a previous dp-relationship-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1]
Starts iteration to list data protection relationships. These are SnapVault or SnapMirror relationships formed in order to implement a data protection policy for a dataset. You can list relationships for a single policy connection, or for a particular source or destination storage server.
Inputs
- connection-id => integer, optional
Identifier of policy connection. If present, only list relationships protecting this connection of the data protection policy. If specified, dataset-name-or-id must also be specified. Range: [1..2^31-1]
- dataset-name-or-id => obj-name-or-id, optional
Name or identifier of dataset. Relationships protecting any connection of this dataset are listed.
- destination-id => obj-id, optional
Identifier of the destination Qtree, Volume, Aggregate or Storage System. If not present, relationships to all destinations are listed.
- include-deleted => boolean, optional
If present and true, relationships which are marked as deleted in the database are also returned. Otherwise, deleted relationships are not returned.
- is-dp-ignored => boolean, optional
If present and true, only list relationships for which the source object has been marked as ignored for data protection. If present and false, only list relationships for which the source object has not been marked as ignored. If not present, list all relationships.
- is-dp-managed => boolean, optional
If present and true, only list relationships which are to be managed by Protection Manager. If present and false, only list relationships which are not to be managed by Protection Manager. If not present, list all relationships.
- is-in-dataset => boolean, optional
If present and true, only list relationships which are in a dataset. If present and false, only list relationships which are not in a dataset. If not present, list all relationships.
- is-migration-relationship => boolean, optional
If true, only relationships created for migration are returned. If dataset-name-or-id is specified with this flag, the migration relationships for the members of that dataset will be returned, along with the relationship of the root volume/qtree of the vFiler unit associated with the root node of the dataset. It is invalid to specify the is-in-dataset or connection-id fields when is-migration-relationship is true. If false or not present, all relationships will be returned. Default value is false.
- source-id => obj-id, optional
Identifier of the source Qtree, OSSV Directory, Volume, Aggregate, Storage System, or OSSV host. If not present, relationships from all sources are listed.
- source-or-destination-id => obj-id, optional
Identifier of either a source or a destination object. If empty, all relationships are listed. The source or destination object must be either a Host, Aggregate, Volume, Qtree, OSSV Directory or DFM group.
Outputs
- records => integer
Number of items saved for future retrieval with dp-relationship-list-info-iter-next. Range: [0..2^31-1]
- tag => string
Tag to be used in subsequent calls to dp-relationship-list-info-iter-next.
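Filtered listing follows the same iterator protocol. For example, restricting the iteration to relationships managed by Protection Manager can be sketched as follows (ZAPI booleans are passed as the strings 'true'/'false'; a connection set up as in the introduction is assumed):

```perl
my $start = $s->invoke('dp-relationship-list-info-iter-start',
                       'is-dp-managed',   'true',
                       'include-deleted', 'false');
die $start->results_reason() . "\n" if $start->results_status() eq 'failed';
my $tag  = $start->child_get_string('tag');     # pass to the -iter-next calls
my $left = $start->child_get_int('records');    # total records awaiting retrieval
# Drain the store with dp-relationship-list-info-iter-next ('tag' and
# 'maximum' inputs), then release it with the family's -iter-end call.
```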
Modify settings of a SnapVault, Qtree SnapMirror, or Volume SnapMirror relationship.
Inputs
- is-dp-imported => boolean, optional
If present and true, this relationship is marked as imported, but only if is-dp-managed is also present and true. It is an error for is-dp-imported to be true when is-dp-managed is not present, or is present and false. If is-dp-imported is present and false, the is-dp-imported flag is cleared regardless of whether is-dp-managed is present. If a relationship has the is-dp-imported flag set, then the relationship will not be cleaned up after a containing dataset has been deleted, unless the fully automatic cleanup mode is active.
- is-dp-managed => boolean, optional
If present and true, set the is-dp-managed flag for this relationship. If present and false, clear the is-dp-managed flag for this relationship. If not present, is-dp-managed flag is not changed. If a relationship has the is-dp-managed flag set, then Protection Manager is allowed to modify the relationship as necessary to make the relationship conform to a data protection policy. This includes modifying any settings or schedules of the relationship, or deleting the relationship if no longer needed. Protection Manager also has the responsibility of updating the relationship as necessary, either using its own schedules or setting an ONTAP schedule for this relationship.
If the is-dp-managed flag is clear, Protection Manager will not modify, update, or delete this relationship.
Outputs
Start a restore operation on part of a dataset. This operation copies whole members or its sub-paths of the dataset from a specific backup version to a new location. The operation is performed in the background by a job.
Error conditions: - EOBJECTNOTFOUND - The dataset name or ID or one of the member name or ID was not found.
- EACCESSDENIED - The dataset exists, but the user invoking the API has no DFM.BackupManager.Restore or DFM.BackupManager.RestoreFromSecondary permission on the dataset or on any of the members, or DFM.BackupManager.Restore on the destination host.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDINPUT - The number of paths to restore was 0 or more than 1000, or neither backup-id nor backup-version was specified.
- EINVALIDHOST - One of the specified destination hosts was not a valid restore destination.
- EDATABASEERROR - A database error occurred.
Inputs
- backup-id => integer, optional
Identifier of the backup instance to restore from. If this parameter is specified, then backup-version is ignored.
Range: [1..2^31-1]
- backup-version => dp-timestamp, optional
Timestamp of the backup which should be restored from. If the backup-id is not specified, this parameter is required. A backup location that matches this version will be picked to do the restore from.
- check-overwrite => boolean, optional
If true, checks if the member to be restored overwrites existing data on the destination. This flag cannot be used if the destination host is an OSSV or an ESX server. If it is used, an error is returned. It can be set to true only if the members to be restored are files. If the restore path has any members other than files, an error is returned. If false, goes ahead with the restore job without checking for overwrites at the destination. By default, it is false.
- check-space-status => boolean, optional
If true, checks if there is enough space on the destination to restore the member. This flag cannot be used if the destination host is an OSSV or an ESX server. If it is used, an error is returned. It can be set to true only if the members to be restored are files. If the restore path has any members other than files, an error is returned. If the flag is not specified or set to false, goes ahead with the restore job without checking the space status at the destination. By default, it is false.
- dataset-name-or-id => obj-name-or-id
Name or identifier of the dataset to restore part of.
Outputs
- job-id => integer, optional
Id of the job that handles the restore operation. It is returned only if "check-overwrite" and "check-space-status" are false. If either of them is set to true, the API only checks overwrites and/or space status and returns; the restore operation is not performed.
Range: [1..2^31-1]
- space-status-results => space-status-result[], optional
Results of check-space-status. Each result will have information about the destination volume. Returned if check-space-status is true and the destination volume does not have enough space for the restore. If the destination volume has enough space, this element is not returned.
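A sketch of the two-pass pattern these flags enable. The section above does not name the ZAPI, so dp-restore-start is an assumption (check your SDK's API list), as are the dataset name and timestamp; the member/path inputs this ZAPI needs are elided here.

```perl
# Pass 1: check destination space only; no restore job is started.
my $check = $s->invoke('dp-restore-start',                  # assumed API name
                       'dataset-name-or-id', 'mail_dataset',
                       'backup-version',     '2013-06-01 22:00:00',
                       'check-space-status', 'true');
die 'space check failed: ' . $check->results_reason() . "\n"
    if $check->results_status() eq 'failed';
# A space-status-results element appears only if space is insufficient.

# Pass 2: run the restore; job-id is returned only when both check
# flags are left at their default of false.
my $run = $s->invoke('dp-restore-start',
                     'dataset-name-or-id', 'mail_dataset',
                     'backup-version',     '2013-06-01 22:00:00');
printf "restore job: %d\n", $run->child_get_int('job-id')
    if $run->results_status() ne 'failed';
```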
Start a restore operation on part of a dataset. This operation copies files and/or directories from a specific backup version back to the primary location. The operation is performed in the background by a job.
Error conditions: - EOBJECTNOTFOUND - The dataset name or ID or one of the member name or ID was not found.
- EACCESSDENIED - The dataset exists, but the user invoking the API has no DFM.BackupManager.Restore or DFM.BackupManager.RestoreFromSecondary permission on the dataset or on any of the members.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDINPUT - The number of paths to restore was 0 or more than 1000, or neither backup-id nor backup-version was specified.
- EDATABASEERROR - A database error occurred.
- ENDRESTORENOTPOSSIBLE - A partial qtree restore was requested for a dataset that requires a non-disruptive restore.
Inputs
- backup-id => integer, optional
Identifier of the backup instance to restore from. If this parameter is specified, then backup-version is ignored.
Range: [1..2^31-1]
- backup-version => dp-timestamp, optional
Timestamp of the backup which should be restored from. If the backup-id is not specified, this parameter is required. A backup location that matches this version will be picked to do the restore from.
- check-overwrite => boolean, optional
If true, checks if the member to be restored overwrites existing data on the destination. This flag cannot be used if the destination host is an OSSV or an ESX server. If it is used, an error is returned. It can be set to true only if the members to be restored are files. If the restore path has any members other than files, an error is returned. If false, goes ahead with the restore job without checking for overwrites at the destination. By default, it is false.
- check-space-status => boolean, optional
If true, checks if there is enough space on the destination to restore the member. This flag cannot be used if the destination host is an OSSV or an ESX server. If it is used, an error is returned. It can be set to true only if the members to be restored are files. If the restore path has any members other than files, an error is returned. If the flag is not specified or set to false, goes ahead with the restore job without checking the space status at the destination. By default, it is false.
- dataset-name-or-id => obj-name-or-id
Name or ID of the dataset to restore part of.
Outputs
- job-id => integer, optional
Id of the job that handles the restore operation. It is returned only if "check-overwrite" and "check-space-status" are false. If either of them is set to true, the API only checks overwrites and/or space status and returns; the restore operation is not performed.
Range: [1..2^31-1]
- overwrite-results => overwrite-result[], optional
Results of a check-overwrite. Each result will have information about the file that overwrites data on the destination. Returned if check-overwrite is true and if the file being restored might overwrite existing data. This element is not returned if there are no overwrites.
- space-status-results => space-status-result[], optional
Results of check-space-status. Each result will have information about the destination volume. Returned if check-space-status is true and the destination volume does not have enough space for the restore. If the destination volume has enough space, this element is not returned.
Start an operation to restore one or more objects in the virtual infrastructure. This API cannot be used to restore an entire dataset.
Error conditions: - EOBJECTNOTFOUND - The object name or ID was not found.
- EACCESSDENIED - The object exists, but the user invoking the API has no DFM.BackupManager.Restore or DFM.BackupManager.RestoreFromSecondary permission on the object.
- EAPPOBJECTNOTFOUNDINBACKUP - The object does not exist in the backup from which the user is attempting to restore.
- EAPPOBJECTNOTRESTORABLE - The object exists in the backup, but is not restorable.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDINPUT - The number of objects to restore was 0 or more than 1000, or input parameters were missing or incorrectly specified.
- EDATABASEERROR - A database error occurred.
Inputs
- backup-id => integer
Identifier of the backup instance to restore from.
Range: [1..2^31-1]
Outputs
- job-id => integer
Id of the job that handles the restore operation for virtual objects.
Range: [1..2^31-1]
Get the content of a given schedule.
Inputs
Outputs
Create a new schedule with the given name. The schedule type may be daily, weekly, or monthly.
Inputs
Outputs
Create a single snapshot within a daily schedule.
Inputs
- daily-content => daily-info
Content of the daily schedule.
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
- item-id => integer
An ID of the daily item within the schedule
Delete a single snapshot within a daily schedule.
Inputs
- daily-schedule-name-or-id => string
Name or ID of a daily schedule
- item-id => integer
An ID of the daily item within the schedule. Range: [1..(2^31)-1]
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
Modify a single snapshot within a daily schedule. Sample schedules cannot be modified.
Inputs
- daily-content => daily-info
Content of the daily schedule.
- daily-schedule-name-or-id => string
Name or ID of a daily schedule
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
Return a list of other DP policies and DP schedules using the specified DP schedule.
Inputs
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
- schedule-name-or-id => obj-name-or-id
Name or identifier of a DP schedule.
Outputs
- schedule-assignees => schedule-assignee[]
List of other DP policies and DP schedules using the specified DP schedule. The list excludes DP policies and DP schedules that the caller has no permission to read.
- schedule-in-use => boolean
True if the schedule is in use by other DP policies or DP schedules.
Delete a schedule with the given name or ID. A schedule that is used by other schedules cannot be deleted; an error is returned instead. Sample schedules cannot be destroyed.
Inputs
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
- schedule-name-or-id => string
Name or ID of the schedule
Outputs
Create an hourly schedule within a daily schedule. An hourly schedule specifies the frequency of snapshots to be run within the start time and end time.
Inputs
- daily-schedule-name-or-id => string
Name or ID of a daily schedule
- hourly-content => hourly-info
Content of the hourly schedule.
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
- item-id => integer
An ID of the hourly item within the schedule
Delete an hourly schedule within a daily schedule.
Inputs
- daily-schedule-name-or-id => string
Name or ID of a daily schedule
- item-id => integer
An ID of the hourly item within the schedule. Range: [1..(2^31)-1]
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
Modify an hourly schedule within a daily schedule. Sample schedules cannot be modified.
Inputs
- daily-schedule-name-or-id => string
Name or ID of a daily schedule
- hourly-content => hourly-info
Content of the hourly schedule.
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
List all existing schedule IDs and types.
Inputs
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
Tell the DFM station that the temporary store associated with the specified tag is no longer necessary.
Inputs
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
Iterate over the list of schedules stored in the temporary store. The DFM station internally maintains a cursor pointing to the last record returned. Subsequent calls to this API return the records after the cursor, up to the specified "maximum" or the number of records actually remaining, whichever is fewer.
Inputs
- maximum => integer
Maximum number of schedules to retrieve
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
The dp-dpschedule-list-info-iter-* set of APIs are used to retrieve a list of schedule contents.
Inputs
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
- records => integer
Number of items saved for future retrieval
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
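For example, starting an iteration over dp_schedule contents can be sketched as follows (a connection set up as in the introduction is assumed):

```perl
my $start = $s->invoke('dp-dpschedule-list-info-iter-start',
                       'schedule-category', 'dp_schedule');
die $start->results_reason() . "\n" if $start->results_status() eq 'failed';
my $tag  = $start->child_get_string('tag');
my $left = $start->child_get_int('records');
# Retrieve the contents with dp-dpschedule-list-info-iter-next ('tag' and
# 'maximum' inputs) and release the temporary store with
# dp-dpschedule-list-info-iter-end when done.
```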
Modify a schedule's details in the database. When this ZAPI is called, all details within the schedule are removed and replaced by the new details specified in schedule-content. Sample schedules cannot be modified. schedule-id and schedule-type cannot be modified.
Inputs
Outputs
Specify a single snapshot within a monthly schedule. Either day-of-month, or both week-of-month and day-of-week, must be specified.
Inputs
- monthly-content => monthly-info
Content of the monthly schedule.
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
- item-id => integer
An ID of the monthly item within the schedule
Delete a single snapshot within a monthly schedule.
Inputs
- item-id => integer
An ID of the monthly item within the schedule. Range: [1..(2^31)-1]
- monthly-schedule-name-or-id => string
Name or ID of a monthly schedule
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
Modify a single snapshot within a monthly schedule. Sample schedules cannot be modified.
Inputs
- monthly-content => monthly-info
Content of the monthly schedule.
- monthly-schedule-name-or-id => string
Name or ID of a monthly schedule
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
Specify a sub-schedule to be used by a monthly schedule. In addition to its individual monthly events, a monthly schedule may have only one daily sub-schedule or one weekly sub-schedule. If the monthly schedule already has a daily or weekly sub-schedule, this command replaces the old one.
Inputs
- monthly-schedule-name-or-id => string
Name or ID of the monthly schedule
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
Unset a sub-schedule used by a monthly schedule.
Inputs
- monthly-schedule-name-or-id => string
Name or ID of the monthly schedule
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
Outputs
Rename a schedule. Sample schedules cannot be renamed.
Inputs
- new-schedule-name => string
A new unique name of the schedule
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
- schedule-name-or-id => string
Name or ID of the schedule
Outputs
Specify a single snapshot within a weekly schedule.
Inputs
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
- weekly-content => weekly-info
Content of the weekly schedule.
Outputs
- item-id => integer
An ID of the weekly item within the schedule
Delete a single snapshot within a weekly schedule.
Inputs
- item-id => integer
An ID of the weekly item within the schedule. Range: [1..(2^31)-1]
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
- weekly-schedule-name-or-id => string
Name or ID of a weekly schedule
Outputs
Modify a single snapshot within a weekly schedule. Sample schedules cannot be modified.
Inputs
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
- weekly-content => weekly-info
Content of the weekly schedule.
- weekly-schedule-name-or-id => string
Name or ID of a weekly schedule
Outputs
Specify which daily schedule will be used on a certain range of days within a weekly schedule.
Inputs
- daily-schedule-name-or-id => string
Name or ID of a daily schedule
- end-day-of-week => integer
Day of week for the snapshot. Range: [0..6] (0 = "Sun")
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
- start-day-of-week => integer
Day of week for the snapshot. Range: [0..6] (0 = "Sun")
- weekly-schedule-name-or-id => string
Name or ID of a weekly schedule
Outputs
- item-id => integer
An ID of the weekly use item within the schedule. Range: [1..(2^31)-1]
Specify which daily schedule is to be deleted within a weekly schedule.
Inputs
- item-id => integer
An ID of the weekly use item within the schedule. Range: [1..(2^31)-1]
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
- weekly-schedule-name-or-id => string
Name or ID of a weekly schedule
Outputs
Specify which daily schedule will be used on a certain range of days within a weekly schedule. Permanent or sample schedules cannot be modified.
Inputs
- daily-schedule-name-or-id => string
Name or ID of a daily schedule
- end-day-of-week => integer
Day of week for the snapshot. Range: [0..6] (0 = "Sun")
- item-id => integer
An ID of the weekly use item within the schedule. Range: [1..(2^31)-1]
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. Default value is 'dp_schedule'.
- start-day-of-week => integer
Day of week for the snapshot. Range: [0..6] (0 = "Sun")
- weekly-schedule-name-or-id => string
Name or ID of a weekly schedule
Outputs
Create a throttle schedule.
Inputs
Outputs
Return a list of DP policies using the specified DP throttle.
Inputs
Outputs
Delete a throttle item. Sample throttle schedules cannot be destroyed.
Inputs
- throttle-id => integer
Identifier of the throttle item. Range: [1..(2^31)-1]
Outputs
Add a new throttle item to the throttle schedule.
Inputs
Outputs
Delete a throttle item from a throttle schedule.
Inputs
- throttle-name-or-id => string
Name or identifier of the throttle schedule
Outputs
Tell the DFM station that the temporary store associated with the specified tag is no longer necessary.
Inputs
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
Iterate over the list of throttle items stored in the temporary store. The DFM station internally maintains a cursor pointing to the last record returned. Subsequent calls to this API will return the records after the cursor, up to the specified "maximum" or the number of actual records left.
Inputs
- maximum => integer
Maximum number of schedules to retrieve
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
The dp-throttle-list-info-iter-* set of APIs are used to retrieve a list of throttle items.
Inputs
Outputs
- records => integer
Number of items saved for future retrieval
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Update a throttle item. When this API is called, all details within the throttle schedule will be removed and replaced by the new details specified in throttle-content. Sample throttle schedules cannot be modified.
Inputs
- throttle-content => throttle-content
Content of the throttle schedule to be modified
Outputs
Rename a throttle schedule. Sample throttle schedules cannot be renamed.
Inputs
- throttle-name-or-id => string
Name or ID of the throttle schedule
Outputs
Acknowledge events. This terminates repeated notifications due to that event.
Inputs
- event-id => integer, optional
The identifier of the event that has to be acknowledged. Specify either event-id or event-id-list, not both. Range: [1..2^32-1]
Outputs
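The acknowledge call above can be sketched with the Perl bindings from the overview. Per the binding convention, hyphens in the API name become underscores in the subroutine name, so event-ack is assumed to map to event_ack; the server address, credentials, and event ID 42 are placeholders, and passing inputs as name => value pairs is an assumption about the generated bindings.

```perl
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0); # placeholder server address
$s->set_admin_user('admin', 'password');        # placeholder credentials
$s->set_server_type('DFM');

eval {
    # Acknowledge a single event; 42 is a placeholder event ID.
    # Acknowledging terminates repeated notifications for that event.
    $s->event_ack('event-id' => 42);
    print "Event 42 acknowledged\n";
};
if ($@) { # parse error reason and code, as in the dfm-about example
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```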
Delete events. This terminates repeated notifications due to the event.
Inputs
- event-id => integer, optional
The identifier of the event that has to be deleted. Specify either event-id or event-id-list, not both. Range: [1..2^32-1]
- event-id-list => event-id-type[], optional
The event identifiers to be deleted. Must specify either event-id or event-id-list, not both.
Outputs
The event-generate API helps clients to generate events in the DFM system.
Inputs
- event-message => string, optional
A message specific to this event. This message will be displayed in the Event Details page. If not specified, nothing will be shown in the condition field of the Event Details page.
- event-name => string
The event name that is being generated. Currently, this can only be an event name corresponding to a user-defined event added to DFM.
- source => string
The ID/name of the source object of the event. This is the object ID, short name, or long name that DFM generates for an object. For example, for a host, the host ID/name can be found by using the CLI command "dfm host list".
- timestamp => integer, optional
The date-time when the event is generated. This is the number of seconds elapsed since midnight on January 1, 1970 (UTC). If not specified, or if an invalid timestamp is specified, the time on the DFM server when the API is invoked is used.
Outputs
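As a sketch of how event-generate might be invoked through the Perl bindings described earlier (hyphens in the API name become underscores in the subroutine name, so event_generate is assumed; the server address, credentials, event name, and source below are placeholders, and passing inputs as name => value pairs is an assumption about the generated bindings):

```perl
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0); # placeholder server address
$s->set_admin_user('admin', 'password');        # placeholder credentials
$s->set_server_type('DFM');

eval {
    # The event name must correspond to a user-defined event already added
    # to DFM, and the source must be an object ID/name known to DFM
    # (both are placeholders here).
    $s->event_generate(
        'event-name'    => 'userdefined-test-event',
        'source'        => 'storage01',
        'event-message' => 'Generated via the Perl SDK',
    );
};
if ($@) { # parse error reason and code, as in the dfm-about example
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```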
event-list-iter-end is used to tell the DFM station that the temporary store used by DFM to support the event-list-iter-next API for the particular tag is no longer necessary.
Inputs
- tag => string
An internal opaque handle used by the DFM station
Outputs
The event-list-iter-next API is used to iterate over the list of events stored in the temporary store created by the event-list-iter-start API. The DFM station internally maintains a cursor pointing to the last record returned. Subsequent calls to this API will return the records after the cursor, up to the specified "maximum" or the number of actual records left.
Inputs
- maximum => integer
The maximum number of events to return.
- tag => string
An opaque handle used by the DFM station to identify the temporary store created by event-list-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [0..(2^31)-1]
List events. The event-list-iter-* set of APIs are used to retrieve the list of events. The event-list-iter-start API is used to load the list of events into a temporary store. The API returns a tag to temporary store so that subsequent APIs can be used to iterate over the list in the temporary store.
Note that, depending on the input parameters, this API may take up to "timeout" seconds to return. Subsequent calls to event-list-iter-next() will return immediately.
Inputs
- event-id => integer, optional
The 64-bit identifier of the event that has to be listed.
- event-source-id => integer, optional
Lists events generated by the specified source. The event-source-id is a name or object identifier returned by other APIs. For example, the event-source-id could refer to a storage system (from group-member-list-iter-start) or dataset (from dataset-list-iter-start). If the event-source-id identifies a group, lists events for all sources in that group.
- include-dataset-resource-status-events => boolean, optional
This input is considered only if event-source-id refers to a dataset. By default this is false. When this element is true, all events of resources associated with members of the dataset are returned. This helps in listing the set of events that constitutes the resource status of the dataset.
- include-deleted-objects => boolean, optional
If TRUE, also lists events on objects which have been marked "deleted" in DFM's database. If FALSE, lists only events generated on objects that have not been "deleted" from DFM's database. This field is ignored if event-source-id is set. Default is TRUE.
- include-related => boolean, optional
If TRUE, includes events of related objects that resolve from event-source-id. Ignored if event-source-id is not specified in the input. For example, if event-source-id is an aggregate, all events of the aggregate and its related storage system, volumes, qtrees, disks, and LUNs are included.
- is-acknowledged => boolean, optional
If TRUE, lists all acknowledged events. If FALSE, lists all unacknowledged events. If this parameter is not specified, lists all events irrespective of their status. Default is empty.
- is-deleted => boolean, optional
If TRUE, lists all events which have been marked "deleted" in DFM's database. Such events are not normally shown in event views. If FALSE, lists all un-deleted events. If this parameter is not specified, lists all events irrespective of their deletion status. Default is empty.
- is-history => boolean, optional
If TRUE, lists all events generated in DFM after it has been installed; otherwise, show only "current" events. Default is FALSE.
- is-most-severe-first => boolean, optional
If TRUE, specifies that events should be returned in descending order of severity, then by timestamp. The default is FALSE, meaning that events are returned in order of timestamp only.
- is-oldest-first => boolean, optional
If TRUE, specifies that events should be returned in ascending order of timestamp (that is, oldest events first). The default is FALSE, meaning that events are returned in descending order of timestamp (that is, newest events first).
- max-events => integer, optional
If specified, this is the maximum number of events that the client wishes to receive at once. If set to zero, return all events. The default value of this parameter is 50,000. Range: [0..2^31-1]
- object-management-filter => object-management-interface, optional
Filters the events based on the mode of the event's source object. The filter "cluster" indicates C-mode objects and "node" indicates 7-mode objects. If no filter is supplied, all events will be considered.
- timeout => integer, optional
Number of seconds after which the API should terminate, if no events are received matching the input criteria. If the value is 0, or not specified, the API will terminate immediately (acting as an instantaneous poll for events). If the timeout expires with no matching events, the API returns successfully with an empty list of events.
If is-history or is-deleted is set to TRUE, or if a specific event-id is specified, or if event-source-id is specified to be 0, then the timeout value is ignored.
If time-range is set, timeout is also ignored.
Outputs
- records => integer
The number of events matching the specified input criteria. This is the number of records that will be returned by subsequent calls to event-list-iter-next().
- tag => string
An opaque handle you must pass to event-list-iter-next() and event-list-iter-end() to refer to this list of events.
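The start/next/end pattern described above can be sketched as follows, assuming the hyphen-to-underscore binding convention and hashref outputs as in the dfm-about example in the overview; the connection details, max-events value, and batch size are placeholders:

```perl
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0); # placeholder server address
$s->set_admin_user('admin', 'password');        # placeholder credentials
$s->set_server_type('DFM');

eval {
    # Load matching events into a temporary store on the DFM station and
    # receive a tag identifying that store plus the total record count.
    my $start   = $s->event_list_iter_start('max-events' => 100);
    my $tag     = $start->{tag};
    my $records = $start->{records};

    # Walk the store in batches; the DFM station keeps the cursor internally.
    while ($records > 0) {
        my $batch = $s->event_list_iter_next('tag' => $tag, 'maximum' => 20);
        last if $batch->{records} == 0;
        $records -= $batch->{records};
        # ... process the returned events here ...
    }

    # Tell the DFM station the temporary store is no longer necessary.
    $s->event_list_iter_end('tag' => $tag);
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```

The same start/next/end shape applies to the other iterator families in this section (dp-throttle-list-info-iter-*, eventclass-list-iter-*, group-list-iter-*, and so on).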
event-status-change-list-iter-end is used to tell the DFM station that the temporary store used by DFM to support the event-status-change-list-iter-next API for the particular tag is no longer necessary.
Inputs
- tag => string
An internal opaque handle used by the DFM station
Outputs
The event-status-change-list-iter-next API is used to iterate over the list of events stored in the temporary store created by the event-status-change-list-iter-start API. The DFM station internally maintains a cursor pointing to the last record returned. Subsequent calls to this API will return the records after the cursor, up to the specified "maximum" or the number of actual records left.
Inputs
- maximum => integer
The maximum number of events to return.
- tag => string
An opaque handle used by the DFM station to identify the temporary store created by event-status-change-list-iter-start.
Outputs
- events => event-info[]
Array of events managed by DFM.
- records => integer
The number of records actually returned. Range: [0..(2^31)-1]
List events that had status changes (acknowledged or deleted) within the specified time range. The event-status-change-list-iter-* set of APIs are used to retrieve the list of events that had status changes.
The event-status-change-list-iter-start API is used to load the list of events into a temporary store. The API returns a tag to temporary store so that subsequent APIs can be used to iterate over the list in the temporary store.
The returned list of events will be sorted according to when their status changed (either the eventAcked timestamp or the eventDeleted timestamp). An event that is both acknowledged and deleted within the requested time range will appear twice in the returned list, because those count as two status changes; it appears once based on its acknowledged timestamp and once based on its deleted timestamp.
Note that, depending on the input parameters, this API may take up to "timeout" seconds to return. Subsequent calls to event-status-change-list-iter-next() will return immediately.
Inputs
- max-events => integer, optional
If specified, this is the maximum number of events that the client wishes to receive at once. If set to zero, return all events. The default value of this parameter is 50,000. Range: [0..2^31-1]
- time-range => event-timestamp-range
Lists all events which were generated in the range specified. If the end-time of the time-range is sometime in the future, timeout will be ignored.
- timeout => integer, optional
Number of seconds after which the API should terminate, if no events are received matching the input criteria. If the value is 0, or not specified, the API will terminate immediately (acting as an instantaneous poll for events). If the timeout expires with no matching events, the API returns successfully with an empty list of events.
Outputs
- records => integer
The number of events matching the specified input criteria. This is the number of records that will be returned by subsequent calls to event-status-change-list-iter-next().
- tag => string
An opaque handle you must pass to event-status-change-list-iter-next() and event-status-change-list-iter-end() to refer to this list of events.
Supports adding custom event classes.
Inputs
- is-allow-duplicates => boolean, optional
The event service will not drop duplicate events of this event class if is-allow-duplicates is true. An event is a duplicate if it has the same event-name as a previous event with the same event-class and the same event-source. It is false by default. If an invalid value is provided, it is treated as true.
- is-multi-current => boolean, optional
If true, the event service keeps multiple current events of this event class for each event-source. Valid only with is-allow-duplicates. It is false by default. If an invalid value is provided, it is treated as true.
Outputs
Supports deletion of custom event classes.
Inputs
- event-class-name => string
Custom event class name or its database identifier.
Outputs
Lists all or a subset of the custom event classes.
Inputs
Outputs
The eventclass-list-iter-* set of APIs are used to retrieve the list of event classes. eventclass-list-iter-end is used to tell the DFM station that the temporary store used by DFM to support the eventclass-list-iter-next API for the particular tag is no longer necessary.
Inputs
- tag => string
An internal opaque handle used by the DFM station
Outputs
For more documentation, see eventclass-list-iter-start. The eventclass-list-iter-next API is used to iterate over the event classes stored in the temporary store created by the eventclass-list-iter-start API.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
Tag from a previous eventclass-list-iter-start. It's an opaque handle used by the DFM station to identify the temporary store created by eventclass-list-iter-start.
Outputs
- records => integer
The number of records actually returned as a result of invoking this API.
The eventclass-list-iter-* set of APIs are used to retrieve the list of event classes in DFM. The eventclass-list-iter-start API is used to load the list of event classes into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the event classes in the temporary store. If eventclass-list-iter-start is invoked twice, then two distinct temporary stores are created.
Inputs
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with eventclass-list-iter-next.
- tag => string
Tag to be used in subsequent calls to eventclass-list-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
Ends iteration of targets.
Inputs
- tag => string
Tag from a previous fcp-target-list-info-iter-start.
Outputs
Get the next set of records in the iteration started by a call to fcp-target-list-info-iter-start. This API fetches the FCP target info records. The input parameter 'maximum' specifies the number of records returned at a time.
Inputs
- maximum => integer
Maximum records to retrieve.
Range: [1..2^31-1]
- tag => string
Tag from a previous fcp-target-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. A value of 0 indicates the end of the records.
Range: [0..(2^31)-1]
Start iteration of targets. Depending on the input, it will return a tag and the number of records to be retrieved.
Inputs
- object-name-or-id => obj-name-or-id, optional
Name or ID of the following objects. If a storage system name or ID is specified, only the FCP targets discovered on that storage system are returned.
The name of an FCP target should be specified in colon-separated format, for example: storage01:0c_2. If no object-name-or-id is present in the input, all the FCP targets will be fetched.
Outputs
- records => integer
Number of records fetched and stored for retrieval using fcp-target-list-info-iter-next.
Range: [1..2^31-1]
- tag => string
Tag to be used for subsequent calls.
Retrieve data of all the graph lines of a graph.
Inputs
- end-date => integer, optional
The number of seconds in the future that the graph should end. The graph values from the current time till the end-date will be extrapolated and used for trending. Use a negative value if the graph should stop in the past.
- graph-period => string, optional
The period for which graph data has to be returned. This returns consolidated graph data depending on the graph period. Possible values: - '1d' - graph data for 1 day.
- '1w' - graph data for 1 week.
- '1m' - graph data for 1 month.
- '3m' - graph data for 3 months.
- '1y' - graph data for 1 year.
Default value is '1d'. The default value of start-date will be the same as the graph period specified. The default values of end-date will be as follows: - 3 hours for graph period '1d'.
- 1 day for graph period '1w'.
- 4 days for graph period '1m'.
- 7 days for graph period '3m'.
- 31 days for graph period '1y'.
- secondary-object => obj-name-or-id, optional
This object is valid only if the primary object is a quota user, and should be a volume or a qtree. The primary and secondary objects together represent a single quota object. If not specified, it is ignored when the primary object is not a quota user; if the primary object is a quota user, the ZAPI will fail with errors. Used in graphs like user-disk-space-used-vs-total and user-disk-space-used-percent.
Outputs
Terminate a graph list iteration and clean up any saved info.
Inputs
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
Returns items from a previous call to graph-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
- records => integer
The number of records actually returned.
Initiates a query for a list of graphs and their metadata, such as graph lines and sample information.
Inputs
Outputs
- records => integer
Number indicating how many items are available for future retrieval with graph-list-info-iter-next. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store. Used in subsequent calls to graph-list-info-iter-next or graph-list-info-iter-end.
Add a member to a group.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EOBJECTAMBIGUOUS - The specified group name could refer to 2 or more groups. Try again with a group ID or a fully qualified group name.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDMEMBER - The proposed member object is not a groupable object type. The error message indicates which object.
- EMEMBERALREADYINGROUP - The proposed new member is already in the group.
- EMEMBERNAMEAMBIGUOUS - The proposed member name could refer to 2 or more objects.
- EOBJECTNOTFOUND - The proposed member name or ID was not found.
- EACCESSDENIED - The user does not have the capability to add members to the group.
- EDATABASEERROR - A database error occurred.
Inputs
- group-name-or-id => group-name-or-id
Name or ID of a group. If a group name is specified, it must be fully qualified.
Outputs
Copy the group and all of its subgroups under a new parent group.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EGROUPEXISTS - An attempt was made to create a new group with the same name as an already existing group. Try again with a different name.
- EOBJECTAMBIGUOUS - The specified group name could refer to 2 or more groups. Try again with a group ID or a fully qualified group name.
- EINVALIDGROUPNAME - The proposed group name was not valid.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EPARENTGROUPDOESNOTEXIST - The parent group name or ID was not found.
- EPARENTGROUPNAMEAMBIGUOUS - The specified parent group name could refer to 2 or more groups. Try again with a parent group ID or a fully qualified parent group name.
- EACCESSDENIED - The user does not have the capability to copy the group.
- EDATABASEERROR - A database error occurred.
Inputs
- group-name-or-id => group-name-or-id
Name or ID of a group. If a group name is specified, it must be fully qualified.
- new-group-name => string, optional
The group will be copied under the new parent group with this name. Group names may contain any printable ASCII character except slash characters. The maximum name length is 64 characters. A group name cannot be "global" or "all" or fully numeric.
Outputs
Create a new group.
Error conditions: - EGROUPEXISTS - An attempt was made to create a new group with the same name as an already existing group. Try again with a different name.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDGROUPNAME - The proposed group name was not valid.
- EPARENTGROUPDOESNOTEXIST - The parent group name or ID was not found.
- EPARENTGROUPNAMEAMBIGUOUS - The specified parent group name could refer to 2 or more groups. Try again with a parent group ID or a fully qualified parent group name.
- EACCESSDENIED - The user does not have the capability to create the group.
- EDATABASEERROR - A database error occurred.
Inputs
- group-name => string
Name of the new group. Group names may contain any printable ASCII character except slash characters. The maximum name length is 64 characters. A group name cannot be "all" or "global" or fully numeric.
- parent-group-name-or-id => group-name-or-id, optional
Name or ID of the parent group. If this value is not specified, a new top-level group is created. If a group name is specified, it must be fully qualified.
Outputs
- group-id => integer
ID of the newly created group.
Range: [1..2^31-1]
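A sketch of group-create through the Perl bindings (group_create subroutine per the hyphen-to-underscore convention; the group name and connection details are placeholders, and reading the result as a hashref keyed by the documented output name follows the dfm-about example in the overview):

```perl
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0); # placeholder server address
$s->set_admin_user('admin', 'password');        # placeholder credentials
$s->set_server_type('DFM');

eval {
    # "perl_sdk_group" is a placeholder name: printable ASCII, no slashes,
    # at most 64 characters, not "all", "global", or fully numeric.
    my $output = $s->group_create('group-name' => 'perl_sdk_group');
    print "Created group with ID: $output->{'group-id'}\n";
};
if ($@) {
    # EGROUPEXISTS, EACCESSDENIED, etc. surface here as exceptions.
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```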
Remove one member from a group.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EOBJECTAMBIGUOUS - The specified group name could refer to 2 or more groups. Try again with a group ID or a fully qualified group name.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EMEMBERNOTINGROUP - The object specified is not a member in the group.
- EOBJECTNOTFOUND - The object specified was not found.
- EACCESSDENIED - The user does not have the capability to delete members from the group.
- EDATABASEERROR - A database error occurred.
Inputs
- group-name-or-id => group-name-or-id
Name or ID of a group. If a group name is specified, it must be fully qualified.
- member-name-or-id => object-name-or-id
Name or ID of a member to remove from the group.
Outputs
Destroy an existing group. Child groups are destroyed recursively. If there is any error, then no groups are destroyed.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EOBJECTAMBIGUOUS - The specified group name could refer to 2 or more groups. Try again with a group ID or a fully qualified group name.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EACCESSDENIED - The user does not have the capability to destroy the group.
- EDATABASEERROR - A database error occurred.
Inputs
- group-name-or-id => group-name-or-id
Name or ID of a group to be destroyed. If a group name is specified, it must be fully qualified.
Outputs
Get the options for a group. Option values that are not present indicate that the option is using the global default.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EOBJECTAMBIGUOUS - The specified group name could refer to 2 or more groups. Try again with a group ID or a fully qualified group name.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EACCESSDENIED - The user does not have the capability to read options of the group.
- EDATABASEERROR - A database error occurred.
Inputs
- group-name-or-id => group-name-or-id
Name or ID of a group. If a group name is specified, it must be fully qualified.
Outputs
Get the status of the group
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EOBJECTAMBIGUOUS - The specified group name could refer to 2 or more groups. Try again with a group ID or a fully qualified group name.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EACCESSDENIED - The user does not have the capability to get the status of the group.
- EDATABASEERROR - A database error occurred.
Inputs
- group-name-or-id => group-name-or-id
Name or ID of a group. If a group name is specified, it must be fully qualified.
Outputs
Checks if the object associated with the object-id input is a member of the group. This includes both direct and indirect membership. If a group id of 0 is passed, this will always return true as long as the object is a valid object.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EOBJECTAMBIGUOUS - The specified group name could refer to 2 or more groups. Try again with a group ID or a fully qualified group name.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EMEMBERNAMEAMBIGUOUS - The proposed member name could refer to 2 or more objects.
- EOBJECTNOTFOUND - The object specified was not found.
- EACCESSDENIED - The user does not have the capability to read from the database.
- EDATABASEERROR - A database error occurred.
Inputs
- group-name-or-id => group-name-or-id
Name or ID of a group. If a group name is specified, it must be fully qualified.
Outputs
- is-member => boolean
Returns true if the object associated with the object-id is a direct or indirect member of the group.
The group-list-iter-* set of APIs are used to retrieve the members of the DFM global group. group-list-iter-end is used to tell the DFM station that the temporary store used by DFM to support the group-list-iter-next API for the particular tag is no longer necessary.
Error conditions: - EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDTAG - The iterator tag specified is not valid. Restart the iteration operation to obtain a new valid tag.
Inputs
- tag => string
An internal opaque handle used by the DFM station
Outputs
For more documentation, see group-list-iter-start. The group-list-iter-next API is used to iterate over the members of the group stored in the temporary store created by the group-list-iter-start API.
Error conditions: - EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDTAG - The iterator tag specified is not valid. Restart the iteration operation to obtain a new valid tag.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
Range: [1..2^31-1]
- tag => string
Tag from a previous group-list-iter-start. It's an opaque handle used by the DFM station to identify the temporary store created by group-list-iter-start.
Outputs
- records => integer
The number of records actually returned as a result of invoking this API.
Range: [1..2^31-1]
The group-list-iter-* set of APIs are used to retrieve the list of DFM groups. By default, a group is listed if the user has DFM.Database.Read capability on the group or if the user has DFM.Database.Read capability on any subgroup of the group.
If rbac-operation is present in the input, then a group is listed, if the authenticated user has the requested capability on that group or if the user has the required capability on any of its sub-groups. In that case, has-privilege output will be false for the parent group and true for the sub-group.
The group-list-iter-start API is used to load the list of groups into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the groups in the temporary store.
If group-list-iter-start is invoked twice, then two distinct temporary stores are created.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EOBJECTAMBIGUOUS - Specified group name could refer to 2 or more groups. Try again with a group ID or a fully qualified group name.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EACCESSDENIED - The user does not have DFM.Database.Read or, if rbac-operation is specified, the user does not have the requested capability on the specified group.
- EDATABASEERROR - A database error occurred.
- ENOTFOUNDOPERATION - If the input element rbac-operation is not a valid RBAC operation name.
Inputs
- group-name-or-id => group-name-or-id, optional
If specified, only the specified group and its immediate children are returned.
- list-subgroups => boolean, optional
If TRUE, lists all the subgroups recursively. If group-name-or-id is specified, then all its subgroups are listed recursively. If FALSE, only top level groups are listed. If group-name-or-id is specified, then the immediate children groups of the group are listed. Default value is false.
- rbac-operation => string, optional
Name of an RBAC operation. If specified, only return resource groups for which the authenticated administrator has the required capability and DFM.Database.Read. Default value is DFM.Database.Read. This can be up to 255 characters long. The parameter is of the form: Application.Category.TypeOfAccess. For example: "DFM.Database.Read" or "DFM.Database.Write".
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with group-list-iter-next.
Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to group-list-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
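The start/next/end iteration pattern above can be sketched in Perl. This is a minimal sketch, not a definitive implementation: the server name and credentials are placeholders, the alternating name/value argument style and hash-style output access ({tag}, {records}) follow the dfm-about example earlier in this document, and group_list_iter_end is assumed to follow the *-iter-end convention used by the other iterator APIs in this reference.

```perl
# Minimal sketch of the group-list-iter-* pattern (assumptions noted above).
use strict;
use warnings;
use NaServer;   # from <installation_folder>/lib/perl/NetApp

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder server name
$s->set_admin_user('admin', 'password');           # placeholder credentials
$s->set_server_type('DFM');

eval {
    # Load groups (recursively) into a temporary store on the DFM station.
    my $start = $s->group_list_iter_start('list-subgroups', 'true');
    my ($tag, $left) = ($start->{tag}, $start->{records});

    # Drain the temporary store in batches of up to 50 records.
    while ($left > 0) {
        my $next = $s->group_list_iter_next('tag', $tag, 'maximum', 50);
        last unless $next->{records};
        $left -= $next->{records};
        # ... process the returned group records here ...
    }

    # Release the temporary store identified by the tag.
    $s->group_list_iter_end('tag', $tag);
};
if ($@) {
    my ($reason, $code) = $@ =~ /(.+)\s\((\d+)\)/;   # parse reason and code
    print "Error: $reason ($code)\n";
}
```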
Move the group under a new parent group. The id of the group does not change after the move.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EGROUPEXISTS - An attempt was made to create a new group with the same name as an already existing group. Try again with a different name.
- EOBJECTAMBIGUOUS - The specified group name could refer to 2 or more groups. Try again with a group ID or a fully qualified group name.
- EINVALIDGROUPNAME - The proposed group name was not valid.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EPARENTGROUPDOESNOTEXIST - The parent group name or ID was not found.
- EPARENTGROUPNAMEAMBIGUOUS - The specified parent group name could refer to 2 or more groups. Try again with a parent group ID or a fully qualified parent group name.
- EACCESSDENIED - The user does not have the capability to move the group.
- EDATABASEERROR - A database error occurred.
Inputs
- group-name-or-id => group-name-or-id
Name or ID of a group. If a group name is specified, it must be fully qualified.
- new-group-name => string, optional
Group will be moved under the new parent group with this name. Group names may contain any printable ASCII character except slash characters. The maximum name length is 64 characters. A group name cannot be "global" or "all" or fully numeric.
- parent-group-name-or-id => group-name-or-id
Name or ID of the new parent group. If a group name is specified, it must be fully qualified.
Outputs
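The move operation can be sketched as follows. This is a hedged sketch: it assumes an authenticated NaServer handle $s built as in the dfm-about example, and the group names are hypothetical placeholders.

```perl
# Sketch: move a group under a new parent; $s is an authenticated DFM
# NaServer handle (see the dfm-about example). Names are placeholders.
eval {
    $s->group_move(
        'group-name-or-id',        'datacenter1/finance',  # fully qualified name
        'parent-group-name-or-id', 'archive',
    );
};
if ($@) {
    my ($reason, $code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "group-move failed: $reason ($code)\n";  # e.g. EPARENTGROUPDOESNOTEXIST
}
```

Note that, per the description above, the group's id is unchanged by the move.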
Change the name of a group.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EGROUPEXISTS - An attempt was made to create a new group with the same name as an already existing group. Try again with a different name.
- EOBJECTAMBIGUOUS - The specified group name could refer to 2 or more groups. Try again with a group ID or a fully qualified group name.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDGROUPNAME - The proposed group name was not valid.
- EACCESSDENIED - The user does not have the capability to rename the group.
- EDATABASEERROR - A database error occurred.
Inputs
- group-name-or-id => group-name-or-id
Name or ID of a group. If a group name is specified, it must be fully qualified.
- new-group-name => string
New name of the group. Group names may contain any printable ASCII character except slash characters. The maximum name length is 64 characters. A group name cannot be "all" or "global" or fully numeric.
Outputs
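A rename can be sketched the same way. This sketch assumes an authenticated NaServer handle $s as in the dfm-about example; the group names are placeholders.

```perl
# Sketch: rename a group. $s is an authenticated DFM NaServer handle.
eval {
    $s->group_rename(
        'group-name-or-id', 'datacenter1/fin',  # fully qualified current name
        'new-group-name',   'finance',          # not "all", "global", or fully numeric
    );
};
print "group-rename failed: $@\n" if $@;
```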
Change one or more options for a group. Only options that are specified will be updated. The remaining options will retain their current values. If there is any error, then no options are changed.
Error conditions: - EGROUPDOESNOTEXIST - The group name or ID was not found.
- EOBJECTAMBIGUOUS - The specified group name could refer to 2 or more groups. Try again with a group ID or a fully qualified group name.
- EINTERNALERROR - An error occurred while processing the request. Try again later.
- EINVALIDINPUT - Invalid value for one of the options was specified.
- EACCESSDENIED - The user does not have the capability to set options for the group.
- EDATABASEERROR - A database error occurred.
Inputs
- group-name-or-id => group-name-or-id
Name or ID of a group. If a group name is specified, it must be fully qualified.
Outputs
See group-member-list-iter-start for more information. Frees up the resources used by a previous call to group-member-list-iter-start.Inputs
- tag => string
Tag returned by the call to group-member-list-iter-start
Outputs
See group-member-list-iter-start for more information. The group-member-list-iter-next API is used to iterate over the members of the group stored in the temporary store created by the group-member-list-iter-start API. The DFM station internally maintains a cursor pointing to the last record returned. Subsequent calls to this API return the records after the cursor, up to the specified maximum or the number of records remaining.Inputs
- maximum => integer
The maximum number of records to be returned from the temporary store.
- tag => string
Tag from a previous group-member-list-iter-start. It's an opaque handle used by the DFM station to identify the temporary store created by group-member-list-iter-start.
Outputs
- records => integer
The number of entries actually returned as a result of invoking this API.
DFM has an object known as the "group" that contains other DFM objects. Groups may also have subgroups. The group-member-list-* APIs are used to retrieve the members of particular groups. These APIs can be used to retrieve either all members or members of a particular type. These APIs do not return the subgroups; use group-list-iter-start to get a list of subgroups. The group-member-list-iter-start API is used to load the group members into a temporary store. Subsequent group-member-list-iter-next invocations iterate over the records in the temporary store. The group-member-list-iter-end API is used to release the temporary store. If group-member-list-iter-start is invoked twice, the DFM station creates two different temporary stores that can be accessed using the different tags. When this API is invoked without specifying any groups, the type parameter must be specified; in that case, the API lists all the objects of the specified type that are known to DFM, whether or not they are members of any group. If you specify groups when invoking this API, the type parameter is optional; the API lists all the objects (optionally, of the specified type) that have been directly added to the specified groups.Inputs
- groups => group[], optional
A list of group names. If no groups are specified, all the DFM objects of the specified type are listed. You must specify either groups or type or both.
- object-management-filter => object-management-interface, optional
Filter the objects based on the Data ONTAP interface that provides complete management for the object (for example, ONTAP CLI, SNMP, or ONTAPI). If no filter is supplied, all objects will be considered.
- type => string, optional
The type of the members to be listed. Possible values include Cluster, Filer, Agent, Cache, OSSV Host, vFiler, vserver, Volume, Qtree, Configuration, Initiator Group, Lun Path, FCP Target, FCP Initiator, SRM Path, Aggregate, Resource Pool, Dataset, App Object, Host Service. If not specified, all the group members are listed. You must specify either groups or type or both.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with group-member-list-iter-next.
- tag => string
Tag to be used in subsequent calls to group-member-list-iter-next.
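Listing the members of a group follows the same iterator pattern. This is a sketch under stated assumptions: $s is an authenticated NaServer handle as in the dfm-about example, the group name is a placeholder, and passing the groups array as a single name string is an assumption about how the bindings flatten list-typed inputs.

```perl
# Sketch: list all Volume members of a group, then release the store.
# $s is an authenticated DFM NaServer handle; 'datacenter1' is a placeholder.
eval {
    my $start = $s->group_member_list_iter_start(
        'groups', 'datacenter1',   # omit to list all objects of the given type
        'type',   'Volume',        # groups or type (or both) must be specified
    );
    my ($tag, $left) = ($start->{tag}, $start->{records});
    while ($left > 0) {
        my $next = $s->group_member_list_iter_next('tag', $tag, 'maximum', 50);
        last unless $next->{records};
        $left -= $next->{records};
        # ... process the returned member records here ...
    }
    $s->group_member_list_iter_end('tag', $tag);
};
print "Error: $@\n" if $@;
```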
Add a new managed host to the DataFabric Manager. The host being added must be a storage system or a host agent. DFM figures out what type of host is being added. If it's a storage system or NetCache or FC Switch, we add it to the database and set the appliance-id. If it's a host agent, we add the agent to the database and set the agent-id. On return, only one of appliance-id or agent-id will be set.Inputs
- host-name-or-id => string
Name, DFM id, or IP address of the host to be added. The value can be a DFM object name (maximum 255 characters), a fully qualified domain name (FQDN) (maximum 255 characters), or an IP address. If the value is a DFM object name for a host object with a deletion flag, the deletion flag is removed. Length: [0..255]
Outputs
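Since exactly one of appliance-id or agent-id is set on return, a caller can distinguish the two cases as sketched below. The host name is a placeholder, and $s is assumed to be an authenticated NaServer handle as in the dfm-about example.

```perl
# Sketch: add a host and check which id was set on return.
# $s is an authenticated DFM NaServer handle; host name is a placeholder.
eval {
    my $out = $s->host_add('host-name-or-id', 'filer42.example.com');
    # Per the description, only one of appliance-id or agent-id is set.
    if (defined $out->{'appliance-id'}) {
        print "Added storage system, appliance-id: $out->{'appliance-id'}\n";
    }
    else {
        print "Added host agent, agent-id: $out->{'agent-id'}\n";
    }
};
print "host-add failed: $@\n" if $@;
```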
Add a license to a host. The host must be a storage system, must already be present in DFM's database, and its root login and password must be set in DFM. DFM will check the list of licenses on the host and update the database when the following types of licenses are changed: - Snapvault primary (sv_ontap_pri)
- Snapvault secondary (sv_ontap_sec)
- Unix primary (sv_unix_pri)
- Linux primary (sv_linux_pri)
- Windows primary (sv_windows_pri)
- Snapmirror (snapmirror)
- Synchronous snapmirror (snapmirror_sync)
- Windows OFM primary (sv_windows_ofm_pri)
- Nearstore (nearstore_option)
- NFS (nfs)
- CIFS (cifs)
- iSCSI (iscsi)
- MultiStore (vfiler)
- FCP (fcp)
- ASIS (a_sis)
If the license is already in use by another host and is not a site license, then the ZAPI will apply the license to the host, and then return with error code ELICENSEINUSE. The ELICENSEINUSE error will not be returned if the optional parameter suppress-inuse-error is true. The ELICENSEINUSE error will not prevent the license from being applied to the host, since it is not the role of the DFM to prevent the user from installing duplicate licenses. The storage system must be running a minimum ONTAP version of 6.5.6.Inputs
Outputs
Add a new managed OSSV host to the DataFabric Manager. If there is an OSSV agent running on the host, we add the OSSV agent to the database as a snapvault primary and set the ossv-id.Inputs
- host-name-or-id => string
Name or IP address of the host to be added, or the DFM object id of a previously deleted OSSV agent. When adding a new OSSV agent, the value can be a valid DFM object name (maximum 255 characters), a fully qualified domain name (FQDN) (maximum 255 characters), or the IPv4 address. When adding a previously deleted OSSV agent, the id must be the id of the deleted OSSV agent, not the id of a host agent on the same client system. Length: [0..255]
Outputs
- host-address => string, optional
IP address for the host. Length: [0..39]
Start the OSSV service on the host agent using the ossv ZAPI. DFM must have valid credentials for the Host Agent. The Host Agent and OSSV Agent must be installed on the host. Valid only for Host Agents. DFM will wait up to the time allowed in the timeout argument to make sure the requested service state was reached. If the timeout is exceeded, we return ESERVICESTATEUNKNOWN.Inputs
- timeout => integer, optional
Number of seconds to wait for ossv service to start/stop. Range: [1..100]
Outputs
Stop the OSSV service on the host agent using the ossv ZAPI. DFM must have valid credentials for the Host Agent. The Host Agent and OSSV Agent must be installed on the host. Valid only for Host Agents. DFM will wait up to the time allowed in the timeout argument to make sure the requested service state was reached. If the timeout is exceeded, we return ESERVICESTATEUNKNOWN.Inputs
- agent-name-or-id => host-name-or-id
The name or id of the agent host to start/stop the ossv service on.
- timeout => integer, optional
Number of seconds to wait for ossv service to start/stop. Range: [1..100]
Outputs
Terminates a view list iteration and cleans up any saved info.Inputs
- tag => string
Tag from a previous host-capability-list-iter-start call.
Outputs
Returns items from a previous call to host-capability-list-iter-start.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous host-capability-list-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Initiates a query for a list of allowed capabilities on host. This is applicable for hosts running ONTAP versions 7.0 and above.Inputs
- host-name-or-id => obj-name-or-id
Name or id of the host to list the capabilities for. Only storage systems and vFiler units are allowed.
Outputs
- records => integer
Number indicating how many items are available for future retrieval with host-capability-list-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to host-capability-list-iter-next or host-capability-list-iter-end.
Create an NDMP user on the host, creating the user account if necessary and storing the host-encrypted password in DFM. If the user already exists, we generate the encrypted NDMP password for them on the storage system and store that in the database. If it is a new user on the storage system, we will create a new unencrypted password for the caller and use that to generate the encrypted NDMP password, which we will then store in the database. If the user is root, we will just use root's unencrypted password as the NDMP password, since the encryption requirement does not apply to the root user. New non-root users will be added to the "Backup Administrators" group. Valid only for storage systems.Inputs
- host-name-or-id => host-name-or-id
The host to create the NDMP user on.
Outputs
Deletes a host. A host cannot be deleted in the following cases: - Controllers that belong to a cluster cannot be deleted individually. In this case, ECANNOTDELETECLUSTERNODEHOST will be returned.
- An agent cannot be deleted if it is in use by ossv or has SRM paths. In this case, EHOSTINUSE will be returned. The agent can be forcefully deleted if ignore-host-in-use is set to true.
A successful deletion of a host will have the following impacts: - Resource Pools and Resource Groups containing the host will be updated based on its removal.
- All monitored objects contained by the host will be removed.
- Monitoring of the host will be stopped.
- Dataset resources containing the host will be removed based on the host removal. This may affect dataset conformance status.
- If deleting an agent that is used by ossv, ossv stop/start capability will be disabled.
- If deleting an agent that is used by SRM paths, SRM monitoring will be disabled.
Inputs
- host-name-or-id => obj-name-or-id
Name or id of a host. Host can be one of the following: filer, vfiler, cluster, vserver, agent, ossv, switch
- ignore-host-in-use => boolean, optional
Default is false. This flag is applicable only when the deleted host is of type agent. This flag controls whether an EHOSTINUSE error is returned to caller when the deleted agent is used by ossv or has SRM Paths. When this flag is set to true, agent is forcefully deleted regardless of its usage state.
Outputs
Adds a domain user on the host. This is applicable for hosts running ONTAP versions 7.0 and above.Inputs
Outputs
Terminates a view list iteration and cleans up any saved info.Inputs
- tag => string
Tag from a previous host-domainuser-list-iter-start call.
Outputs
Returns items from a previous call to host-domainuser-list-iter-start.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous host-domainuser-list-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Initiates a query for a list of domain users on host(s). Domain users on host(s) that match all filters will be returned. If no input is specified, all the domain users on all monitored storage systems/vFiler units will be returned.Inputs
- host-domainuser-name-or-id-or-sid => domainuser-name-or-id-or-sid, optional
Name or SID or id of the domain user on the host.
- object-name-or-id => obj-name-or-id, optional
Name or id of an object to list the domain users for. The allowed object types for this argument are: - Resource Group
- Host (only storage systems and vFiler units are allowed)
- verbose => boolean, optional
Default is false. If set to true, then the usergroups, roles, and allowed capabilities are placed into the host-domainuser-info element.
Outputs
- records => integer
Number indicating how many items are available for future retrieval with host-domainuser-list-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to host-domainuser-list-iter-next or host-domainuser-list-iter-end.
Pushes a domain user to a host. This is applicable for hosts running ONTAP versions 7.0 and above. The operation succeeds when the host on which the domain user is to be pushed contains usergroups similar to that of the domain user. Two usergroups are similar if they have same name and set of similar roles (roles with same name and same set of capabilities).Inputs
- host-domainuser-name-or-id-or-sid => domainuser-name-or-id-or-sid
Name or id or SID of the domain user on the host.
- host-name-or-id => obj-name-or-id
Name or id of the host to push the domain user on. Only storage systems and vFiler units are allowed.
Outputs
Removes a domain user from a usergroup or usergroups.Inputs
- host-domainuser-name-or-id-or-sid => domainuser-name-or-id-or-sid
Name or id or SID of domain user on the host.
- host-usergroup-names-or-ids => host-usergroup-name-or-id[]
List of ids or names of usergroups from which the domain user has to be removed.
Outputs
The DFM stores a set of global default values for selected attributes, which are used on all hosts. The administrator can override the values on a per-host basis. This API returns the default values for some attributes returned by host-list-info-iter-next. Default values vary according to the host type.Inputs
Outputs
Ends iteration to list hosts.Inputs
- tag => string
The tag from a previous host-list-info-iter-start.
Outputs
Get next few records in the iteration started by host-list-info-iter-start.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
The tag returned by a previous host-list-info-iter-start.
Outputs
- records => integer, optional
The number of records actually returned. Range: [1..2^31-1].
Starts iteration to list hosts. The list of hosts can include: - Storage Systems
- vFiler units
- vserver
- Host Agents
- OSSV Agents
- Switches
Use the filtering criteria in this API to specify the list of hosts returned by host-list-info-iter-next. If no filtering criteria are specified, all non-deleted hosts will be returned by host-list-info-iter-next.Inputs
- expand-filer-to-vfilers => boolean, optional
If true, and object-name-or-id is a storage system, and host-types contains "vfiler", then all vFiler units belonging to the storage system are returned; otherwise this flag is ignored. Default value is false.
- host-types => host-type[], optional
Types of hosts to list. If the host type matches any of the types in this array of host types, it will be included in the list, subject to other filtering criteria. If this list contains an unknown type, the EBADHOSTTYPE error will be returned.
- include-is-available => boolean, optional
If true, the is-available status is calculated for each host, which may make the call to this ZAPI take much longer.
- include-migration-info => boolean, optional
If true, return the migration information for the migrating vFiler units. Default false.
- is-direct-member-only => boolean, optional
If true, only return the hosts that are direct members of the specified resource group. Default value is false. This field is meaningful only if a resource group name or id is given for the object-name-or-id field.
- license-filter => license[], optional
Filter for listing storage systems based on services licensed. Applicable when host-type is "filer". Only those storage systems which have all the licenses installed as specified in the input are listed.
- object-management-filter => object-management-interface, optional
Filter the objects based on the Data ONTAP interface that provides complete management for the object (for example, ONTAP CLI, SNMP, or ONTAPI). If no filter is supplied, all objects will be considered.
- object-name-or-id => obj-name-or-id, optional
Name or identifier of an object to list hosts for. The allowed object types for this argument are: - Resource Group
- Resource Pool
- Dataset
- Storage Set
- Host
- Aggregate
- Volume
- Qtree
- OSSV Directory
- GenericAppObject
If object-name-or-id identifies a host, that single host will be returned. If object-name-or-id resolves to more than one host, all of them will be returned. If no object-name-or-id is provided, all hosts will be listed. Host Agents and OSSV agents may use a subdomain to identify themselves, unlike other host types. If the host-types element includes "agent" or "ossv", all subdomains beginning with the host name will be checked.
- query-host => boolean, optional
If true, query the host to update the database and refresh the values returned in the following elements by host-list-info-iter-next: - host-status
- is-agent-ossv-enabled
- is-snapvault-enabled
- is-ndmp-enabled
- is-snapmirror-enabled
- snapvault-access-specifier
- ndmp-access-specifier
- snapmirror-access-specifier
- host-communication-status
- host-credentials-status
- ndmp-communication-status
- ndmp-credentials-status
- ndmp-agent-status
- is-snapvault-primary
- is-snapvault-secondary
- is-snapmirror-host
- is-unix-primary
- is-linux-primary
- is-windows-primary
- is-windows-ofm-primary
- is-nearstore
The host-list-info-iter-next ZAPI defines which elements will be updated based on whether the host type supports them. Errors in communicating with or authenticating to the host will prevent updating all the elements that are supported for that host type.
Outputs
- records => integer
Number indicating how many items are available for future retrieval with host-list-info-iter-next. Range: [0..2^31-1]
- tag => string
Tag to be used in subsequent calls to host-list-info-iter-next or host-list-info-iter-end.
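A full host listing follows the same start/next/end pattern as the other iterators. This is a sketch, not a definitive implementation: $s is assumed to be an authenticated NaServer handle as in the dfm-about example, no filtering criteria are passed (so all non-deleted hosts are returned), and the hash-style output access is illustrative.

```perl
# Sketch: iterate over all monitored hosts in batches of up to 100.
# $s is an authenticated DFM NaServer handle (see the dfm-about example).
eval {
    my $start = $s->host_list_info_iter_start();  # no filter: all non-deleted hosts
    my ($tag, $left) = ($start->{tag}, $start->{records});
    while ($left > 0) {
        my $next = $s->host_list_info_iter_next('tag', $tag, 'maximum', 100);
        last unless $next->{records};
        $left -= $next->{records};
        # ... inspect each returned host record here ...
    }
    $s->host_list_info_iter_end('tag', $tag);     # release the temporary store
};
print "Error: $@\n" if $@;
```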
Modify attributes stored in the DFM database of a host managed by the DFM.Inputs
- host-name-or-id => host-name-or-id
The host to be modified.
Outputs
Change the password on the Host Agent for the built-in Host Agent management user "admin", using the Operating System credentials specified by os-username and os-password to authenticate the https POST request. If the operation succeeds, the Host Agent password stored in the DFM database is updated. Valid only for Host Agents.Inputs
- agent-name-or-id => host-name-or-id
The host to change the Host Agent password on.
- management-password => host-password, optional
New password for the Host Agent management API password. If this parameter is present, it is assumed to be encrypted using two way encryption. If this parameter is not present, the DFM will generate a password.
- os-password => host-password
root or administrative password used to authenticate https POST password change request. Encrypted using two way encryption.
- os-username => string
root or administrative user name used to authenticate https POST password change request.
Outputs
Creates a role on the host. This is applicable for hosts running ONTAP versions 7.0 and above.Inputs
- host-role => host-role-info
New role information. "host-role-name" and at least one allowed capability must be specified. Either the name or the id of the host on which the role is to be created must be provided; if both are provided, the id takes precedence. A description is also allowed. All other fields are ignored.
Outputs
Deletes a role on the host.Inputs
Outputs
Terminates a view list iteration and cleans up any saved info.Inputs
- tag => string
Tag from a previous host-role-list-iter-start call.
Outputs
Returns items from a previous call to host-role-list-iter-start.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous host-role-list-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Initiates a query for a list of roles on host(s). Roles on host(s) that match all filters will be returned. If no input is specified, all the roles on all monitored storage systems or vFiler units will be returned.Inputs
- host-role-name-or-id => access-object-name-or-id, optional
Name or id of a role on the host. If this is provided, the output contains only the information associated with this role.
- object-name-or-id => obj-name-or-id, optional
Name or id of an object to list the roles for. The allowed object types for this argument are: - Resource Group
- Host (allowed host types are storage system and vFiler units)
- verbose => boolean, optional
Default is false. If set to true, then the allowed capabilities are placed into the host-role-info element.
Outputs
- records => integer
Number indicating how many items are available for future retrieval with host-role-list-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to host-role-list-iter-next or host-role-list-iter-end.
Modifies a role on the host.Inputs
- host-role => host-role-info
Role name or id must be provided. Host name or id is an optional parameter. If one or more capabilities and/or description is provided, the role is modified accordingly. If both name and id are provided, id takes precedence.
Outputs
Pushes a role to a host. This is applicable for hosts running ONTAP versions 7.0 and above. The operation succeeds when the host on which the role is to be pushed supports all the capabilities present in the role.Inputs
- host-name-or-id => obj-name-or-id
Name or id of the host to push the role on. Only storage systems and vFiler units are allowed.
- host-role-name-or-id => access-object-name-or-id
Name or id of the role to be pushed on the host.
Outputs
Change the option on the storage system specified by host-option-name to the value specified by host-option-value. If the operation succeeds, the following options will be stored in the DFM database and will be returned in the specified elements the next time host-list-info-iter-next is called.
| host-option-name | host-list-info-iter-next element |
| ndmpd.enable | is-ndmp-enabled |
| ndmpd.access | ndmp-access-specifier |
| snapvault.enable | is-snapvault-enabled |
| snapvault.access | snapvault-access-specifier |
| snapmirror.enable | is-snapmirror-enabled |
| snapmirror.access | snapmirror-access-specifier |
If the name of the host option ends in ".access" and the value of the host option is the empty string, the value will be changed to "none". See na_options(1) for a list of option names and values. See na_protocolaccess(8) for access specifier syntax and usage. Valid only for storage systems.Inputs
- host-name-or-id => host-name-or-id
The host to set the option on.
Outputs
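Setting an option can be sketched as below. The host name is a placeholder and $s is assumed to be an authenticated NaServer handle as in the dfm-about example; the input names host-option-name and host-option-value come from the description above, and the option name is one of those listed in the table (see na_options(1) for the full list).

```perl
# Sketch: enable NDMP on a storage system. $s is an authenticated DFM
# NaServer handle; 'filer42' is a placeholder host name.
eval {
    $s->host_option_set(
        'host-name-or-id',   'filer42',
        'host-option-name',  'ndmpd.enable',   # maps to is-ndmp-enabled
        'host-option-value', 'on',
    );
};
print "host-option-set failed: $@\n" if $@;
```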
Creates a local user on the host. This is applicable for hosts running ONTAP versions 7.0 and above.Inputs
- host-user => host-user-info
New local user information. host-user-name, password and at least one usergroup must be provided. Usergroup can be specified in terms of name or usergroup id. Either host name or id on which the local user is to be created must be provided. Description, minimum password age, maximum password age are also allowed. All other fields are ignored. If both id and name are specified, then id takes precedence.
Outputs
Deletes a local user on the host.Inputs
Outputs
Terminates a view list iteration and cleans up any saved info.Inputs
- tag => string
Tag from a previous host-user-list-iter-start call.
Outputs
Returns items from a previous call to host-user-list-iter-start.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous host-user-list-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Initiates a query for a list of local users on host(s). Local users on host(s) that match all filters will be returned. If no input is specified, all the local users on all monitored storage systems or vFiler units will be returned.Inputs
- host-user-name-or-id => access-object-name-or-id, optional
Name or id of a local user.
- host-usergroup-name-or-id => usergroup-name-or-id, optional
Name or id of the usergroup. If specified, only the users belonging to this group will be listed.
- object-name-or-id => obj-name-or-id, optional
Name or id of an object to list the local users for. The allowed object types for this argument are: - Resource Group
- Host (allowed host types are storage system and vFiler unit)
- verbose => boolean, optional
Default is false. If set to true, then the usergroups, roles, and allowed capabilities are placed into the host-user-info element.
Outputs
- records => integer
Number indicating how many items are available for future retrieval with host-user-list-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to host-user-list-iter-next or host-user-list-iter-end.
Modifies local user on the host.Inputs
- host-user => host-user-info
Local user name or id must be provided. Host name or id is an optional parameter. If one or more usergroups (ids or names), description, minimum password age, and/or maximum password age are provided, the local user is modified accordingly. All other fields are ignored. If both id and name are specified, then id takes precedence.
Outputs
Modifies password of a local user on the host.Inputs
- host-name-or-id => obj-name-or-id, optional
Name or id of Storage System/vFiler unit on which the local user exists.
- host-user-name-or-id => access-object-name-or-id
Name or id of the local user. For storage systems running Data ONTAP version 7.0 or later, the user should have been discovered by DFM and thus host user id can be specified instead of host user name.
Outputs
Pushes a local user to a host. This is applicable for hosts running ONTAP versions 7.0 and above. The operation succeeds when the host on which the local user is to be pushed contains usergroups similar to that of the local user. Two usergroups are similar if they have same name and set of similar roles (roles with same name and same set of capabilities).Inputs
- host-name-or-id => obj-name-or-id
Name or id of the host to push the user on. Only storage systems and vFiler units are allowed.
- host-user-name-or-id => access-object-name-or-id
Name or id of the local user on the host.
Outputs
Creates a usergroup on the host. This is applicable for hosts running ONTAP versions 7.0 and above.Inputs
- host-usergroup => host-usergroup-info
New usergroup information. "host-usergroup-name" and at least one role must be specified. Role can be specified in terms of role name or role id. Either host name or id on which the usergroup is to be created must be provided. Description is also allowed. All other fields are ignored. If both id and name are specified, then id takes precedence.
Outputs
Deletes a usergroup on the host.Inputs
- host-usergroup-name-or-id => usergroup-name-or-id
Name or id of the usergroup on the host.
Outputs
Terminates a view list iteration and cleans up any saved info.Inputs
- tag => string
Tag from a previous host-usergroup-list-iter-start call.
Outputs
Returns items from a previous call to host-usergroup-list-iter-start.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous host-usergroup-list-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Initiates a query for a list of usergroups on host(s). Usergroups on host(s) that match all filters will be returned. If no input is specified, all the usergroups on all monitored storage systems or vFiler units will be returned.Inputs
- host-usergroup-name-or-id => usergroup-name-or-id, optional
Name or id of a usergroup.
- object-name-or-id => obj-name-or-id, optional
Name or id of an object to list the usergroups for. The allowed object types for this argument are: - Resource Group
- Host (allowed host types are storage system and vFiler unit)
- verbose => boolean, optional
Default is false. If set to true, then the roles and allowed capabilities are placed into the host-usergroup-info element.
Outputs
- records => integer
Number indicating how many items are available for future retrieval with host-usergroup-list-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to host-usergroup-list-iter-next or host-usergroup-list-iter-end.
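Following the NaServer pattern shown in the introduction, the three host-usergroup iterator calls are typically chained as below. This is a minimal sketch: the server address, credentials, and batch size are placeholders, and the layout of the individual usergroup records is not reproduced in this section, so record processing is left as a comment.

```perl
use strict;
use warnings;
use NaServer;    # SDK runtime library from <installation_folder>/lib/perl/NetApp

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder server
$s->set_admin_user('admin', 'password');          # placeholder credentials
$s->set_server_type('DFM');

eval {
    # Start the query; verbose=true also fills in roles and capabilities.
    my $start   = $s->host_usergroup_list_iter_start('verbose', 'true');
    my $tag     = $start->{tag};
    my $records = $start->{records};

    # Retrieve the saved items in batches of up to 50.
    while ($records > 0) {
        my $next = $s->host_usergroup_list_iter_next('tag', $tag,
                                                     'maximum', 50);
        last if $next->{records} == 0;
        $records -= $next->{records};
        # ... process the usergroup records in $next here ...
    }

    # Tell the server the temporary store is no longer needed.
    $s->host_usergroup_list_iter_end('tag', $tag);
};
print "Error: $@\n" if $@;
```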
Modifies a usergroup on the host.
Inputs
- host-usergroup => host-usergroup-info
Usergroup name or id must be provided. Host name or id is an optional parameter. If one or more roles (ids or names) and/or a description is provided, the existing roles and/or description are removed and replaced with the new ones. All other fields are ignored. If both id and name are specified, then id takes precedence.
- new-usergroup-name => usergroup-name, optional
New usergroup name for this usergroup. This is used to rename the usergroup specified in host-usergroup. If this value is invalid, host-usergroup-modify fails without changing anything. The value is optional, and if not provided, the group name will be unchanged.
Outputs
Pushes a usergroup to a host. This is applicable for hosts running ONTAP versions 7.0 and above. The operation succeeds when the host to which the usergroup is to be pushed contains roles similar to those of the usergroup. Two roles are similar if they have the same name and same set of capabilities.
Inputs
- host-name-or-id => obj-name-or-id
Name or id of the host to push the usergroup on. Only storage systems and vFiler units are allowed.
- host-usergroup-name-or-id => usergroup-name-or-id
Name or id of the usergroup to be pushed on the host.
Outputs
Authorize a newly registered Host Service. Authorizing a Host Service allows it to start communicating with the DataFabric Manager server; until then, the Host Service is not operational. On successful authorization, a baseline discovery job is initiated on the Host Service to discover the virtual server inventory information managed by the Host Service.
This API requires DFM.HostService.Authorize capability on the Host Service or at global level.
Inputs
Outputs
- job-id => integer, optional
Identifier of the job created to discover the virtual inventory information managed by the Host Service. The job-id will be returned only if run-discovery is true. Range: [1..2^31-1]
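Authorization can be sketched with the Perl bindings as below. The input list of this API is not reproduced in this section, so the host-service-name-or-id argument is an assumption borrowed from the sibling host-service-* APIs, and the server details are placeholders.

```perl
use strict;
use warnings;
use NaServer;    # SDK runtime library

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder server
$s->set_admin_user('admin', 'password');          # placeholder credentials
$s->set_server_type('DFM');

eval {
    # 'host-service-name-or-id' is an assumed input name, by analogy
    # with the other host-service-* APIs in this section.
    my $out = $s->host_service_authorize('host-service-name-or-id',
                                         'hs1.example.com');
    # job-id is optional: returned only when a discovery run is started.
    print "Discovery job: $out->{'job-id'}\n" if defined $out->{'job-id'};
};
print "Error: $@\n" if $@;
```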
Configure options in the Host Service registry. Options include storage system and VCenter server credentials and DataFabric Manager server endpoints for communication.
Inputs
- dfm-server-ip-address => string, optional
IP address of DataFabric Manager Server machine that the Host Service should use for communication, useful in scenarios where DataFabric Manager Server host is configured with multiple IP addresses. If not specified, all the AF_INET interfaces on the machine are enumerated and the first valid address is used.
- host-service-name-or-id => obj-name-or-id
Name or Id of Host Service registered with DataFabric Manager service on which the configuration needs to be performed.
- vcenter-ip-address => string, optional
IP address of the VCenter server host. If not specified, it is assumed that VCenter and Host Service are located on the same host and IP address of the Host Service is used.
- vcenter-password => string, optional
VCenter user password; this is required by the SMVI plugin to communicate with the VCenter Server. This field is applicable only if the Host Service has the SMVI plugin.
- vcenter-username => string, optional
VCenter user name; this is required by the SMVI plugin to communicate with the VCenter Server. This field is applicable only if the Host Service has the SMVI plugin.
Outputs
- job-id => integer
Id of job created to configure host service. Range: [1..2^31-1]
Start a discovery job on the Host Service. This will refresh the virtual inventory information cached in the DataFabric Manager database.
Inputs
- host-service-name-or-id => obj-name-or-id
FQDN or IP address or Object Identifier of the Host Service.
- resource-name-or-id => obj-name-or-id, optional
If specified, the discovery operation is performed in the context of the specified virtual infrastructure object, i.e. only the attributes of the resource and its relationships with other virtual infrastructure objects are fetched from the Host Service and updated. For example, if resource-name-or-id is set to a specific hypervisor, then only the information associated with the hypervisor and the objects under it, such as the virtual machines hosted on it, is retrieved and updated. Refer to the resource-type typedef description for the list of virtual object types.
- resource-type => string, optional
This field is used to specify the type of the virtual infrastructure object specified in resource-name-or-id. This can be used to eliminate ambiguity in case there is a clash in the object name across different virtual infrastructure object types.
- track-progress => boolean, optional
If 'true', then a job-id is returned to track progress of discovery operation, useful when the discovery operation is initiated by a user. This field is set to false when invoked by Host Service when performing discovery in the background. Default value: true
Outputs
- job-id => integer, optional
Id of job created to discover resources, returned only if track-progress is 'true'. Range: [1..2^31-1]
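A discovery run scoped to a single hypervisor could look like the sketch below. The server details, Host Service name, and hypervisor name are placeholders.

```perl
use strict;
use warnings;
use NaServer;    # SDK runtime library

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder server
$s->set_admin_user('admin', 'password');          # placeholder credentials
$s->set_server_type('DFM');

eval {
    my $out = $s->host_service_discover(
        'host-service-name-or-id', 'hs1.example.com',  # placeholder
        'resource-name-or-id',     'esx-host-01',      # hypothetical hypervisor
        'track-progress',          'true');
    # job-id is returned here only because track-progress is true.
    print "Discovery job id: $out->{'job-id'}\n" if defined $out->{'job-id'};
};
print "Error: $@\n" if $@;
```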
Ends iteration to list registered Host Services.
Inputs
- tag => string
Tag from a previous host-service-list-info-iter-start.
Outputs
Get the specified Host Service records after the call to host-service-list-info-iter-start.
Inputs
- maximum => integer
Maximum number of records to be received. Range: [1..2^31-1]
- tag => string
Tag from a previous host-service-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Start iteration to list Host Services registered with DataFabric Manager.
Inputs
- include-deleted => boolean, optional
If present and true, Host Services marked as deleted in the database are also returned. A Host Service is marked as deleted when it is un-registered from DataFabric Manager. Default value : false.
Outputs
- records => integer
Number of records available in the list. Range: [1..2^31-1]
- tag => string
Tag to be used for subsequent iteration calls.
Set host service specific options in the DataFabric Manager database.
Inputs
- admin-port => integer, optional
Port number that administrative services of Host Service are bound to. Range: [1..2^31-1]
- host-service-name-or-id => obj-name-or-id
Name or Identifier of Host Service.
Outputs
Delete a host service package from DataFabric Manager.
Inputs
Outputs
List host service packages in DataFabric Manager.
Inputs
- package-name-or-id => obj-name-or-id, optional
Name or id of the package.
Outputs
Register a new Host Service with DataFabric Manager Server. Newly deployed Host Services need to be registered and authorized in DataFabric Manager Server; this enables management of virtual server infrastructure hosted on Data ONTAP systems using service catalogs. The registration process retrieves the SSL certificate presented by the Host Service, and sets the Host Service status to the "Pending Authorization" state. On confirming the validity of the certificate, the host-service-authorize API must be invoked to make the Host Service operational.
Inputs
- admin-port => integer, optional
Admin port configured for the Host Service. DataFabric Manager tries to establish SSL connection on this port to fetch the Host Service SSL certificate. Default value : 8699
- mgmt-port => integer, optional
Management port of the host service agent. Default value : 8799
Outputs
Ends iteration to list storage systems for a host service agent.
Inputs
- tag => string
Tag from a previous host-service-storage-system-list-info-iter-start.
Outputs
Get the list of storage systems after the call to host-service-storage-system-list-info-iter-start.
Inputs
- maximum => integer
The index of the last record to be received. Range: [1..2^31-1]
- tag => string
Tag from a previous call to host-service-storage-system-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
List storage systems configured in the Host Service registry.
Inputs
- host-service-name-or-id => obj-name-or-id
Name or Id of the Host Service registered with DataFabric Manager service.
Outputs
- records => integer
Number of records available in the list. Range: [1..2^31-1]
- tag => string
Tag to be used for subsequent iteration calls.
Un-register a Host Service from DataFabric Manager Server. All the Virtual Server inventory objects will be marked as deleted once the Host Service is unregistered. The API will fail (with EHS_UNREGISTER_ERROR) in case a Virtual Machine or Datastore (in the VMware case) managed by the host service is a member of a dataset.
Inputs
- force => boolean, optional
If true, the operation will proceed even if there are datasets configured with virtual machines managed by the host service. After un-registration the virtual machines will not show up as dataset members. The host service can be registered back with host-service-register API, the virtual machines would then show up again as dataset members. Default value is false.
- host-service-name-or-id => obj-name-or-id
Name or Object Id or IP address of the Host Service.
Outputs
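Un-registering with the force flag can be sketched as below; the server details and Host Service name are placeholders.

```perl
use strict;
use warnings;
use NaServer;    # SDK runtime library

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder server
$s->set_admin_user('admin', 'password');          # placeholder credentials
$s->set_server_type('DFM');

eval {
    # force=true proceeds even if virtual machines managed by the host
    # service are dataset members; they stop showing up as members until
    # the Host Service is registered again.
    $s->host_service_unregister('host-service-name-or-id', 'hs1.example.com',
                                'force', 'true');
};
print "Error: $@\n" if $@;
```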
Ends iteration of interfaces.
Inputs
- tag => string
Tag from a previous ifc-list-info-iter-start.
Outputs
Get the next set of records in the iteration started by a call to ifc-list-info-iter-start.
Inputs
- tag => string
Tag from a previous ifc-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. A value of 0 indicates the end of records.
Range: [1..2^31-1]
Start iteration of interfaces.
Inputs
- object-management-filter => object-management-interface, optional
Filter the object based on the Data ONTAP interface that provides complete management for the object i.e. ONTAP CLIs, SNMP, ONTAPI etc. If no filter is supplied, all objects will be considered.
- object-name-or-id => obj-name-or-id, optional
Name or Id of one of the following objects. If a storage system name or Id is specified, only the interfaces discovered on the storage system are returned.
The name of an interface should be specified in system-name:interface-name format. Ex: "toaster:e0a".
Outputs
- records => integer
Number of records fetched and stored for retrieval using ifc-list-info-iter-next.
Range: [1..2^31-1]
- tag => string
Tag to be used for subsequent calls.
Add LDAP server to DFM.
Inputs
Outputs
Delete LDAP server from DFM.
Inputs
- ldap-server => ldap-server
IP address and port details of an LDAP server in DFM to be deleted.
Outputs
Returns list of LDAP servers known to DFM.
Inputs
Outputs
API to destroy a LUN.
Inputs
Outputs
- job-id => integer
Identifier of job started for the destroy request.
The lun-list-info-iter-* set of APIs are used to retrieve the list of luns. lun-list-info-iter-end is used to tell the DFM station that the temporary store used by DFM to support the lun-list-info-iter-next API for the particular tag is no longer necessary.
Inputs
- tag => string
An internal opaque handle used by the DFM station.
Outputs
For more documentation, please check lun-list-info-iter-start. The lun-list-info-iter-next API is used to iterate over the members of the luns stored in the temporary store created by the lun-list-info-iter-start API.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous lun-list-info-iter-start. It's an opaque handle used by the DFM station to identify the temporary store created by lun-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
The lun-list-info-iter-* set of APIs are used to retrieve the list of luns in DFM. lun-list-info-iter-start returns the union of lun objects specified, intersected with rbac-operation. It loads the list of luns into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the luns in the temporary store. If lun-list-info-iter-start is invoked twice, then two distinct temporary stores are created.
Inputs
- object-management-filter => object-management-interface, optional
Filter the object based on the Data ONTAP interface that provides complete management for the object i.e. ONTAP CLIs, SNMP, ONTAPI etc. If no filter is supplied, all objects will be considered.
- object-name-or-id => obj-name-or-id, optional
Name or identifier of an object to list luns for. The allowed object types for this argument are:
- Resource Group
- Host
- Aggregate
- Volume
- Qtree
- Lun
- GenericAppObject
If object-name-or-id identifies a lun, that single lun will be returned. If object-name-or-id resolves to more than one lun, all of them will be returned. If no object-name-or-id is provided, all luns will be listed.
- rbac-operation => string, optional
Name of an RBAC operation. If specified, only return luns for which authenticated admin has the required capability. A capability is an operation/resource pair. The resource is a lun or a container with one or more luns in it. The possible values that can be specified for operation can be obtained by calling rbac-operation-info-list. If operation is not specified, then it defaults to DFM.Database.Read. For more information about operations, capabilities and user roles, see the RBAC APIs.
Outputs
- records => integer
Number indicating how many items have been saved for future retrieval with lun-list-info-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to lun-list-info-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
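Listing the LUNs of a single storage system, filtered by an RBAC operation, might look like the sketch below. The server and host names are placeholders, and the shape of the individual LUN records is not described in this section, so processing is left as a comment.

```perl
use strict;
use warnings;
use NaServer;    # SDK runtime library

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder server
$s->set_admin_user('admin', 'password');          # placeholder credentials
$s->set_server_type('DFM');

eval {
    my $start = $s->lun_list_info_iter_start(
        'object-name-or-id', 'toaster',              # placeholder host
        'rbac-operation',    'DFM.Database.Read');   # default operation

    if ($start->{records} > 0) {
        my $next = $s->lun_list_info_iter_next('tag', $start->{tag},
                                               'maximum', $start->{records});
        # ... process the $next->{records} LUN entries here ...
    }

    # Release the temporary store identified by the tag.
    $s->lun_list_info_iter_end('tag', $start->{tag});
};
print "Error: $@\n" if $@;
```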
API to resize a LUN.
Inputs
- lun-name-or-id => obj-name-or-id
Name or identifier of LUN to be resized.
- new-size => integer
Specify the new size of the LUN. The value is specified in bytes. Range: [1..2^44-1]
Outputs
- job-id => integer
Identifier of job started for the resize request.
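Resizing a LUN returns a job identifier that can be tracked separately. A minimal sketch, with placeholder server details, a placeholder LUN path, and the new size expressed in bytes as the API requires:

```perl
use strict;
use warnings;
use NaServer;    # SDK runtime library

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder server
$s->set_admin_user('admin', 'password');          # placeholder credentials
$s->set_server_type('DFM');

eval {
    my $out = $s->lun_resize(
        'lun-name-or-id', 'toaster:/vol/vol1/lun0',  # placeholder LUN
        'new-size',       21474836480);              # 20 GiB, in bytes
    print "Resize job: $out->{'job-id'}\n";
};
print "Error: $@\n" if $@;
```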
Cancel the migration operation for the specified vFiler unit or dataset. Error conditions:
- EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to initiate the migrate operation. If object-name-or-id refers to a dataset, user should have DFM.Dataset.Write on the dataset. If object-name-or-id refers to a vFiler unit, user should have DFM.Resource.Control on the vFiler unit.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous.
- EOBJECTNOTFOUND - When the specified vFiler unit or dataset is not found.
- EINVALIDMIGRATIONSTATUS - When the migration status of the specified vFiler unit is such that migrate-cancel operation cannot be initiated.
Inputs
- dry-run => boolean, optional
Indicates whether a dry run needs to be done of all the tasks to be performed as part of migrate-cancel operation. Default value is false.
- object-name-or-id => obj-name-or-id
Full name or identifier of the vFiler unit or dataset whose migration operation has to be cancelled. The migration status of the vFiler unit or the vFiler unit attached to the specified dataset should be either "migrating" or "migrate_failed".
Outputs
- dry-run-results => dry-run-result[], optional
Results of a dry run. Each result describes one action the system would take and the predicted effects of that action. Only returned if dry-run is true.
- job-id => integer, optional
Identifier of the job started to cancel the migration operation. Only returned if dry-run is false.
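Because the output switches between dry-run-results and job-id depending on the dry-run flag, a cautious caller can preview the cancellation first. A sketch with placeholder server details and vFiler name:

```perl
use strict;
use warnings;
use NaServer;    # SDK runtime library

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder server
$s->set_admin_user('admin', 'password');          # placeholder credentials
$s->set_server_type('DFM');

eval {
    # Preview: dry-run-results describes the actions the system would
    # take; no job is started while dry-run is true.
    my $preview = $s->migrate_cancel('object-name-or-id', 'vfiler1',
                                     'dry-run', 'true');

    # Perform the actual cancellation and track the returned job.
    my $out = $s->migrate_cancel('object-name-or-id', 'vfiler1');
    print "Cancel job: $out->{'job-id'}\n" if defined $out->{'job-id'};
};
print "Error: $@\n" if $@;
```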
Change the state of a migration operation initiated for a vFiler unit or dataset. Only the following state changes are allowed through this API:
- Change from "migrated" to "not_started": This is used to retain stale storage on the source hosting storage system and migrate a vFiler unit again.
- Change from "migrated_with_errors" to "migrated": This is used to change the state after going through the errors that occurred after cutover and correcting them as necessary.
- Change from "rolledback_with_errors" to "rolledback": This is used to change the state after going through the errors that occurred after cutover and correcting them as necessary. Error conditions:
- EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to initiate the migrate operation. If object-name-or-id refers to a dataset, user should have DFM.Dataset.Write on the dataset. If object-name-or-id refers to a vFiler unit, user should have DFM.Resource.Control on the vFiler unit.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous.
- EOBJECTNOTFOUND - When the specified vFiler unit or dataset is not found.
- EINVALIDMIGRATIONSTATUS - When the migration status change is not allowed.
- EINVALIDINPUT - Invalid input specified.
Inputs
- object-name-or-id => obj-name-or-id
Full name or identifier of the vFiler unit or dataset whose migration status has to be changed.
Outputs
Delete the stale storage associated with a migration operation from the source storage system. This will destroy all the volumes of the source vFiler unit after successful migration of the vFiler unit. Error conditions:
- EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to initiate the migrate operation. If object-name-or-id refers to a dataset, user should have DFM.Dataset.Write on the dataset. If object-name-or-id refers to a vFiler unit, user should have DFM.Resource.Control on the vFiler unit.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous.
- EOBJECTNOTFOUND - When the specified vFiler unit or dataset is not found.
- EINVALIDMIGRATIONSTATUS - When the migration status of the specified vFiler unit is such that migrate-cleanup operation cannot be initiated.
Inputs
- dry-run => boolean, optional
If true, shows a dry-run result of the volumes that will be destroyed on the source storage system as part of this cleanup operation. Default is false.
- object-name-or-id => obj-name-or-id
Full name or identifier of the vFiler unit or the dataset whose stale storage has to be cleaned up. The migration status of the vFiler unit or the vFiler unit attached to the dataset should be "migrated".
Outputs
- dry-run-results => dry-run-result[], optional
Results of a dry run. Each result describes one action the system would take and the predicted effects of that action. Only returned if dry-run is true.
- job-id => integer, optional
Identifier of the job started to cleanup storage after the migration operation. Returned only when dry-run is false.
Complete the migration operation by doing a cutover from the source vFiler unit to the destination vFiler unit. As part of the cutover operation the following will be done: A script, if specified, will be run in pre mode. The actual cutover will be carried out so that the source vFiler unit is destroyed and data will be served from the destination vFiler unit. The script, if specified, will be run in post mode after successful cutover. For all volumes of the vFiler unit, the protection relationships will be migrated to the new destination. All the backup versions will be modified so that they point appropriately to the newly created destination volumes. The migration status is changed to 'migrated' after a successful completion of migration. If migrate-complete fails to cut over to the destination storage, then the migration status is changed to 'migrate_failed'. If the cutover to the destination storage succeeds during completion, but some of the subsequent steps like migrating the protection relationships or copying the history data fail, then the status is changed to 'migrated_with_errors'. Error conditions:
- EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to initiate the migrate operation. If object-name-or-id refers to a dataset, user should have DFM.Dataset.Write on the dataset. If object-name-or-id refers to a vFiler unit, user should have DFM.Resource.Control on the vFiler unit.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous.
- EOBJECTNOTFOUND - When the specified vFiler unit or dataset is not found.
- EINVALIDMIGRATIONSTATUS - When the specified vFiler unit is not in "migrating" status.
- EONLINENOTPOSSIBLE - When online migration cannot be performed, but offline migration is still possible.
Inputs
- dry-run => boolean, optional
If true, shows a dry-run result of the migrate-complete operation. Default is false.
- max-cutover-time => integer, optional
The value is specified in seconds and defines the time by which the non-disruptive migration should complete. Default value is 120 seconds. Range: [120..1800].
- migrate-offline => boolean, optional
This flag is considered only when migration is started as online migration. When true, offline migration is performed. If false or not specified, online migration is performed. This flag is ignored if the migration is not online. Default value is false.
- object-name-or-id => obj-name-or-id
Full name or identifier of the vFiler unit or the dataset for which the migration operation has to be completed. For a migrate-complete operation, the source vFiler unit should be given as input. The migration status of the vFiler unit or the vFiler unit attached to the dataset should be "migrating".
- persistent-migrated-routes => boolean, optional
If this option is set, then the migrated routes from the source will be made persistent on the destination storage system. This input is ignored, if the route-migration-mode is 'none'. Default value is true.
- run-dedupe-scan => boolean, optional
Indicates whether a full deduplication scan has to be run on the deduplication enabled volumes of the migrating vFiler unit on destination storage system after migration. This input is valid only for volumes in the vFiler unit which have deduplication turned on and ignored for other volumes. Default is true.
Outputs
- dry-run-results => dry-run-result[], optional
Results of a dry run. Each result describes one action the system would take and the predicted effects of that action. Only returned if dry-run is true.
- job-id => integer, optional
Identifier of the job started to carry out the complete operation. Returned only when dry-run flag is false.
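A cutover call with a longer cutover window might look like the sketch below. The server details and vFiler name are placeholders; max-cutover-time must fall within [120..1800] seconds.

```perl
use strict;
use warnings;
use NaServer;    # SDK runtime library

my $s = NaServer->new('dfm.example.com', 1, 0);   # placeholder server
$s->set_admin_user('admin', 'password');          # placeholder credentials
$s->set_server_type('DFM');

eval {
    my $out = $s->migrate_complete(
        'object-name-or-id', 'vfiler1',   # placeholder vFiler unit
        'max-cutover-time',  300);        # seconds, within [120..1800]
    print "Cutover job: $out->{'job-id'}\n" if defined $out->{'job-id'};
};
print "Error: $@\n" if $@;
```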
Fix the migration status of a dataset. This API should be called if a migration job was aborted abnormally, due to server shutdown, machine reboot, etc. It fixes the database entries and migration status of the dataset. It can be called only if the last migration job run on the dataset terminated abnormally. Error conditions:
- EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to initiate the migrate operation. If object-name-or-id refers to a dataset, user should have DFM.Dataset.Write on the dataset. If object-name-or-id refers to a vFiler unit, user should have DFM.Resource.Control on the vFiler unit.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous.
- EOBJECTNOTFOUND - When the specified vFiler unit or dataset is not found.
- EINVALIDINPUT - Invalid input specified.
Inputs
- dataset-name-or-id => obj-name-or-id, optional
Full name or identifier of the dataset for which the migration job was terminated abnormally. This field is deprecated; the object-name-or-id field should be used instead. This field will be ignored if object-name-or-id is specified.
- object-name-or-id => obj-name-or-id, optional
Name or Id of the dataset or vFiler unit for which the migration job was terminated abnormally.
Outputs
- job-id => integer
Identifier of the job started to fix the previously crashed/stopped vFiler unit migration.
Roll back the previously migrated vFiler unit from the destination vFiler unit to its original source vFiler unit. As part of the rollback operation the following will be done: A script, if specified, will be run in pre mode. Rollback will be carried out so that the destination vFiler unit is destroyed and data will be served from the source vFiler unit. The script, if specified, will be run in post mode after successful cutover. For all volumes of the vFiler unit, the protection relationships will be migrated back to the source. All the backup versions will be modified so that they point appropriately to the newly created source volumes. The migration status is changed to 'rolled_back' after a successful rollback. If migrate-rollback fails to cut over to the source storage, then the migration status will be changed to 'migrated'. If the cutover to the source storage succeeds during rollback, but some of the subsequent steps like migrating the protection relationships or copying the history data fail, then the status is changed to 'rolled_back_with_errors'. Error conditions:
- EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to initiate the migrate operation. If object-name-or-id refers to a dataset, user should have DFM.Dataset.Write on the dataset. If object-name-or-id refers to a vFiler unit, user should have DFM.Resource.Control on the vFiler unit.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous.
- EOBJECTNOTFOUND - When the specified vFiler unit or dataset is not found.
- EONLINENOTPOSSIBLE - When online migration cannot be performed, but offline migration is still possible.
Inputs
- dry-run => boolean, optional
Indicates whether a dry run needs to be done of all the tasks to be performed as part of migrate-rollback operation. Default value is false.
- max-cutover-time => integer, optional
The value is specified in seconds and defines the time by which the non-disruptive migration should complete. Default value is 120 seconds. Range: [120..1800].
- migrate-offline => boolean, optional
This flag is considered only when cutover was done as online migration. When true, offline migration is performed. If false or not specified, online migration is performed. The flag is ignored if the migration is not online. Default value is false.
- object-name-or-id => obj-name-or-id
Full name or identifier of the vFiler unit or dataset whose migration operation has to be rolled back. The migration status of the vFiler unit or the vFiler unit attached to the specified dataset should be "migrated".
- persistent-migrated-routes => boolean, optional
If this option is set, then the migrated routes from the source will be made persistent on the destination storage system. This input is ignored, if the route-migration-mode is 'none'. Default value is true.
- route-migration-mode => route-migration-mode, optional
Type of routes that needs to be migrated as part of cutover.
- run-dedupe-scan => boolean, optional
Indicates whether a full deduplication scan has to be run on the deduplication enabled volumes of the migrating vFiler unit on source storage system after migration. This input is valid only for volumes in the vFiler unit which have deduplication turned on and ignored for other volumes. Default is true.
- script-path => string, optional
Path to a script that will be run in pre and post mode just before and after rollback operation.
Outputs
- dry-run-results => dry-run-result[], optional
Results of a dry run. Each result describes one action the system would take and the predicted effects of that action. Only returned if dry-run is true.
- job-id => integer, optional
Identifier of the job started to rollback the migration operation. Returned only when dry-run is false. Range [1..2^31-1].
Start the migration operation for a dataset or a vFiler unit. This initiates a baseline transfer for all the volumes in the source vFiler unit and changes the migration status to 'migrating'. When a dataset is given as input, the following conditions should hold:
- A vFiler unit should be attached to the primary node of the dataset.
- All the volumes of the vFiler unit, with the exception of the root storage, should belong to this dataset.
- For offline migration, the migrating vFiler unit may contain a qtree as its root storage.
- For online migration, the migrating vFiler unit should contain a volume as its root storage.
If any of these conditions are not met, then EMIGRATENOTSUPPORTED is returned. If perform-cutover is set to true, then migrate-complete will be done after migrate-start. Error conditions:
- EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to initiate the migrate operation. If object-name-or-id refers to a dataset, user should have DFM.Dataset.Write on the dataset. If object-name-or-id refers to a vFiler unit, user should have DFM.Resource.Control on the vFiler unit. If resource-name-or-id refers to a resource pool, user should have DFM.ResourcePool.Provision on it. If resource-name-or-id refers to a storage system, user should have DFM.Resource.Control on it. If provisioning-policy-name-or-id is specified, user should have DFM.Policy.Read on it.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous.
- EOBJECTNOTFOUND - When the specified vFiler unit, dataset, destination storage system or provisioning policy is not found.
- EINVALIDMEMBERTYPE - When the specified filer is in c-mode.
- EINVALIDINPUT - When invalid inputs are provided for IP Address or Netmask.
- EINVALIDMIGRATIONSTATUS - When the specified vFiler unit is already in "migrating" status or in a state where migration cannot be started.
- EMIGRATENOTSUPPORTED - When the dataset or source vFiler unit cannot be migrated due to various reasons like invalid vFiler status, ONTAP version.
- EMIGRATEDESTINATIONSELECTIONFAILED - When a suitable storage system is not found as a destination for migration.
Inputs
- bandwidth-throttle => integer, optional
Specify the bandwidth throttle (in kbps) to be used for the baseline transfer and subsequent SnapMirror updates. This element is valid only for online migration. Default is 0, which means unlimited bandwidth for the transfers. Range: [1..2^31-1]
- dry-run => boolean, optional
Indicates whether a dry run needs to be done of all the tasks to be performed as part of migrate-start operation. Default value is false.
- ip-bindings => ip-binding-info[], optional
IP Address to interface binding information for the destination vFiler unit. If this element is not present, then the interface configured as the "defaultVFilerInterface" for the selected destination storage system is used to bind IP Addresses of the source vFiler unit at the destination.
- object-name-or-id => obj-name-or-id
Full name or identifier of the vFiler unit or the dataset which has to be migrated. If the input element is a dataset, then a vFiler unit should be attached to the dataset and all the volumes of the vFiler unit should be members of the primary node of the dataset with the exception of the root storage. The vFiler unit will be migrated to a new storage system.
- online-migration => boolean, optional
Indicates that cutover has to be performed non-disruptively. This input is considered only when a vFiler unit is migrated and ignored when object-name-or-id refers to a dataset. For datasets, the migration will be done in an online or offline mode depending on the 'online-migration' attribute set on the dataset. Default value is false.
- perform-cutover => boolean, optional
If this option is set to true, then migrate-complete will be done after migrate-start. If cutover fails for some reason, then the migrate status will be set to 'Migrating' and the user can issue the migrate-complete API to perform the cutover. Default value is false.
- persistent-migrated-routes => boolean, optional
If this option is set, then the migrated routes from the source will be made persistent on the destination storage system. This input is ignored, if the route-migration-mode is 'none'. This option is valid if perform-cutover is set to true. Default value is true.
- provisioning-policy-name-or-id => obj-name-or-id, optional
Name or identifier of the provisioning policy to be used to provision destination volumes during migration. This input is valid only when object-name-or-id refers to a dataset. Default behavior is to use the current provisioning policy associated with the dataset or if not associated, to use the same volume configuration as that of the source vFiler unit.
- resource-name-or-id => obj-name-or-id, optional
Name or identifier of the destination storage system or resource pool to which the vFiler unit is to be migrated. If object-name-or-id refers to a dataset, this storage system should be a member of a resource pool on which the user has the required capabilities. If object-name-or-id refers to a vFiler unit, this can refer to any destination storage system on which the user has the required capabilities. The destination storage system should be running a Data ONTAP version equal to or later than that of the source storage system.
- route-migration-mode => route-migration-mode, optional
Type of routes that need to be migrated as part of cutover. This option is valid if perform-cutover is set to true.
- run-dedupe-scan => boolean, optional
Indicates whether a full deduplication scan has to be run on the deduplication-enabled volumes of the migrating vFiler unit on the destination storage system after migration. This input is valid only when perform-cutover is set to true, and applies only to volumes in the vFiler unit that have deduplication turned on; it is ignored for other volumes. Default is true for IC.X.
- script-path => string, optional
Path to a script that will be run in pre and post mode, just before and after the cutover operation. This is applicable only when perform-cutover is set to true.
- volume-aggregate-map => volume-aggregate-pair[], optional
A list of (volume, aggregate) pairs. The volume is the vFiler volume being migrated. The aggregate is the aggregate on the destination storage system where this volume should be migrated to. This input is valid only when the object-name-or-id is a vFiler name or id. If both volume-aggregate-map and resource-name-or-id inputs are specified, an error will be reported.
Outputs
- dry-run-results => dry-run-result[], optional
Results of a dry run. Each result describes one action the system would take and the predicted effects of that action. Only returned if dry-run is true.
- job-id => integer, optional
The identifier of the job started to carry out the migrate-start operation. This field is returned only when dry-run is false.
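As a sketch of how the inputs above map onto a binding call, the generic NaServer invoke() method can be used with the migrate-start input names. The $s server context is set up as in the earlier dfm_about example; the flatten_args helper below (and the 'vfiler1' name) are our own illustration, not part of the SDK:

```perl
use strict;
use warnings;

# Flatten a hash of ZAPI input names into the (key, value, key, value, ...)
# list expected by NaServer::invoke(). Dies if a required input is missing.
# This helper is illustrative only; it is not part of the NetApp SDK.
sub flatten_args {
    my ($required, $args) = @_;
    die "missing required input '$required'\n"
        unless defined $args->{$required};
    return map { ($_ => $args->{$_}) } sort keys %$args;
}

# Dry run of migrate-start for a hypothetical vFiler unit (assumes $s is
# an NaServer context configured for DFM, as in the head example):
# my $out = $s->invoke('migrate-start',
#     flatten_args('object-name-or-id', {
#         'object-name-or-id' => 'vfiler1',   # hypothetical vFiler name
#         'dry-run'           => 'true',
#     }));
# die $out->results_reason() . "\n" if $out->results_status() eq 'failed';
```

With dry-run set to true the API returns dry-run-results instead of starting a job, which makes it a safe first call when scripting a migration.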
Update all the SnapMirror relationships of a vFiler unit for which a migration operation has been initiated. Error conditions: - EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to initiate the migrate operation. If object-name-or-id refers to a dataset, user should have DFM.Dataset.Write on the dataset. If object-name-or-id refers to a vFiler unit, user should have DFM.Resource.Control on the dataset.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous.
- EOBJECTNOTFOUND - When the specified vFiler unit or dataset is not found.
- EINVALIDMIGRATIONSTATUS - When the migration status of the vFiler unit is such that migrate-update cannot be called.
Inputs
- object-name-or-id => obj-name-or-id
Full name or identifier of the dataset or vFiler unit whose migration status is "migrating" and whose SnapMirror relationships are to be updated.
Outputs
- job-id => integer
Identifier of the job started to update the SnapMirror relationships for a vFiler unit migration.
Migrate one or more volumes from one aggregate to another on the same or a different storage system. Currently this API works only for secondary volumes, i.e., destinations of Volume SnapMirror or Qtree SnapMirror relationships, or SnapVault secondaries. In addition, the following rules must be satisfied to migrate volumes using this API: - The volume(s) should not have child clones (FlexClones).
- The volume(s) should not have NFS exports or CIFS shares, or contain LUNs mapped to storage clients.
- The volume(s) should be part of a dataset (i.e., either imported into datasets or provisioned by Protection Manager), so that incoming and outgoing data protection relationships to and from the volumes are managed by Protection Manager.
If the destination aggregate is not specified, the system will automatically select an aggregate based on the type of the volume, space requirements, and the provisioning and protection policy configuration associated with the dataset.
Inputs
- dry-run => boolean, optional
Indicates whether a dry run needs to be done of all the tasks to be executed as part of migrating volumes. Default value is false.
Outputs
Get the list of interfaces on a storage system.
Inputs
- timeout => integer, optional
Number of seconds to wait for a response before giving up. If omitted, the default value is 5 seconds.
Range: [0..2^31-1]
Outputs
Adds a network to DFM for discovery.
Inputs
- prefix-length => prefix-length
The routing prefix length of the network, used to derive the network's subnet mask.
Outputs
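The prefix-length input above determines the subnet mask used for discovery. A minimal sketch of that derivation (our own helper, not an SDK function):

```perl
use strict;
use warnings;

# Derive a dotted-quad subnet mask from a routing prefix length, as
# described for the prefix-length input. Illustrative helper only;
# not part of the NetApp SDK.
sub prefix_to_netmask {
    my ($prefix) = @_;
    die "prefix length must be 0..32\n" if $prefix < 0 || $prefix > 32;
    # Build a 32-bit mask with $prefix leading one-bits.
    my $mask = $prefix == 0 ? 0 : (0xFFFFFFFF << (32 - $prefix)) & 0xFFFFFFFF;
    return join '.', unpack 'C4', pack 'N', $mask;
}

# prefix_to_netmask(24) yields "255.255.255.0"
```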
Delete a network from DFM so that discovery is disabled for that network. Either network-id or network-address must be provided. If both are specified, an error (EINVALIDINPUT) will be thrown.
Inputs
- network-address => network-address, optional
IP address of network.
Outputs
Returns the list of networks added to DFM for discovery.
Inputs
- network-id => network-id, optional
Unique identifier representing a network in DFM. If no input is provided, then information about all the networks added to DFM for discovery will be returned.
Outputs
Modify the prefix length of a network in DFM, so that host discovery in the network is based on the new subnet mask derived from the new prefix length.
Inputs
- network-id => network-id
Unique identifier of the network in DFM.
- prefix-length => prefix-length
New routing prefix length of network.
Outputs
Terminate a view list iteration and clean up any info saved by a previous call to perf-assoc-view-list-iter-start.
Inputs
- tag => string
Tag from a previous perf-assoc-view-list-iter-start.
Outputs
Returns objects from a previous call to perf-assoc-view-list-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous perf-assoc-view-list-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Initiates a query for a list of performance views.
Inputs
- assoc-obj-type => perf-assoc-obj-type, optional
If specified, only views with a matching associated object type will be returned. This is ignored if object-name-or-id is specified. Otherwise, all performance views will be returned (subject to the other optional parameters).
- include-empty-views => boolean, optional
If false, the following will be removed from the output views: - lines without any data source
- charts without any lines
- views without any charts
By default, this is false.
- object-name-or-id => obj-name-or-id, optional
Name or identifier of an object to list views for. If not specified, all views will be returned (subject to the other optional parameters). The allowed object types for this argument are - Resource Group
- Dataset
- Resource Pool
- Host
- vFiler
- Aggregate
- Volume
- Qtree
- Lun
- verbose => boolean, optional
If this is set to false, only a few fields for each view are returned instead of the entire metadata. Default is true. The following fields will be returned for each view - view-name
- view-type
- assoc-obj-type
- view-type => view-type, optional
Indicates the type of views to be returned. If not specified, all types of views will be returned (subject to the other optional parameters).
Outputs
- records => integer
Number indicating how many items are available for future retrieval via calls to perf-view-list-iter-next
- tag => string
Tag to be used in subsequent calls to perf-view-list-iter-next or perf-view-list-iter-end
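The iter-start/iter-next/iter-end triplet above follows the same pagination pattern as the other *-iter-* APIs in this document (cf. aggregate-list-info-iter-* in the introduction). A sketch of the client-side loop, with the three calls abstracted as code references so the pattern stands out; in real use each coderef would wrap an NaServer invocation of the corresponding API:

```perl
use strict;
use warnings;

# Generic drain loop for the iter-start / iter-next / iter-end pattern.
# $start returns { records => N, tag => $tag }; $next->($tag, $max)
# returns { records => n, items => [...] }; $end->($tag) releases the
# server-side saved state. Illustrative only; not part of the SDK.
sub drain_iterator {
    my ($start, $next, $end, $max) = @_;
    my $st  = $start->();
    my $tag = $st->{tag};
    my @items;
    while (1) {
        my $batch = $next->($tag, $max);
        last unless $batch->{records};      # no more records to fetch
        push @items, @{ $batch->{items} };
    }
    $end->($tag);                           # always release the tag
    return \@items;
}
```

Calling iter-end even on error paths matters: it frees the temporary store the server keeps for the tag.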
Ends an iteration started by perf-client-stats-list-info-iter-start.
Inputs
- tag => string
The tag from a previous call to perf-client-stats-list-info-iter-start
Outputs
Returns the per-client statistics loaded in a previous call to perf-client-stats-list-info-iter-start.
Inputs
- maximum => integer, optional
The number of records to retrieve in this iteration. If not specified, a default value of 20 is assumed. Range: [ 1 .. 2^31 - 1 ]
- tag => string
The tag from a previous call to perf-client-stats-list-info-iter-start
Outputs
- records => integer
The number of records returned. Range: [ 0 .. 2^31 - 1 ]
Iterates over the historical data of stored client statistics for storage systems. If no input filters are specified, all the available collections of statistics will be returned.
Inputs
- end-time => timestamp, optional
If specified, only client statistics retrieved at or before this time will be returned. The time is specified in seconds from Epoch.
- last-only => boolean, optional
A convenience input to fetch only the last recorded collection for all hosts, if object-name-or-id is not specified, or a specific host if object-name-or-id is specified. If specified, start-time and end-time will be ignored, and this cannot be specified in conjunction with stat-id.
- object-name-or-id => obj-name-or-id, optional
Specifies the name or id of the object for which historical client statistics are to be retrieved. If not specified, the statistics will be returned for all available objects. The object must be a storage system.
- start-time => timestamp, optional
If specified, only client statistics retrieved at or after this time will be returned. The time is specified in seconds from Epoch.
- stat-id => stat-id, optional
If specified, only the collection of statistics with this id will be returned and all other inputs will be ignored.
Outputs
- records => integer
The number of records retrieved. Range: [ 0 .. 2^31 - 1 ]
- tag => string
The tag to be used in subsequent calls to perf-client-stats-list-info-iter-next or perf-client-stats-list-info-iter-end.
Remove collected client stats from the database. If no input is specified, all collections for hosts on which the user can perform the DFM.Database.Write operation will be purged.
Inputs
- end-time => timestamp, optional
If specified, all statistics collections collected before this time will be removed. The timestamp must be specified in seconds from Epoch.
- object-name-or-id => obj-name-or-id, optional
Specifies the name or id, in the DFM database, of the object for which statistics collections should be purged. The object must be a storage system.
- start-time => timestamp, optional
If specified, all statistics collections collected after this time will be removed. The timestamp must be specified in seconds from Epoch.
- stat-id => stat-id, optional
Specifies the id, in the DFM database, of the collection to be removed. If specified, all other inputs will be ignored. The user must have the capability to perform the DFM.Database.Write operation on the storage system for which the collection with this id was collected.
Outputs
Collect per-client NFS and CIFS operation statistics for a storage system. The collection of such statistics will be enabled on all vFiler units on the storage system for a short period of time, and the collected values will then be summarized and returned for the storage system as a whole. This ZAPI is synchronous and will not return an error even if there are errors collecting per-client statistics from some or all vFiler units on the storage system.
Inputs
- collection-period => integer, optional
Specifies, in seconds, the period of time for which the collection of statistics should be enabled on the filer. If not specified, a default value of 15 will be assumed. The API will be blocked for at least this period of time. In case of heavily loaded storage systems, the duration of blocking may be significantly higher. Range: [15 .. 60]
- object-name-or-id => obj-name-or-id
The name or id of the object on which the collection of per-client statistics is to be enabled. The object must be a storage system.
Outputs
- stat-id => stat-id
Returns the id, in the DFM database, of the collection of statistics stored.
Propagates the source host's current data collection settings to destination hosts. When counter configuration is copied, a failure for one host does not stop the operation; it proceeds with the next available host in the destination list. Privilege required is DFM.Database.Write.
Inputs
- preview => boolean, optional
If this flag is true, a dry run is performed before actually committing the changes. It returns a list of counters and host details in case of any discrepancies between the source and destination hosts. Default is true.
Outputs
- copy-result => destination-host-info[]
Result of copy operation. This element will be included irrespective of preview flag settings. On successful copy operation 'copy-result' will include only 'host-name, host-id' elements. In case of failure the output includes additional element 'error-reason'.
Creates a counter group. Global DFM.PerfView.Write is required to create a historical counter group. To create a real-time counter group, Global DFM.PerfView.RealTimeRead is required.
Inputs
Outputs
Destroys a counter group. Destroying a historical counter group requires Global DFM.PerfView.Delete; destroying a real-time counter group requires Global DFM.PerfView.RealTimeRead.
Inputs
- appliance-name-or-id => string, optional
The name (or unique ID) of the source storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next. The API will accept a vfiler name or id from DFM 3.3 onwards.
- group-name-or-id => string, optional
The name (or unique ID) of the source group. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs. This field is ignored if appliance-name-or-id is specified. This field defaults to global group if appliance-name-or-id is not specified.
Outputs
Retrieve a set of data for a set of data sources from a specific counter group. The data is extracted for the given time interval bounded by start-time and end-time. This API is suitable for extracting the data for a single line in a chart (graph). Privilege required is DFM.Database.Read. For viewing real-time data, Global DFM.PerfView.RealTimeRead is also required.
Inputs
- appliance-name-or-id => string, optional
The name (or unique ID) of the source storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next. The API will accept a vfiler name or id from DFM 3.3 onwards.
- counter-group-name => string
Counter group name from which we want to fetch data.
- end-time => integer, optional
Time stamp marking the end of requested data. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. If unspecified, server should end with the latest available data.
- group-name-or-id => string, optional
The name (or unique ID) of the source group. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs. This field is ignored if appliance-name-or-id is specified. This field defaults to global group if appliance-name-or-id is not specified.
- sample-rate => integer, optional
The desired interval between samples, in seconds. If the available samples have higher resolution than the one specified, multiple samples are consolidated into one sample to achieve the desired resolution, using the specified time-consolidation-method. If unspecified, the data returned will be at the sample-rate specified upon creation of the counter group.
- start-time => integer, optional
Time stamp marking the beginning of requested data. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. If unspecified, server should begin with the earliest available data.
- time-consolidation-method => string, optional
A function to apply across the data to achieve the desired time resolution, if the requested sample-rate is different from the actual sample-rate of the data. The values can be average, min, max, or last. The default is last.
Outputs
- counter-data => string
The retrieved data - possibly consolidated, but just a single array. The format is a series of comma-separated timestamp:value pairs. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. The values may have optional decimal extensions, for example 1064439599:127,1064439600:98.6,1064439601:12
- unit => string, optional
Unit of counter-data. This element will not be present if the unit is not known. Some possible values are per_sec, percent, b_per_sec, kb_per_sec, msecs, usecs. Maximum Length: 32 characters.
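The counter-data format described above (comma-separated "timestamp:value" pairs) can be unpacked with a few lines of Perl. A sketch, using our own helper rather than anything in the SDK:

```perl
use strict;
use warnings;

# Parse a counter-data string such as the example in the description,
# "1064439599:127,1064439600:98.6,1064439601:12", into a list of
# [timestamp, value] pairs. Illustrative helper; not part of the SDK.
sub parse_counter_data {
    my ($data) = @_;
    my @samples;
    for my $pair (split /,/, $data) {
        my ($ts, $val) = split /:/, $pair, 2;
        push @samples, [ $ts + 0, $val + 0 ];   # numify both halves
    }
    return \@samples;
}
```

The numeric coercion keeps the optional decimal extensions (e.g. 98.6) intact while turning the epoch timestamps into plain integers.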
Retrieve a list of top-n data sources for a counter. Privilege required is read.
Inputs
- appliance-name-or-id => string, optional
The name (or unique ID) of the source storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next. The API will accept a vfiler name or id from DFM 3.3 onwards.
- counter-group-name => string
Counter group name from which we want to fetch data.
- group-name-or-id => string, optional
The name (or unique ID) of the source group. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs. This field is ignored if appliance-name-or-id is specified. This field defaults to global group if appliance-name-or-id is not specified.
Outputs
Retrieve one or more counter groups. Privilege required is read.
Inputs
- appliance-name-or-id => string, optional
The name (or unique ID) of the source storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next. The API will accept a vfiler name or id from DFM 3.3 onwards. Either appliance-name-or-id or object-name-or-id should be specified, but not both. object-name-or-id is preferred as usage of appliance-name-or-id, group-name-or-id is deprecated.
- counter-info-only => boolean, optional
If true, the output element perf-counter-group includes only the list of enabled counters of the host, with object type derived from the counter group name. In this case data-sources will not be returned. Default is false.
- group-name-or-id => string, optional
The name (or unique ID) of the source group. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs. This field is ignored if appliance-name-or-id or object-name-or-id is specified. This field defaults to global group if appliance-name-or-id or object-name-or-id is not specified. object-name-or-id is preferred as usage of group-name-or-id is deprecated.
- include-calculated-stats => boolean, optional
This option is valid only if either 'counter-info-only' or 'disabled-counter-info-only' is set to TRUE. If this flag is set to TRUE, the counter information will include calculated stats counters otherwise by default it will exclude those counters.
- object-name-or-id => obj-name-or-id, optional
If specified, output contains the counter-info element with list of enabled counters belonging to supplied counter-group-name. If counter-group-name is not set then output includes counters information for all counter groups of the hosting filer. Either appliance-name-or-id or object-name-or-id should be specified, but not both. object-name-or-id is preferred as usage of appliance-name-or-id, group-name-or-id is deprecated.
Outputs
Terminate a counter group list iteration and clean up any saved info. Privilege required is read.
Inputs
- tag => string
The tag from a previous perf-counter-group-list-iter- call.
Outputs
Returns items from a previous call to perf-counter-group-list-iter-start. Privilege required is read.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
The tag from a previous perf-counter-group-list-iter- call.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Initiates a query for a list of performance counter group names. Privilege required is read.
Inputs
- appliance-name-or-id => string, optional
The name (or unique ID) of the source storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next. The API will accept a vfiler name or id from DFM 3.3 onwards.
- counter-group-name => string, optional
Counter group name to retrieve. If unspecified, all counter groups will be retrieved.
- custom-only => boolean, optional
If true, only custom groups will be retrieved. Default is false.
- group-name-or-id => string, optional
The name (or unique ID) of the source group. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs. This field is ignored if appliance-name-or-id is specified. This field defaults to global group if appliance-name-or-id is not specified.
- meta-data-only => boolean, optional
If TRUE, do not include any perf-data-sources with each perf-counter-groups.
Outputs
Modify an existing counter group. The user can modify the sample-interval, sample-buffer, and the counter set of an existing counter group at the individual host level. Modifying the counter set of a counter group means the user can selectively enable or disable counters according to data collection requirements. If a counter is disabled, data will not be collected for that counter, which gives the flexibility of controlling load on a storage system. This API enables only the counters listed in the 'data-sources' part of the input parameter 'perf-counter-group'; the rest of the counters in the counter group are automatically disabled. The user can also disable all the counters of a counter group by setting the flag 'is-disable-all' in the input parameter 'perf-counter-group'. Enabling or disabling counters is not allowed for calculated stats counters in default counter groups. Privilege required is DFM.Database.Write.
Inputs
- appliance-name-or-id => string, optional
The name (or unique ID) of the source storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next. The API will accept a vfiler name or id from DFM 3.3 onwards.
- counter-group => perf-counter-group
The new counter group information. Note that the new counter group name may be different from the old counter group name. If different, the new counter group name must not already exist.
- counter-group-name => string
The name of the counter group to modify.
- group-name-or-id => string, optional
The name (or unique ID) of the source group. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs. This field is ignored if appliance-name-or-id is specified. This field defaults to global group if appliance-name-or-id is not specified.
Outputs
Start data collection for one counter group. Privilege required is Global DFM.PerfView.Write for normal views and Global DFM.PerfView.RealTimeRead for real-time views.
Inputs
- appliance-name-or-id => string, optional
The name (or unique ID) of the source storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next. The API will accept a vfiler name or id from DFM 3.3 onwards.
- counter-group-name => string
Counter group name to start data collection for. This must not be a default counter group.
- group-name-or-id => string, optional
The name (or unique ID) of the source group. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs. This field is ignored if appliance-name-or-id is specified. This field defaults to global group if appliance-name-or-id is not specified.
Outputs
Stop data collection for one counter group. Privilege required is Global DFM.PerfView.Write for normal views and Global DFM.PerfView.RealTimeRead for real-time views.
Inputs
- appliance-name-or-id => string, optional
The name (or unique ID) of the source storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next. The API will accept a vfiler name or id from DFM 3.3 onwards.
- counter-group-name => string
Counter group name to stop data collection for. This must not be a default counter group.
- group-name-or-id => string, optional
The name (or unique ID) of the source group. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs. This field is ignored if appliance-name-or-id is specified. This field defaults to global group if appliance-name-or-id is not specified.
Outputs
Troubleshoots a DFM object for performance-related issues and provides recommendations to help resolve them.
Inputs
- duration => integer, optional
The recent time period, in seconds, for which the data is to be analyzed. Either duration or start-time/end-time must be specified, but not both. An error will be returned if neither is supplied.
- end-time => integer, optional
Time stamp marking the end of the diagnosis window. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. Range: [0..(2^32)-1]. Either duration or start-time/end-time must be specified, but not both. An error will be returned if neither is supplied.
- object-name-or-id => obj-name-or-id
Name or identifier of the object that needs troubleshooting.
- start-time => integer, optional
Time stamp marking the beginning of the diagnosis window. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. Range: [0..(2^32)-1]. Either duration or start-time/end-time must be specified, but not both. An error will be returned if neither is supplied.
Outputs
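The duration and start-time/end-time inputs above are mutually exclusive. A small client-side check of that rule before invoking the API might look like this (our own helper, not part of the SDK):

```perl
use strict;
use warnings;

# Validate the mutually exclusive duration vs. start-time/end-time rule
# described for the diagnose inputs. Illustrative helper only.
sub validate_window {
    my (%in) = @_;
    my $has_duration = defined $in{'duration'};
    my $has_range    = defined $in{'start-time'} || defined $in{'end-time'};
    die "specify either duration or start-time/end-time, not both\n"
        if $has_duration && $has_range;
    die "specify duration or start-time/end-time\n"
        unless $has_duration || $has_range;
    return 1;
}
```

Failing fast on the client avoids a round trip that the server would reject anyway.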
Disables DFM performance advisor data collection.
Inputs
- is-async => boolean, optional
Returns immediately without waiting for DFM performance advisor to disable the data collection completely (when set to true). By default this API is synchronous.
- is-persistent => boolean, optional
If true, the disabling of Performance Advisor is saved in the database. If false, the disabling is temporary and the setting is not saved in the database; if the server restarts, data collection begins again.
Outputs
Disables any modification to DFM performance advisor views, counter groups and object instances.
Inputs
Outputs
Enables DFM performance advisor data collection.
Inputs
Outputs
Enables modifications to DFM performance advisor views, counter groups and object instances.
Inputs
Outputs
Retrieve data for all the related objects for the given object and the specified performance counters.
Inputs
- direction => string, optional
This field is used only when duration is specified. It is used to derive the time period for which data is to be retrieved. If the value is 'forward', the starting time is the time stamp of the oldest data, and the ending time is the time stamp of the oldest data plus the duration. If the value is 'backward', the starting time is the time stamp of the most recent data minus the duration, and the ending time is the time stamp of the most recent data. If the value is 'backward-from-now', the starting time is the current time minus the duration, and the ending time is the current time. Default value is 'backward'.
- duration => integer, optional
The duration for which the data is to be returned, in seconds. The time period is calculated based on the direction flag. Duration should not be specified when either start-time or end-time is specified; if it is, an EINVALIDINPUTERROR error will be returned. Range: [0..(2^32)-1]
- end-time => integer, optional
Time stamp marking the end of requested data. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. If unspecified, server should end with the latest available data. Range: [0..(2^32)-1]
- number-samples => integer, optional
The total number of samples to be returned. The sample-rate will be calculated based on the start-time and end-time. If the available samples have higher resolution than the one specified, multiple samples are consolidated into one sample to achieve the desired resolution, using the specified time-consolidation-method. If unspecified, the number of samples returned will be based on the sample-rate specified upon creation of the view. Either sample-rate or number-samples should be specified, but not both. Range: [1..(2^32)-1]
- sample-rate => integer, optional
The desired interval between samples, in seconds. If the available samples have higher resolution than the one specified, multiple samples are consolidated into one sample to achieve the desired resolution, using the specified time-consolidation-method. If unspecified, the data returned will be at the sample-rate specified in the counter group in which the counter is present. Range: [1..(2^32)-1].
- start-time => integer, optional
Time stamp marking the beginning of requested data. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. If unspecified, server should begin with the earliest available data. Range: [0..(2^32)-1]
- time-consolidation-method => string, optional
A function to apply across the data to achieve the desired time resolution, if the requested sample-rate is different from the actual sample-rate of the data. Possible values : - 'average'
- 'min'
- 'max'
- 'last'. The default is 'last'.
- time-filter => time-filter, optional
Time-based filter applied when reading data. This filter will be applied on the range derived from the start-time/end-time/duration/direction fields. This field is used only for metrics calculations; if the metrics field is not specified in the input, this field will be ignored.
Outputs
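The direction rules described above can be expressed compactly. A sketch that derives the (start, end) window from the direction flag, the duration, and the data's oldest/newest timestamps (our own helper; 'backward-from-now' takes the current time from the caller):

```perl
use strict;
use warnings;

# Derive the (start-time, end-time) window for a duration-based query,
# per the direction semantics described for this API. Illustrative only;
# not part of the NetApp SDK.
sub derive_window {
    my ($direction, $duration, $oldest, $newest, $now) = @_;
    return ($oldest, $oldest + $duration)  if $direction eq 'forward';
    return ($newest - $duration, $newest)  if $direction eq 'backward';
    return ($now - $duration, $now)        if $direction eq 'backward-from-now';
    die "unknown direction '$direction'\n";
}
```

All three values are epoch seconds, matching the timestamp convention used throughout these inputs.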
Find out the Operations Manager reports, Performance Advisor custom views, Performance Advisor threshold templates and Performance Advisor thresholds which depend on the specified performance counters.
Inputs
- host-name-or-id => host-name-or-id, optional
The host name or id. If this parameter is specified, custom views attached to object instances will be returned in addition to the custom views attached with object types.
Outputs
Given an array of performance objects and counters, the API will return a list of all counters that are not part of any view.
Inputs
Outputs
Gets the default view for an object.
Inputs
- object-name-or-id => obj-name-or-id, optional
Specifies the object for which the view is to be retrieved. If not present, then Global Group is assumed.
Outputs
- view-name => string
The view-name which is to be displayed by default for the object. Maximum length is 255 characters
Returns the status of DFM Performance Advisor.
Inputs
Outputs
Retrieve information about a performance object's counters. (No iterator-based equivalent exists for this API because the number of object counters is fewer than two hundred.) Privilege required is read.
Inputs
- appliance-name-or-id => string, optional
The name or unique ID for the storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next API. The API will accept a vfiler name or id from DFM 3.3 onwards.
- include-calculated-stats => boolean, optional
If set to false, the output will exclude the calculated stats counters. By default the value is true, i.e., calculated stats are always included.
- object-name => string
Name of the performance object to retrieve counters for. Object names are broad classes of system components and protocols, like NFS, VOLUME, DISK, ...
Outputs
Retrieve information about a performance object's dependent counters. Privilege required is read.Inputs
- appliance-name-or-id => string, optional
The name or unique ID for the storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next API. The API will accept a vfiler name or id from DFM 3.3 onwards.
- object-name => string
Name of the performance object to retrieve dependent counters. Object names are broad classes of system components and protocols, like NFS, VOLUME, DISK, ...
Outputs
Terminate an instance list iteration and clean up any saved info. Privilege required is read.Inputs
- tag => string
The tag from a previous perf-object-instance-list-iter- call.
Outputs
Returns items from a previous call to perf-object-instance-list-iter-start Privilege required is read.Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
The tag from a previous perf-object-instance-list-iter- call.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Initiates a query for a list of performance object instance names. Privilege required is read.Inputs
- appliance-name-or-id => string
Name (or unique ID) of the storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next API. The API will accept a vfiler name or id from DFM 3.3 onwards.
- object-name => string
Name of the performance object to retrieve instances for. Object names are broad classes of system components and protocols, like NFS, VOLUME, DISK, ...
Outputs
- records => integer
Number indicating how many items are available for future retrieval with perf-object-instance-list-iter-next.
- tag => string
Tag to be used in subsequent calls to perf-object-instance-list-iter-next or perf-object-instance-list-iter-end.
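The start/next/end sequence described above can be sketched with the Perl bindings. This is a minimal sketch, assuming an NaServer context $s has already been set up as shown earlier; the storage system name and batch size are illustrative, and the generic invoke method (documented in NaServer.pm) is used.

```perl
# Iterate over performance object instances using the iter-start/next/end pattern.
# Assumes $s is an NaServer context already configured for the DFM server.
my $start = $s->invoke('perf-object-instance-list-iter-start',
                       'appliance-name-or-id', 'filer1',   # illustrative name
                       'object-name',          'volume');
die $start->results_reason() if $start->results_status() eq 'failed';

my $tag       = $start->child_get_string('tag');
my $remaining = $start->child_get_int('records');

while ($remaining > 0) {
    my $next = $s->invoke('perf-object-instance-list-iter-next',
                          'tag', $tag, 'maximum', 50);
    die $next->results_reason() if $next->results_status() eq 'failed';
    my $got = $next->child_get_int('records');
    last if $got == 0;
    # process the returned instance names here
    $remaining -= $got;
}

# Tell the DFM station the temporary store is no longer needed.
$s->invoke('perf-object-instance-list-iter-end', 'tag', $tag);
```

Always call the -end API, even after a partial iteration, so the server can release the temporary store associated with the tag.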
Get a list of performance objects. (No iterator-based equivalent exists for this API because the number of performance objects is fewer than twenty.) Privilege required is read.Inputs
- appliance-name-or-id => string, optional
Name (or unique ID) of the storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next API. The API will accept a vfiler name or id from DFM 3.3 onwards.
Outputs
Sets the default view for an object.Inputs
- object-name-or-id => obj-name-or-id, optional
Specifies the object for which the view is to be set. If not specified, then the Global Group is assumed.
- view-name => string
The view-name which is to be displayed by default for the object. Maximum length is 255.
Outputs
Returns the DFM Performance Advisor status.Inputs
- host-name-or-id => string
DFM name or id of the host
Outputs
Sets threshold values on one or more objects based on a performance counter. Privilege required is DFM.Database.Write. Threshold will be set only if the User has DFM.Database.Write permission over the object specified.Inputs
Outputs
- threshold-id => threshold-id
If the threshold does not exist and is being created for the first time, then this parameter is present in the output. If a threshold already exists, then an error is returned instead of this parameter.
Creates one threshold composed of one or more counters, and optionally applies it to an object. The user must have the capability to perform the DFM.Database.Write operation on the object on which the threshold is applied.Inputs
- object-name-or-id => obj-name-or-id
Specifies the object on which the threshold is to be applied. When applying a threshold to an object, in case of container objects (like groups) or parent objects (like storage systems), if a counter is specified on a child object, the threshold applies to all such children. For example, if aggr:cp_reads is specified as one of the counters in the threshold and there is no counter of an object which is a child of an aggregate (viz. volume or qtree), and the object-name-or-id is that of a storage system, the threshold applies to all present and future aggregates on the storage system.
Outputs
- threshold-id => threshold-id
Specifies the id of the threshold that is created.
Removes threshold set on a counter. The user must have the capability to perform the DFM.Database.Delete operation on the object on which the threshold is directly applied. This ZAPI cannot be used to delete template thresholds, and perf-threshold-template-modify should be used for that purpose.
Inputs
- threshold-id => threshold-id
The unique identifier of a threshold that needs to be deleted.
Outputs
Terminate a counter-thresholds-list iteration and clean up any saved info. Privilege required is DFM.Database.Read. Only thresholds on objects over which the user has DFM.Database.Read permissions will be returned.Inputs
- tag => string
The tag from a previous perf-threshold-list-info-iter-start call.
Outputs
The perf-threshold-list-info-iter-* list of APIs are used to retrieve the list of all counters on which thresholds have been set. It loads the list of counters on which thresholds have been set into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the threshold related counters in the temporary store. Privilege required is DFM.Database.Read. Only thresholds on objects over which the user has DFM.Database.Read permissions will be returned.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
The tag from a previous perf-threshold-list-info-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
The perf-threshold-list-info-iter-* list of APIs are used to retrieve the list of all counters on which thresholds have been set. It loads the list of counters on which thresholds have been set into a temporary store. The API returns a tag that identifies a temporary store so that subsequent APIs can be used to iterate over the threshold related counters in the temporary store. Privilege required is DFM.Database.Read. Only thresholds on objects over which the User has DFM.Database.Read permissions will be returned.Inputs
- query => threshold-info, optional
Attributes of the thresholds that need to be listed. If none of the parameters are specified, then all the thresholds will be listed. If a single threshold-id is specified, then only information relevant to that threshold will be returned and the rest of the parameters will be ignored.
If only an object-name-or-id is specified, then only thresholds set on that object will be returned. Ex. if object-name-or-id identifies a volume, thresholds set on that volume only will be returned. If object-name-or-id resolves to more than one volume, thresholds set on all of them will be returned. If no object-name-or-id is provided, all thresholds will be listed.
Likewise, if only perf-object-counter is specified, only thresholds that belong to this counter-name and object type will be returned. If no counter-name is specified, thresholds set on any counter are returned.
If both object-name-or-id and perf-object-counter are specified, then only thresholds set on the combination of these parameters will be listed.
The rest of the parameters in threshold-info are ignored.
Outputs
- records => integer
Number indicating how many items are available for future retrieval via calls to perf-threshold-list-info-iter-next
- tag => string
Tag to be used in subsequent calls to perf-threshold-list-info-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
Terminate a perf-thresholds-list-info2 iteration and clean up any saved info.Inputs
- tag => string
The tag from a previous perf-threshold-list-info2-iter-start call.
Outputs
The perf-threshold-list-info2-iter-* list of APIs are used to retrieve the list of all objects on which thresholds have been set.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
The tag from a previous perf-threshold-list-info2-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
The perf-threshold-list-info2-iter-* list of APIs are used to retrieve the list of objects on which thresholds have been set. It loads the list of thresholds into a temporary store. The API returns a tag that identifies the temporary store so that subsequent APIs can be used to iterate over the thresholds in it. The user must have the capability to perform the DFM.Database.Read operation on the object on which the threshold is applied, and only thresholds which are applied on such objects will be returned.Inputs
- object-name-or-id => obj-name-or-id, optional
If specified, only thresholds set on this object will be returned. This includes thresholds that apply to this object via inheritance. For example, if there is a threshold on an aggregate counter applied to a filer, and obj-name-or-id identifies an aggregate in that filer, the threshold will be returned.
- threshold-id => threshold-id, optional
If specified, only information relevant to the threshold with this id will be returned. This parameter takes precedence over all others, and it may not be specified in combination with any of the other arguments.
Outputs
- records => integer
Number indicating how many items are available for future retrieval via calls to perf-threshold-list-info2-iter-next
- tag => string
Tag to be used in subsequent calls to perf-threshold-list-info2-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
Allows modification of threshold value and threshold interval that have been set before. Privilege required is DFM.Database.Write.Inputs
- attributes => threshold-info
Attributes of a threshold that needs to be modified. If the threshold-id is specified, then object-name-or-id and perf-object-counter are ignored. If the threshold-id is not specified, then object-name-or-id and perf-object-counter are mandatory. The rest of the parameters are optional. If the threshold-value specified is in a different metric, then the threshold-unit needs to be specified for appropriate conversion.
Outputs
Modify an existing threshold. This ZAPI should be used only for changing the parameters of a threshold, not for changing the objects to which it is applied. The objects-info structure is ignored in this ZAPI. This ZAPI cannot be used to modify template thresholds; perf-threshold-template-modify should be used for that purpose. The user must have the capability to perform the DFM.Database.Write operation on the object on which the threshold is applied.Inputs
- threshold-info2 => threshold-info2
Information on the threshold to modify. The threshold-id parameter must be specified to modify a threshold.
Outputs
Attach one or more objects to a performance template. Either all input objects get attached or none of them get attached. The specified objects will get associated to the applicable thresholds in the template. Objects cannot be attached if the template has no thresholds. Objects already attached to the template are left unchanged. The user must have the capability to perform the DFM.Database.Write operation on the object to which the template is to be attached.Inputs
Outputs
Creates a template for perf thresholds.Inputs
Outputs
Deletes a template.Inputs
- template-name-or-id => string
The name or id of the template that is to be deleted.
Outputs
- template-id => integer
The id of the deleted template. Range: [1..2^32-1]
- template-name => string
The name of the deleted template.
Detaches one or more objects from a performance template. Either all or none of the input objects get detached. The user must have the capability to perform the DFM.Database.Write operation on the object which is to be detached from the template.Inputs
- template-name-or-id => string
Name or id of a performance template
Outputs
Terminate a perf-threshold-template-list iteration and clean up any saved information.Inputs
- tag => string
The tag from a previous perf-threshold-template-list-iter-start call.
Outputs
The perf-threshold-template-list-info-iter-* list of APIs are used to retrieve the list of all objects on which thresholds have been set.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
The tag from a previous perf-threshold-template-list-info-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
The perf-threshold-template-list-info-iter-* list of APIs are used to retrieve the list of all threshold templates that have been created. It loads the templates into a temporary store. The API returns a tag that identifies a temporary store so that subsequent APIs can be used to iterate over the threshold related counters in the temporary store.Inputs
- include-objects => boolean, optional
If true, the output will include objects on which the template(s) apply. Only the direct objects will be returned. Only objects on which the user has the capability to perform the DFM.Database.Read operation will be returned. Default value is false.
Outputs
- records => integer
Number indicating how many items are available for future retrieval via calls to perf-threshold-template-list-info-iter-next.
- tag => string
Tag to be used in subsequent calls to perf-threshold-template-list-info-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
The perf-threshold-template-modify ZAPI is used to modify an existing threshold template. It can be used to add, remove, or modify thresholds in the template, or to modify attributes of the template.Inputs
- is-enabled => boolean, optional
This specifies if the template is enabled or not. If not specified, the enabled/disabled state of the template is not modified.
- member-thresholds => threshold-info2[], optional
Specifies the thresholds that are to be part of this template.
- template-description => string, optional
A new description for the template, if the description is to be modified.
- template-id => integer
Specifies the id of the template that is to be modified. Range: [1..2^32-1]
Outputs
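A modification limited to scalar attributes of the template can be sketched as follows. This is a hedged example, assuming a configured NaServer context $s; the template id and description are illustrative, and member-thresholds (a structured threshold-info2 array) is omitted, so the template's thresholds are left unchanged.

```perl
# Disable an existing threshold template and update its description.
# Assumes $s is a configured NaServer context; the template id is illustrative.
my $out = $s->invoke('perf-threshold-template-modify',
                     'template-id',          42,        # illustrative id
                     'is-enabled',           'false',
                     'template-description', 'Disabled pending review');
die 'template modify failed: ' . $out->results_reason()
    if $out->results_status() eq 'failed';
```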
List all the objects associated with the view.Inputs
- view-name => string
The view for which to list associated objects
Outputs
Create a performance view. A performance view consists of one or more charts, but each view refers to only a single counter group. Global DFM.PerfView.Write RBAC capability is required to create normal views while Global DFM.PerfView.RealTimeRead is required to create real-time view.Inputs
- data-sources => perf-data-source[], optional
Identifiers for the data sources to be retrieved in later queries. The data-sources cannot contain a 'group' data source, i.e., instance-name must be specified for each perf-data-source. The data-sources should not contain a data source that specifies the label-names field, because a counter group always collects data for the whole counter, i.e., for all labels. perf-data-source is not required for default counter groups. The perf-counter-group-modify API ignores this element for default counter groups.
- real-time => boolean, optional
Designates whether the counter group is real-time; in other words, if no user is getting data from the group, then the counter group will no longer collect data. By default FALSE.
Outputs
Destroy a performance view. Global DFM.PerfView.Delete is required for destroying normal views while Global DFM.PerfView.RealTimeRead is required for destroying real-time views.Inputs
- appliance-name-or-id => string, optional
Name (or unique ID) of the storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next API. The API will accept a vfiler name or id from DFM 3.3 onwards.
- group-name-or-id => string, optional
The group that this view is associated with. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs.
- view-name => string
Name of the view to be destroyed.
Outputs
Retrieve data for a single data-source from a specific performance view. The data is extracted for the given time interval bounded by start-time and end-time.Inputs
- direction => string, optional
This field is used only when duration is specified. It is used to derive the time period for which data is to be retrieved. If the value is set to 'forward', then the starting time is the time stamp of the oldest data, and the ending time is the sum of the time stamp of the oldest data and the duration. If the value is set to 'backward', then the starting time is the difference between the time stamp of the most recent data and the duration, and the ending time is the time stamp of the most recent data. If the value is set to 'backward-from-now', then the starting time is the difference between the current time and the duration, and the ending time is the current time. Default value is 'backward'.
- duration => integer, optional
The duration for which the data is to be returned, in seconds. The time period is calculated based on the direction flag. Duration should not be specified when either start-time or end-time is specified; if it is, an EINVALIDINPUTERROR error will be returned. Range: [0..(2^32)-1]
- end-time => integer, optional
Time stamp marking the end of requested data. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. If unspecified, the server should end with the latest available data. Range: [0..(2^32)-1]
- number-samples => integer, optional
The total number of samples to be returned. The sample-rate will be calculated based on the start-time and end-time. If the available samples have higher resolution than the one specified, multiple samples are consolidated into one sample to achieve the desired resolution, using the specified time-consolidation-method. If unspecified, the number of samples returned will be based on the sample-rate specified upon creation of the view. Either sample-rate or number-samples should be specified, but not both. Range: [1..(2^32)-1]
- sample-rate => integer, optional
The desired interval between samples, in seconds. If the available samples have higher resolution than the one specified, multiple samples are consolidated into one sample to achieve the desired resolution, using the specified time-consolidation-method. If unspecified, the data returned will be at the sample-rate specified upon creation of the view. Range: [1..(2^32)-1]
- start-time => integer, optional
Time stamp marking the beginning of requested data. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. If unspecified, server should begin with the earliest available data. Range: [0..(2^32)-1]
- time-consolidation-method => string, optional
A function to apply across the data to achieve the desired time resolution, if the requested sample-rate is different from the actual sample-rate of the data. Possible values: 'average', 'min', 'max', 'last'. Default is 'last'.
- view-name => string
The view from which data is to be retrieved. Maximum length is 255.
Outputs
- counter-data => string, optional
A single array of the retrieved data. The format is a series of comma-separated timestamp:value pairs. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. The values may have optional decimal extensions, for example 1064439599:127,1064439600:98.6,1064439601:12
- unit => string, optional
Unit of counter-data. This element will not be present if the unit is not known. Some possible values are per_sec, percent, b_per_sec, kb_per_sec, msecs, usecs. Maximum Length: 32 characters.
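The counter-data string above can be unpacked into (timestamp, value) pairs with plain Perl; a small sketch using the sample format from the description (the helper name is our own, not part of the SDK):

```perl
# Parse a counter-data string of comma-separated timestamp:value pairs,
# e.g. "1064439599:127,1064439600:98.6,1064439601:12", into numeric pairs.
sub parse_counter_data {
    my ($counter_data) = @_;
    my @samples;
    for my $pair (split /,/, $counter_data) {
        my ($ts, $val) = split /:/, $pair, 2;
        push @samples, [ $ts + 0, $val + 0 ];   # force numeric context
    }
    return @samples;
}

my @samples = parse_counter_data('1064439599:127,1064439600:98.6,1064439601:12');
# @samples now holds [1064439599, 127], [1064439600, 98.6], [1064439601, 12]
```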
Terminate a view list iteration and clean up any saved info. Privilege required is read.Inputs
- tag => string
Tag from a previous perf-view-list-iter-start
Outputs
Retrieve items from a previous call to perf-view-list-iter-start Privilege required is read.Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
The tag from a previous perf-view-list-iter- call.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Initiates a query for a list of performance views. Privilege required is read.Inputs
- appliance-name-or-id => string, optional
The name (or unique ID) of the source storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next API. The API will accept a vfiler name or id from DFM 3.3 onwards. Only the views for the matching storage system are returned.
- custom-only => boolean, optional
If TRUE, returns only custom views, by default FALSE.
- group-name-or-id => string, optional
The name (or unique ID) of the source group. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the group-list-iter-next APIs. This is ignored if appliance-name-or-id has been specified. Only the views for the matching group are returned.
- include-empty-views => boolean, optional
If false, the following will be removed from the output views: - lines without any data source
- charts without any lines
- views without any charts
By default, this is false.
- instance-name => string, optional
If specified, only views with a matching instance will be returned. Otherwise, all performance views will be returned (subject to the other optional parameters). You must specify appliance-name-or-id and object-name when specifying instance-name.
- object-name => string, optional
If specified, only views with a matching object will be returned. Otherwise, all performance views will be returned (subject to the other optional parameters).
- view-name => string, optional
If view-name is specified, only the indicated view will be retrieved. Otherwise, all performance views will be returned (subject to the other optional parameters).
Outputs
- records => integer
Number indicating how many items are available for future retrieval via calls to perf-view-list-iter-next
- tag => string
Tag to be used in subsequent calls to perf-view-list-iter-next or perf-view-list-iter-end
Modify an existing performance view. Global DFM.PerfView.Write RBAC capability is required to modify performance views.Inputs
- appliance-name-or-id => string, optional
Name (or unique ID) of the storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next API. The API will accept a vfiler name or id from DFM 3.3 onwards.
- group-name-or-id => string, optional
The group that this view is associated with. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs.
- view => perf-view
The new performance view information. Note that the new performance view name may be different from the old performance view name. If the new name is different, the new name must not already exist.
- view-name => string
Name of the performance view to modify.
Outputs
Associates an object with a viewInputs
- object-name-or-id => obj-name-or-id
The object to be associated with the view
- view-name => string
The view for which the association is to be set
Outputs
Removes the association of an object with a viewInputs
- object-name-or-id => obj-name-or-id
The object to be disassociated from the view
- view-name => string
The view for which the association is to be deleted
Outputs
Create a new provisioning policy by making a copy of an existing policy. The new policy created using this ZAPI has the same set of properties as the existing policy.
Error conditions: - EACCESSDENIED - User does not have privileges to read the existing policy from the database, or create a new policy, or both.
- EOBJECTNOTFOUND - No existing policy was found that has the given name or ID.
- EOBJECTAMBIGUOUS - Multiple objects with the given name present in database.
- EPOLICYEXISTS - A policy with the given provisioning-policy-name already exists.
- EDATABASEERROR - A database error occurred while processing the request.
- EINVALIDINPUTERROR - Invalid input was provided.
Inputs
- provisioning-policy-description => string, optional
Description of the new policy. It may contain from 0 to 255 characters. If the length is greater than 255 characters, the ZAPI fails with error code EINVALIDINPUTERROR. The default value is the empty string "".
- source-policy-name-or-id => obj-name-or-id
The name or ID of an existing policy that is copied to create the new policy.
Outputs
This ZAPI creates a new provisioning policy. Error conditions: - EACCESSDENIED - User does not have privileges to create policies.
- EDATABASEERROR - A database error occurred while processing the request.
- EINVALIDINPUTERROR - Invalid input was provided.
- EPOLICYEXISTS - A provisioning policy with given name already exists.
Inputs
Outputs
- provisioning-policy-id => obj-id
Object ID of the newly created policy.
Destroy a provisioning policy. This removes it from the database. If the policy has been applied to any dataset nodes, then the destroy operation fails; it must first be disassociated from all the dataset nodes to which it has been associated and then destroyed. Error conditions:
- EACCESSDENIED - User does not have DFM.Policy.Delete on the policy being destroyed.
- EOBJECTNOTFOUND - The specified provisioning policy does not exist in the database.
- EPROVPOLICYINUSE - The policy is assigned to one or more datasets.
- EDATABASEERROR - A database error occurred while processing the request.
- EOBJECTAMBIGUOUS - Multiple objects with the given name present in database.
- EEDITSESSIONINPROGRESS - The provisioning policy being destroyed is locked in an edit session.
Inputs
- force => boolean, optional
Force deletion even if there is an edit session in progress on the provisioning policy.
- provisioning-policy-name-or-id => obj-name-or-id
Name or id of the provisioning policy being destroyed.
Outputs
Create an edit session and obtain an edit lock on a provisioning policy to begin modifying the policy. An edit lock must be obtained before invoking provisioning-policy-modify.
Use provisioning-policy-edit-commit to end the edit session and commit the changes to the database.
Use provisioning-policy-edit-rollback to end the edit session and discard any changes made to the policy.
24 hours after an edit session on a policy begins, any subsequent call to provisioning-policy-edit-begin for that same policy automatically rolls back the existing edit session and begins a new edit session, just as if the call had used the force option. If there is no such call, the existing edit session simply continues and retains the edit lock.
Error conditions: - EEDITINPROGRESS - Another edit session already has an edit lock on the specified provisioning policy.
- EOBJECTNOTFOUND - No provisioning policy was found that has the given name or ID.
- EACCESSDENIED - User does not have the DFM.Policy.Write privilege needed to modify the provisioning policy.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- force => boolean, optional
If true, and an edit session is already in progress on the specified policy, then the previous edit is rolled back and a new edit is begun. If false, and an edit is already in progress, then the call fails with error code EEDITINPROGRESS. Default value is false.
- provisioning-policy-name-or-id => obj-name-or-id
Name or ID of a provisioning policy.
Outputs
- edit-lock-id => integer
Identifier of the edit lock on the policy. Range: [0..(2^31)-1]
Commit changes made to a provisioning policy during an edit session into the database. If all the changes to the policy are performed successfully, the entire edit is committed and the edit lock on the policy is released.
If any of the changes to the policy are not performed successfully, then the edit is rolled back (none of the changes are committed) and the edit lock on the policy is released.
Use the dry-run option to test the commit. Using this option, the changes to the policy are not committed to the database.
Error conditions: - EEDITSESSIONNOTFOUND - No edit lock was found that has the given ID.
- EACCESSDENIED - User does not have DFM.Policy.Write on the policy.
- EPOLICYEXISTS - The policy's name is being changed, and a policy with the new name already exists.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- dry-run => boolean, optional
If true, return a list of the actions the system would take after committing the changes to the policy, but without actually committing the changes. In addition, the edit lock is not released. By default, dry-run is false.
- edit-lock-id => integer
Identifier of the edit lock on the policy. The value must be an edit lock ID that was previously returned by provisioning-policy-edit-begin ZAPI.
Outputs
- dry-run-results => dry-run-result[], optional
Results of a dry run. Only returned if dry-run is true.
Roll back changes made to a provisioning policy. The edit lock on the policy will be released after the rollback.
Error conditions: - EEDITSESSIONNOTFOUND - No edit lock was found that has the given ID.
- EACCESSDENIED - User does not have privileges to modify the policy.
Inputs
- edit-lock-id => integer
Identifier of the edit lock on the policy. The value must be an edit lock ID that was previously returned by provisioning-policy-edit-begin ZAPI.
Outputs
Terminate a list iteration that had been started by a call to provisioning-policy-list-iter-start. This informs the server that it may now release any resources associated with the temporary store for the list iteration.
Error conditions: - EINVALIDTAG - The specified tag does not exist.
Inputs
- tag => string
The opaque handle returned by the prior call to provisioning-policy-list-iter-start that started this list iteration.
Outputs
Retrieve the next series of policies that are present in a list iteration created by a call to provisioning-policy-list-iter-start. The server maintains an internal cursor pointing to the last record returned. Subsequent calls to provisioning-policy-list-iter-next return the next maximum records after the cursor, or all the remaining records, whichever is fewer.
Error conditions: - EINVALIDTAG - The specified tag does not exist.
Inputs
- maximum => integer
The maximum number of policies to return. Range: [1..2^31-1].
- tag => string
The opaque handle returned by the prior call to provisioning-policy-list-iter-start that started this list iteration.
Outputs
- records => integer
Number of records actually returned in the output. Range:[0..2^31-1]
Begin a list iteration over all content in all provisioning policies in the system. Optionally, you may iterate over the content of just a single policy. After calling provisioning-policy-list-iter-start, you continue the iteration by calling provisioning-policy-list-iter-next zero or more times, followed by a call to provisioning-policy-list-iter-end to terminate the iteration.
Error conditions: - EACCESSDENIED - User does not have privileges to read the specified policy.
- EOBJECTNOTFOUND - No policy was found that has the given name or ID.
- EDATABASEERROR - A database error occurred while processing the request.
Inputs
- is-policy-readonly => boolean, optional
Filter provisioning policies based on whether they can be modified. If true, all the policies will be listed, including both modifiable and non-modifiable. If false, only the policies that can be modified are listed.
- provisioning-policy-name-or-id => obj-name-or-id, optional
Name or ID of the provisioning policy. If provisioning policy name or ID is specified, then provisioning-policy-type filter is ignored.
Outputs
- records => integer
Number of items present in the list iteration. Range: [0..2^31-1]
- tag => string
An opaque handle used to identify the list iteration. The list content resides in a temporary store in the server.
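The start/next/end sequence above can be sketched with the SDK's Perl bindings, in the style of the dfm-about example at the top of this document. This is a minimal sketch, not a definitive implementation: the server name and credentials are placeholders, and I am assuming the generated bindings accept the API's hyphenated element names as name/value pairs and return the policy list under a 'provisioning-policies' key.

```perl
use strict;
use warnings;
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);  # placeholder DFM station
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

eval {
    # begin the iteration over all provisioning policies
    my $start     = $s->provisioning_policy_list_iter_start();
    my $tag       = $start->{tag};
    my $remaining = $start->{records};

    # pull at most 50 policies per call until the store is drained
    while ($remaining > 0) {
        my $next = $s->provisioning_policy_list_iter_next(
            'tag' => $tag, 'maximum' => 50);
        last unless $next->{records};
        $remaining -= $next->{records};
        # ... process $next->{'provisioning-policies'} here (assumed key) ...
    }

    # release the server-side temporary store
    $s->provisioning_policy_list_iter_end('tag' => $tag);
};
print "Error: $@\n" if $@;
```

Calling the end API even after iter-next has returned everything matters: the temporary store is only released once the server is told the tag is no longer needed.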
This ZAPI modifies the provisioning policy settings of an existing policy in the database with the new values specified in the input. Note: the type of a provisioning policy cannot be modified after creation. Before modifying the policy, an edit lock must be obtained on the policy object.
Error conditions: - EEDITSESSIONNOTFOUND - No edit lock was found that has the given ID.
- EEDITSESSIONCONFLICTINGOP - The current modification conflicts with a previous change in the edit session.
- EACCESSDENIED - User does not have privileges to modify the policy.
- EOBJECTNOTFOUND - The policy was already destroyed during this edit session.
- EOBJECTAMBIGUOUS - Multiple objects with the given name are present in the database.
- EINVALIDINPUT - The requested modification is not applicable to the policy being modified.
- EDATABASEERROR - A database error occurred while processing the request.
- EDSCONFLICTDEDUPLICATION - Deduplication schedule cannot be set in the policy if it is attached to a SnapVault destination node. Valid only for secondary provisioning policy.
Inputs
- edit-lock-id => integer
Identifier of the edit lock on the policy. The value must be an edit lock ID that was previously returned by provisioning-policy-edit-begin.
- provisioning-policy-info => provisioning-policy-info
New values for the provisioning policy attributes. Any value specified in provisioning-policy-info replaces the existing value of that attribute. If an optional element is not specified, no change is made to the existing attribute settings in the database.
Outputs
The qtree-list-info-iter-* set of APIs are used to retrieve the list of qtrees. qtree-list-info-iter-end is used to tell the DFM station that the temporary store used by DFM to support the qtree-list-info-iter-next API for the particular tag is no longer necessary.
Inputs
- tag => string
An internal opaque handle used by the DFM station
Outputs
For more documentation, please check qtree-list-info-iter-start. The qtree-list-info-iter-next API is used to iterate over the members of the qtrees stored in the temporary store created by the qtree-list-info-iter-start API.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous qtree-list-info-iter-start. It's an opaque handle used by the DFM station to identify the temporary store created by qtree-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
The qtree-list-info-iter-* set of APIs are used to retrieve the list of qtrees in DFM. qtree-list-info-iter-start returns the union of qtree objects specified, intersected with is-snapvault-secondary-qtrees, is-in-dataset and rbac-operation. It loads the list of qtrees into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the qtrees in the temporary store. If qtree-list-info-iter-start is invoked twice, then two distinct temporary stores are created.
Inputs
- include-is-available => boolean, optional
If true, the is-available status is calculated for each qtree, which may make the call to this ZAPI take much longer. Default is false.
- is-direct-member-only => boolean, optional
If true, only return the qtrees that are direct members of the specified resource group. Default value is false. This field is meaningful only if a resource group name or id is given for the object-name-or-id field.
- is-direct-vfiler-child => boolean, optional
If true, only list qtrees that are direct children of a Vfiler. If false, only list qtrees that are indirect children of a Vfiler. If not specified, list qtrees that are direct and indirect children of a Vfiler. This filter does not have any effect if object-name-or-id is not a Vfiler.
- is-dp-ignored => boolean, optional
If true, only list qtrees that have been set to be ignored for purposes of data protection. If false, only list qtrees that have not been set to be ignored for purposes of data protection. If not specified, list all qtrees without taking into account whether they have been ignored or not.
- is-in-dataset => boolean, optional
If true, only list qtrees which only contain data which is protected by a dataset. If false, only list qtrees containing data which is not protected by a dataset. If not specified, list all qtrees whether they are in a dataset or not.
- is-unprotected => boolean, optional
If true, only list qtrees that are not protected, which means they are not in any SnapMirror or SnapVault relationship. If false or not set, list all qtrees.
- object-management-filter => object-management-interface, optional
Filter the objects based on the Data ONTAP interface that provides complete management for the object (ONTAP CLI, SNMP, ONTAPI, and so on). If no filter is supplied, all objects are considered.
- object-name-or-id => string, optional
Name or identifier of an object to list qtrees for. The allowed object types for this argument are: - Resource Group
- Dataset
- Storage Set
- Host
- Aggregate
- Volume
- Qtree
- GenericAppObject
If object-name-or-id identifies a qtree, that single qtree will be returned. If object-name-or-id resolves to more than one qtree, all of them will be returned. If no object-name-or-id is provided, all qtrees will be listed. If object-name-or-id identifies a dataset or a storage set, only qtrees that are direct members of that dataset or storage set will be returned.
- rbac-operation => string, optional
Name of an RBAC operation. If specified, only return qtrees for which the authenticated admin has the required capability. A capability is an operation/resource pair. The resource is the volume where the qtree lives. The possible values for operation can be obtained by calling rbac-operation-info-list. If operation is not specified, it defaults to DFM.Database.Read. For more information about operations, capabilities, and user roles, see the RBAC APIs.
- sort-results => boolean, optional
If true, sort the results by qtree name in ascending alphabetical order using US ASCII lexicographic rules. Default value is true. The results are always sorted unless all qtrees in the system are requested and "sort-results" is set to false.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with qtree-list-info-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to qtree-list-info-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
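A filtered qtree iteration using the start/next/end pattern might look like the minimal sketch below, in the style of the dfm-about example at the top of this document. The server name and credentials are placeholders, and I am assuming the generated bindings accept the documented hyphenated element names as name/value pairs.

```perl
use strict;
use warnings;
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);  # placeholder DFM station
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

eval {
    # list only unprotected qtrees, sorted by name
    my $start = $s->qtree_list_info_iter_start('is-unprotected' => 'true',
                                               'sort-results'   => 'true');
    my $tag = $start->{tag};
    print "Qtrees found: $start->{records}\n";

    my $next = $s->qtree_list_info_iter_next('tag' => $tag, 'maximum' => 100);
    # ... walk the returned qtree records here ...

    $s->qtree_list_info_iter_end('tag' => $tag);  # release the temporary store
};
print "Error: $@\n" if $@;
```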
Modify a qtree's information. If modification of any property fails, nothing is changed.
Error Conditions: - EACCESSDENIED - When the user does not have DFM.Database.Write capability on the specified qtree.
- EINVALIDINPUT - When invalid input is specified.
- EOBJECTNOTFOUND - When the qtree-name-or-id does not correspond to a qtree.
- EDATABASEERROR - On database error.
Inputs
- is-dp-ignored => boolean, optional
True if an administrator has chosen to ignore this object for purposes of data protection.
Outputs
Rename a qtree on a storage system and in the DataFabric Manager database. The new qtree remains in the same volume as the original qtree. The first step renames the given qtree on the storage system. If that fails, processing stops and the API returns EINTERNALERROR to the caller along with an appropriate error message. The second step renames the given qtree in the DFM database. If that fails, processing stops and the same EINTERNALERROR is returned, again with an appropriate error message. There is no retrying or undoing of any step should it fail. The API relies on the DFM monitor to undo the rename automatically; however, the undo does not happen right away because it depends on the DFM monitor's regular update schedule. The DFM monitor periodically ensures that storage system resources are matched in its database, updating the database to be consistent with the storage system. Before invoking this API, the login credentials of the storage system where the qtree resides must be specified in DFM's database using the normal DFM procedure.
Inputs
- qtree-name-new => string
New name for qtree on storage system. This is just the qtree name, not including the storage system or volume parts.
- qtree-name-or-id => string
Qtree to rename. If a name is specified, it must be in the form specified by DFM's normal container and containment rules, such as hostname:/volume/qtree_name (for example, breeze:/vol0/myqtree). Range: [1..2^31-1]
Outputs
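Given the two-step behavior described above, a rename call is worth wrapping so that EINTERNALERROR can be surfaced. A minimal sketch follows; the server, credentials, and qtree names are placeholders, the binding name qtree_rename is my guess from the SDK's hyphen-to-underscore naming convention, and I am assuming the binding accepts the documented element names as name/value pairs.

```perl
use strict;
use warnings;
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);  # placeholder DFM station
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

eval {
    # rename breeze:/vol0/myqtree to breeze:/vol0/projects (same volume)
    # qtree_rename is a guessed binding name, per the SDK naming convention
    $s->qtree_rename('qtree-name-or-id' => 'breeze:/vol0/myqtree',
                     'qtree-name-new'   => 'projects');
};
if ($@) {
    # EINTERNALERROR can mean either the filer-side or the DFM-database-side
    # rename failed; the DFM monitor reconciles on its next update cycle
    my ($reason, $code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Rename failed: $reason (code $code)\n";
}
```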
Start monitoring a previously unmonitored primary qtree from the DataFabric Manager. The error EQTREEMONITORONFAIL means that an attempt to start monitoring the specified qtree failed.
Inputs
- qtree-name-or-id => string
A qtree name or identifier to start monitoring. The name must be in the form specified by DFM's normal container and containment rules, such as hostname:/volume/qtree_name (for example, breeze:/vol0/myqtree). Range: [1..2^31-1]
Outputs
Stop monitoring a primary qtree from the DataFabric Manager. Monitoring cannot be stopped for a qtree that is being managed by an application (the errno returned will be EQTREEMANAGEDBYAPP).
Inputs
- qtree-name-or-id => string
A qtree name or identifier to stop monitoring. The name must be in the form specified by DFM's normal container and containment rules, such as hostname:/volume/qtree_name (for example, breeze:/vol0/myqtree). Range: [1..2^31-1]
Outputs
Checks whether the given admin or usergroup has access to the specified resource. For example, rbac-access-check will return "allow" or "deny" on the following query: is admin joe allowed to configure storage system host1.abc.xyz.com from DFM? One could pass the following as input to answer this question: admin=joe operation=DFM.Event.Read resource=host1.abc.xyz.com. To prevent an admin from querying everyone's privileges on the system, the system only allows admins to check their own access, by cross-referencing with how they authenticated to the API server. If the admin has Full Control, or has the privilege to query other admins' access, then they are allowed to make the query. Per software security best practice, this API limits error reporting when access is denied on a particular resource.
Inputs
Outputs
- global-usergroup-status => string
If roles assigned to global usergroup accounts were considered, but it was not possible to get global group account information from the system, then global-usergroup-status will contain the reason why. An empty value in this field means that roles assigned to global usergroup accounts were considered and it was possible to obtain this information from the system.
- local-usergroup-status => string
If roles assigned to local usergroup accounts were considered, but it was not possible to get local group account information from the system, then local-usergroup-status will contain the reason why. An empty value in this field means that roles assigned to local usergroup accounts were considered and it was possible to obtain this information from the system.
- result => string
Result of whether the given admin or usergroup is allowed to perform the specified action on the given resource. In essence, it answers whether the given admin or usergroup can perform the specified operation on the given resource. Possible values: "allow" for access allowed and "deny" for access denied.
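An access check for the worked query in the description might be sketched as follows. The server and credentials are placeholders, and the input element names admin, operation, and resource are taken from the example query above; they are assumptions, since the input elements are not enumerated in this listing.

```perl
use strict;
use warnings;
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);  # placeholder DFM station
$s->set_admin_user('joe', 'password');           # admins may only check themselves
$s->set_server_type('DFM');

eval {
    # element names follow the worked query above and are assumptions
    my $out = $s->rbac_access_check('admin'     => 'joe',
                                    'operation' => 'DFM.Event.Read',
                                    'resource'  => 'host1.abc.xyz.com');
    print "Access: $out->{result}\n";  # "allow" or "deny"
};
print "Error: $@\n" if $@;
```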
Ends listing of admins.
Inputs
- tag => string
Tag returned from rbac-admin-list-iter-start.
Outputs
Returns items from the list generated by rbac-admin-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
The tag returned in rbac-admin-list-iter-start call.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
Lists all the administrators and their attributes.
Inputs
Outputs
- records => integer
Number indicating how many items are available for future retrieval with rbac-admin-list-info-iter-next.
- tag => string
Tag to be used in subsequent calls to rbac-admin-list-info-iter-next or rbac-admin-list-info-iter-end.
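The admin-listing iterators follow the same start/next/end pattern as the other iterator families. A minimal sketch (placeholder server and credentials; I am assuming the generated bindings accept the documented element names as name/value pairs):

```perl
use strict;
use warnings;
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);  # placeholder DFM station
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

eval {
    # load the list of administrators into a server-side temporary store
    my $start = $s->rbac_admin_list_info_iter_start();

    # fetch the first batch of up to 25 admin records
    my $next = $s->rbac_admin_list_info_iter_next('tag'     => $start->{tag},
                                                  'maximum' => 25);
    print "Fetched $next->{records} of $start->{records} admins\n";

    # release the temporary store
    $s->rbac_admin_list_info_iter_end('tag' => $start->{tag});
};
print "Error: $@\n" if $@;
```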
Assign an existing role to an existing administrator or usergroup. The administrator effectively gains the capabilities from the role and its inherited roles. As for a usergroup, all members of the usergroup will gain the capabilities assigned to that role and its inherited roles.
Inputs
- admin-name-or-id => string
The admin or usergroup name, or object id, of the admin or usergroup to add the role to.
Outputs
List the administrators or usergroups assigned to an existing role, directly or indirectly. In essence, this API lists the admins or usergroups that have the capabilities of the given role. This API drills into all the possible ways that an admin or usergroup can effectively have the given role. Admins or usergroups are assigned roles indirectly via role inheritance or usergroup assignment (note: a usergroup can be a member of another usergroup). So an admin or usergroup will be listed if any of the following conditions apply:
- The given role is directly assigned to the admin or usergroup.
- The admin or usergroup has a directly assigned role that inherits the given role.
- The admin or usergroup gains the given role via usergroup membership.
Inputs
- role-name-or-id => string
A role name or object id of a role.
Outputs
Remove one or more roles from an administrator or usergroup. The admin will no longer have the capabilities gained from the role(s) and their inherited roles. As for a usergroup, the members of the usergroup will no longer have the capabilities gained from the role(s) and their inherited roles. If delete-all is not specified or is FALSE, then role-name-or-id must be specified. If delete-all is TRUE, then all roles assigned to the admin will be removed.
Inputs
- admin-name-or-id => string
An admin or usergroup name or object id of an admin or usergroup to remove role from. If delete-all is not specified or is FALSE then role-name-or-id must be specified.
Outputs
Add a new operation to the RBAC system. An operation is the ability to perform an action on a particular resource type. An operation is tied to a specific application so that different applications can manage access controls that are specific to them.
Inputs
Outputs
Delete an existing operation.
Inputs
- operation => string
Operation to delete
Outputs
Get information about an existing operation or all operations in the system.
Inputs
Outputs
Add a new role to the RBAC system.
Inputs
- description => string, optional
Description of the role. The maximum length is 255 characters.
Outputs
List the roles assigned to an existing administrator or usergroup. A role is considered assigned to the administrator if that role is gained directly, or indirectly via role inheritance or usergroup membership.
Inputs
- admin-name-or-id => string
An administrator or usergroup name, or object id, of the administrator or usergroup whose assigned roles are to be listed.
- follow-role-inheritance => boolean, optional
If TRUE, return all roles that the given role inherits, directly and indirectly. If FALSE or not set, return only roles that are directly assigned to the given administrator or usergroup.
Outputs
- admin-name-or-id => rbac-admin-name-or-id
The name of the admin or usergroup or the object id of the admin or usergroup.
- global-usergroup-status => string, optional
If roles assigned to global usergroup accounts were considered, but it was not possible to get global group account information from the system, then global-usergroup-status will contain the reason why. An empty value in this field means that roles assigned to global usergroup accounts were considered and it was possible to obtain this information from the system.
- local-usergroup-status => string, optional
If roles assigned to local usergroup accounts were considered, but it was not possible to get local group account information from the system, then local-usergroup-status will contain the reason why. An empty value in this field means that roles assigned to local usergroup accounts were considered and it was possible to obtain this information from the system.
Add an existing resource/operation pair to a role. In essence, this adds a capability to a role.
Inputs
- operation => string
An existing operation to add to the specified role.
- role-name-or-id => string
Role name or object id of the role to which the capability (operation/resource pair) is added.
Outputs
Remove one or more capabilities (resource/operation pairs) from an existing role. If delete-all is TRUE, it removes all capabilities from the given role. Otherwise, it removes only the given capability (resource/operation pair). If delete-all is not specified or is FALSE, then operation and resource must be specified.
Inputs
- delete-all => boolean, optional
If TRUE, removes all the capabilities for given role. If FALSE, a valid operation must be provided in the operation parameter.
- operation => string, optional
An operation to remove. If delete-all is FALSE, the caller must provide a valid operation here.
- role-name-or-id => string
Role name or object id of the role from which to remove the capability.
Outputs
Delete an existing role from the RBAC system.
Inputs
- role-name-or-id => string
A role name or object id of a role to delete
Outputs
Disinherit one or more roles. The effect is that the affected role will no longer have the capabilities gained from the disinherited role(s). If disinherit-all is not specified or is FALSE, then disinherited-role-name-or-id must be specified.
Inputs
- role-name-or-id => string
An existing role name or object id of a role to modify.
Outputs
Get the operations, capabilities and inherited roles that one or more roles have.
Inputs
- follow-role-inheritance => boolean, optional
If TRUE, return all roles that the given role inherits directly and indirectly. If FALSE or not set, return only roles that are directly inherited by the given role.
- role-name-or-id => string, optional
Role name or object id of a role. If not specified, then it gets info on all the roles.
Outputs
Inherit from a role. The effect is that the affected role will gain the capabilities from the inherited role.
Inputs
- role-name-or-id => string
A role name or object id of a role to modify.
Outputs
Modify an existing role name and/or its description.
Inputs
- role-name-or-id-old => string
A role name or object id of a role to modify. Either role-name-new or role-description-new or both must be specified. The object Id of a role cannot be modified.
Outputs
API to list report categories and their details.
Inputs
- category-name-or-id => string, optional
The name or id of the category. If specified, only the specified category along with its sub categories are listed. If not specified, all categories and their sub categories will be listed.
- category-provenance => string, optional
Possible values:"custom" returns only custom categories. "canned" returns only canned categories. If not specified both custom and canned categories are returned.
Outputs
API to list reports created using the report designer.
Inputs
- category-name-or-id => string, optional
Contains the category name or the id. If specified, only reports which are in the specified category are returned. If not specified, all the reports will be listed.
Outputs
API to schedule a report. Only reports listed by the report-designer-list API can be scheduled using this API. It creates a new schedule based on schedule-content-info, and a new report schedule object with the schedule and the corresponding report being scheduled.
Inputs
Outputs
API to delete a report schedule object and its associated schedule. Only report schedules for reports returned by the report-designer-list API can be deleted using this API.
Inputs
Outputs
API to modify a report schedule and/or its schedule details. Only report schedules for reports returned by the report-designer-list API can be modified using this API.
Inputs
- report-schedule-info => report-schedule-info, optional
Details of modifiable contents of a report schedule. If this is not specified, schedule-content-info should be specified.
- report-schedule-name-or-id => obj-name-or-id
Specifies the name or id of the report schedule to be modified.
- schedule-content-info => schedule-content-info, optional
Details of a schedule. If this is not specified, report-schedule-info should be specified.
Outputs
Shares the report output of the specified report, in the specified format, with the given email-address-list.
Inputs
- subject-field => string, optional
The string which should be sent as subject field in the email. Default: Generated Report: (provided-by-user) on (provided by user).
Outputs
Terminate a view list iteration and clean up any saved info.
Inputs
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
Returns items from a previous call to report-graph-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
Initiates a query for a list of graphs for a particular report.
Inputs
- report-name-or-id => string
Name of the report for which graphs have to be listed. For custom reports it can either be name or id. Range for id: [1..(2^31)-1]
- target-object-name-or-id => obj-name-or-id
Name or id of the object for which graphs have to be listed. Range for id: [1..(2^31)-1]
Outputs
- records => integer
Number indicating how many items are available for future retrieval with report-graph-list-info-iter-next. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store. Used in subsequent calls to report-graph-list-info-iter-next or report-graph-list-info-iter-end.
Terminate a view list iteration and clean up any saved info.
Inputs
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
Returns items from a previous call to report-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
- records => integer
The number of records actually returned. Range: [0..2^31-1].
Initiates a query for a list of reports that can be scheduled.
Inputs
- report-application => report-application, optional
Specifies the application for which reports have to be listed. If not specified then all the reports are listed.
Outputs
- records => integer
Number indicating how many items are available for future retrieval with report-list-info-iter-next. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store. Used in subsequent calls to report-list-info-iter-next or report-list-info-iter-end.
Deletes a report output.
Inputs
Outputs
Terminate a view list iteration and clean up any saved info.
Inputs
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
Returns items from a previous call to report-output-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
Initiates a query for a list of report outputs.
Inputs
- is-successful => boolean, optional
Specifies whether the status of the report output was successful. If the input element is not present, all report outputs (success and failure) are listed.
- report-application => report-application, optional
Specifies the application for which reports have to be listed. Default is 'control_center'.
- report-name-or-id => string, optional
Name of the report. For custom reports it can either be name or id. Range for id: [1..(2^31)-1]
- report-output-id => integer, optional
Specifies the id of the report output. If this is specified then filtering based on object-name-or-id and report-schedule-name-or-id is ignored. Range: [1..(2^31)-1]
- report-schedule-name-or-id => obj-name-or-id, optional
Specifies the name or id of the report schedule for which the output has to be listed. Range for id: [1..(2^31)-1]
- target-object-name-or-id => obj-name-or-id, optional
Specifies one of the following: - the name or id of the group. - the name or id of the object. Range for id: [1..(2^31)-1]
Outputs
- records => integer
Number indicating how many items are available for future retrieval with report-output-list-info-iter-next. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store. Used in subsequent calls to report-output-list-info-iter-next or report-output-list-info-iter-end.
Reads report output data from a file. The API will fail if length exceeds 1 MB.
Inputs
- report-output-id => integer
Specifies the id of the report output. Range: [1..(2^31)-1]
Outputs
- length => integer
Number of bytes actually read from the file. If this value is 0, then you have attempted to read at or past the end of the file. Range: [0..2^20]
Add a new report schedule.
Inputs
Outputs
- report-schedule-id => obj-id
Specifies the id of the report schedule created. Range: [1..(2^31)-1]
Deletes a report schedule.
Inputs
- report-schedule-name-or-id => obj-name-or-id
Name or id of the report schedule. Range for id: [1..(2^31)-1]
Outputs
Disable a report schedule.
Inputs
- report-schedule-name-or-id => obj-name-or-id
Name or id of the report schedule. Range for id: [1..(2^31)-1]
Outputs
Enable a report schedule.
Inputs
- report-schedule-name-or-id => obj-name-or-id
Name or id of the report schedule. Range for id: [1..(2^31)-1]
Outputs
Terminate a view list iteration and clean up any saved info.
Inputs
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
Returns items from a previous call to report-schedule-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store.
Outputs
Initiates a query for a list of report schedules.
Inputs
- is-enabled => boolean, optional
Specifies whether the state of the report schedule is enabled. If the input element is not present, all report schedules (enabled and disabled) are listed.
- report-application => report-application, optional
Specifies to list only those report schedules which involve reports that belong to this application. Default is 'control_center'.
- report-name-or-id => string, optional
Name of the report. For custom reports it can either be name or id. Range for id: [1..(2^31)-1]
- report-schedule-name-or-id => obj-name-or-id, optional
Specifies the name or id of the report schedule to be listed. If this is specified, any filtering based on report-name-or-id, report-application, or target-object-name-or-id is ignored. If not specified, only the report schedules satisfying all the conditions will be listed. Range for id: [1..(2^31)-1]
- target-object-name-or-id => obj-name-or-id, optional
Specifies one of the following: - the name or id of the group. - the name or id of the object. If specified, only the report schedules which have the target object as that of the input are listed. Range for id: [1..(2^31)-1]
Outputs
- records => integer
Number indicating how many items are available for future retrieval with report-schedule-list-info-iter-next. Range: [1..(2^31)-1]
- tag => string
An opaque handle used by the DFM station to identify a temporary store. Used in subsequent calls to report-schedule-list-info-iter-next or report-schedule-list-info-iter-end.
Modify a report schedule.
Inputs
- report-schedule-name-or-id => obj-name-or-id
Specifies the name or id of the report schedule to be modified. Range for id: [1..(2^31)-1]
Outputs
Runs a report schedule immediately.
Inputs
- report-schedule-name-or-id => obj-name-or-id
Name or id of the report schedule. Range for id: [1..(2^31)-1]
Outputs
- report-output-id => integer
Specifies the id of the report output generated. Range: [1..(2^31)-1]
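Chaining these two calls — run a schedule, then read the generated output — might look like the sketch below. The binding names report_schedule_now and report_output_read are my guesses from the SDK's hyphen-to-underscore naming convention, the server and schedule names are placeholders, and I am assuming the run completes synchronously enough for the output id to be readable right away.

```perl
use strict;
use warnings;
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);  # placeholder DFM station
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

eval {
    # run the schedule now; binding name is a guess from the naming convention
    my $run = $s->report_schedule_now(
        'report-schedule-name-or-id' => 'weekly-capacity');  # placeholder name
    my $id = $run->{'report-output-id'};

    # read the generated output (the API fails past 1 MB, per the docs above)
    my $out = $s->report_output_read('report-output-id' => $id);
    print "Read $out->{length} bytes of report output\n";
};
print "Error: $@\n" if $@;
```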
Add member -- storage system or aggregate -- to an existing resource pool.
Error Conditions: - EACCESSDENIED - When the user does not have DFM.Database.Write capability on the specified resource pool or DFM.Database.Read privilege on the object being added.
- ERESOURCEPOOLDOESNOTEXIST - When the specified resource pool does not exist.
- EOBJECTNOTFOUND - When the specified object of a valid type is not found or does not exist at all.
- EINVALIDMEMBERTYPE - When the aggregate to add contains a traditional volume or is an aggregate snapshot.
- EOBJECTINANOTHERDYNAMICREFERENCE - When the specified object, or its related members, are already in another resource pool or storage set. Try again with the move flag to move resources across resource pools. If the resource is used by a storage set, you need to manually remove it from the storage set and add it to the resource pool.
- EOBJECTINDIRECTMEMOFANOTHERDYNAMICREF - The object being added to the resource pool is in another resource pool and is an indirect member of that resource pool.
- EOBJECTEXISTSINRESOURCEPOOL - When the specified object already exists in the resource pool.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous as to whether it denotes a storage system or an aggregate. Try again with a fully qualified name or object id.
- EDATABASEERROR - On database error.
Inputs
- member-name-or-id => string
Name or identifier of the member to add to the resource pool. Possible values are name/id of storage system or aggregate. It should be a valid DFM object name or object id. A valid DFM object name should contain at least one non-numeric character. A valid DFM object id should be in the range of [1..2^31-1].
- move-if-exists => boolean, optional
If the object to add, or one or more of its contents, is already in a resource pool, this flag specifies that the object should be removed from the old dynamic reference and added to the resource pool. By default, the add operation fails if the object, or one of its contents, is already in a storage set or resource pool.
- resourcepool-name-or-id => string
Name or identifier of the resource pool to extend. It should be a valid DFM object name or object id. A valid DFM object name should contain at least one non-numeric character. A valid DFM object id should be in the range of [1..2^31-1].
Outputs
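Extending a resource pool with the inputs documented above might be sketched as follows. The binding name resourcepool_add is my guess from the SDK's hyphen-to-underscore naming convention; the server, pool, and member names are placeholders.

```perl
use strict;
use warnings;
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);  # placeholder DFM station
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

eval {
    # move the aggregate into this pool even if it already belongs to
    # another resource pool (move-if-exists)
    $s->resourcepool_add('resourcepool-name-or-id' => 'gold-pool',
                         'member-name-or-id'       => 'filer1:aggr0',
                         'move-if-exists'          => 'true');
};
if ($@) {
    # on EOBJECTAMBIGUOUS, retry with a fully qualified name or object id
    print "Add failed: $@\n";
}
```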
Create a new, empty resource pool.
Error Conditions: - EINVALIDINPUT - When an invalid name is specified. A valid name is non-empty and contains at least one non-numeric character.
- ERESOURCEPOOLEXISTS - When the specified resource pool already exists.
- EDATABASEERROR - On database error.
Inputs
- resourcepool => resourcepool-info
Information about the resource pool to create. The name field must be specified; the other fields are optional. If the optional fields are specified, they are set during creation.
Outputs
- resourcepool-id => integer
Identifier of the new resource pool. It is a valid DFM object id, in the range [1..2^31-1].
Destroy a resource pool. If the resource pool is in use by a storage set or is not empty, it may only be destroyed by specifying the force flag. If the resource pool is in use by a storage service, it can be destroyed only after removing it from the storage service.
Error Conditions: - EACCESSDENIED - When the user does not have DFM.Database.Delete capability on the specified resource pool.
- ERESOURCEPOOLDOESNOTEXIST - When the specified resource pool does not exist.
- ERESOURCEPOOLNOTEMPTY - When the specified resource pool is not empty. Try again with force flag.
- ERESOURCEPOOLINUSE - When the specified resource pool is in use by a storage set. Try again with force flag.
- EDATABASEERROR - On database error.
Inputs
- force => boolean, optional
If specified, allows destroying a resource pool that has members or is in use by a storage set.
- resourcepool-name-or-id => string
Name or identifier of resource pool to destroy. It should be a valid DFM object name or object id. A valid DFM object name should contain at least one non-numeric character. A valid DFM object id should be in the range of [1..2^31-1].
Outputs
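Destroying a non-empty pool requires the force flag described above. A hedged sketch, with a hypothetical pool name and the same error handling as the dfm-about example:

```perl
eval {
    $s->resourcepool_destroy(
        'resourcepool-name-or-id' => 'rp1',   # hypothetical pool
        'force'                   => 'true',  # needed if the pool is non-empty
                                              # or in use by a storage set
    );
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Destroy failed: $error_reason ($error_code)\n";
}
```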
Get the default values of attributes defined by this ZAPI set.
Inputs
Outputs
Ends iteration of resource pools.
Inputs
- tag => string
Tag from a previous resourcepool-list-info-iter-start.
Outputs
Get next records in the iteration started by resourcepool-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of records to retrieve.
Range: [1..2^31-1]
- tag => string
Tag from a previous resourcepool-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. A value of 0 indicates the end of records.
Range: [1..2^31-1]
Starts iteration to list resource pools.
Error Conditions: - EACCESSDENIED - When the user does not have DFM.Database.Read capability on the specified resource pools.
- ERESOURCEPOOLDOESNOTEXIST - When the specified resource pool is not found in the database.
- EDATABASEERROR - On database error.
Inputs
- check-licenses => license[], optional
Filter for licensed services running on Data ONTAP. If this input is present, only resource pools containing at least one aggregate hosted on a filer having these licenses will have the resourcepool-is-provisionable flag set to TRUE.
- include-free-space => boolean, optional
If this value is true, then free space information will be included with each resource pool. If this value is false or not specified, then no free space information is included. resourcepool-space-status is returned only when include-free-space is true.
- object-name-or-id => obj-name-or-id, optional
Name or identifier of a resource pool, group, dataset, or storage service. If unspecified, all resource pools are listed.
- resource-tag => resource-tag, optional
If this input is present, only the resource pools having this tag, or containing at least one member having this tag, will have the resourcepool-is-provisionable flag set to TRUE.
Outputs
- records => integer
Number of items that have been saved for future retrieval with resourcepool-list-info-iter-next.
Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to resourcepool-list-info-iter-next.
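The three iter APIs work together: iter-start returns a tag and a record count, iter-next is called repeatedly against that tag, and iter-end releases the server-side temporary store. A sketch of the full loop; the exact output element holding the returned records is not named in this reference, so it is only indicated in a comment:

```perl
eval {
    my $start = $s->resourcepool_list_info_iter_start(
        'include-free-space' => 'true',
    );
    my $tag       = $start->{tag};
    my $remaining = $start->{records};
    while ($remaining > 0) {
        my $chunk = $s->resourcepool_list_info_iter_next(
            'tag'     => $tag,
            'maximum' => 20,
        );
        last if $chunk->{records} == 0;   # 0 records indicates the end
        $remaining -= $chunk->{records};
        # process the returned resource pool records from $chunk here
    }
    # tell the server the temporary store for this tag is no longer needed
    $s->resourcepool_list_info_iter_end('tag' => $tag);
};
print "Error: $@\n" if $@;
```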
Ends iteration of resource pool members.
Inputs
- tag => string
Tag from a previous resourcepool-member-list-info-iter-start.
Outputs
Get next records in the iteration started by resourcepool-member-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of records to retrieve.
Range: [1..2^31-1]
- tag => string
Tag from a previous resourcepool-member-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. A value of 0 indicates the end of records.
Range: [1..2^31-1]
Starts iteration to list members of specified resource pool.
Error Conditions: - EACCESSDENIED - When the user does not have DFM.Database.Read capability on the specified resource pools.
- ERESOURCEPOOLDOESNOTEXIST - When the specified resource pool does not exist.
- EDATABASEERROR - On database error.
Inputs
- include-dataset-space-info => boolean, optional
If true, space utilized by datasets on each aggregate is returned. Default value: false. This input is ignored when the run-provisioning-checks element is true.
- include-indirect => boolean, optional
If true, indirect members are included. By default they are not included.
- member-type => string, optional
Type of object to be returned. Possible values are 'filer' or 'aggregate'.
- resourcepool-name-or-id => string
Name or identifier of the resource pool to query. It should be a valid DFM object name or object id. A valid DFM object name should contain at least one non-numeric character. A valid DFM object id should be in the range of [1..2^31-1].
- run-provisioning-checks => boolean, optional
If true, provisioning-related checks are run on all the members of the resource pool using the provisioning policy and provisioning request info specified in the provisioning-policy-name-or-id and provisioning-member-request-info input elements.
Outputs
- records => integer
Number of items that have been saved for future retrieval with resourcepool-member-list-info-iter-next.
Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to resourcepool-member-list-info-iter-next.
Modify a resource pool's information. If modifying of one property fails, nothing will be changed.
Error Conditions: - EACCESSDENIED - When the user does not have DFM.Database.Write capability on the specified resource pool.
- EINVALIDINPUT - When invalid input is specified. A resourcepool-name should be non-empty and contain at least one non-numeric character.
The resourcepool-contact value should be an email address without any white space. - ERESOURCEPOOLDOESNOTEXIST - When the specified resource pool does not exist.
- ERESOURCEPOOLEXISTS - A resource pool with the new name already exists.
- EDATABASEERROR - On database error.
Inputs
- resourcepool => resourcepool-info
New information about a resource pool to modify. If any field is not specified, that field will not be changed. If none of the optional parameters are specified, then this API does nothing.
- resourcepool-name-or-id => string
Name or identifier of the resource pool to modify. It should be a valid DFM object name or object id. A valid DFM object name should contain at least one non-numeric character. A valid DFM object id should be in the range of [1..2^31-1].
Outputs
Modify the properties of members of resource pool.
Error Conditions: - EACCESSDENIED - When the user does not have DFM.Database.Write capability on the specified resource pool or DFM.Database.Write privilege on the object being modified.
- ERESOURCEPOOLDOESNOTEXIST - When the specified resource pool does not exist.
- EOBJECTNOTFOUND - When the specified object of valid type to add is not found or object does not exist at all.
- EINVALIDMEMBERTYPE - When an object is not a member of the resource pool.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous as to whether it denotes a filer or an aggregate. Try again with a fully qualified name or object id.
- EDATABASEERROR - On database error.
Inputs
- member-name-or-id => obj-name-or-id
Name or identifier of the member to modify in the resource pool.
- resource-tag => resource-tag, optional
Label for the resource pool member.
- resourcepool-name-or-id => obj-name-or-id
Name or identifier of the resource pool.
Outputs
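A sketch of tagging a resource pool member; the binding subroutine name resourcepool_member_modify is assumed (the API name is not stated above), and all argument values are hypothetical:

```perl
eval {
    # assumed binding name for the member-modify API described above
    $s->resourcepool_member_modify(
        'resourcepool-name-or-id' => 'rp1',   # hypothetical pool
        'member-name-or-id'       => 'aggr1', # hypothetical member
        'resource-tag'            => 'tier1', # hypothetical label
    );
};
print "Error: $@\n" if $@;
```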
Remove a member (storage system or aggregate) from a resource pool.
Error Conditions: - EACCESSDENIED - When the user does not have DFM.Database.Delete capability on the specified resource pool or DFM.Database.Read privilege on the object being removed.
- ERESOURCEPOOLDOESNOTEXIST - When the specified resource pool does not exist.
- EOBJECTNOTFOUND - When the specified object to remove is not found.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous as to whether it denotes a storage system or an aggregate. Try again with a fully qualified name or object id.
- EOBJECTNOTINRESOURCEPOOL - When the specified object does not exist in the resource pool.
- EDATABASEERROR - On database error.
Inputs
- member-name-or-id => string
Name or identifier of member to remove from the resource pool. Possible values are name/id of storage system or aggregate. It should be a valid DFM object name or object id. A valid DFM object name should contain at least one non-numeric character. A valid DFM object id should be in the range of [1..2^31-1].
- resourcepool-name-or-id => string
Name or identifier of the resource pool to modify. It should be a valid DFM object name or object id. A valid DFM object name should contain at least one non-numeric character. A valid DFM object id should be in the range of [1..2^31-1].
Outputs
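Removing a member mirrors the add-member call. A sketch with hypothetical names, using the same error parsing as the dfm-about example:

```perl
eval {
    $s->resourcepool_remove_member(
        'resourcepool-name-or-id' => 'rp1',   # hypothetical pool
        'member-name-or-id'       => 'aggr1', # hypothetical member
    );
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```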
Check the free space in the resource pool against the nearly-full and full thresholds and generate appropriate events for the resource pool.
Inputs
- resourcepool-name-or-id => obj-name-or-id
Name or identifier of resource pool
Outputs
Returns the amount of space that would be freed when a set of Snapshot copies is deleted from a specified volume. This API gets information dynamically from the filer and is a blocking call.
Inputs
- volume-name-or-id => obj-name-or-id
Name or identifier of the volume on which the reclaimable space has to be computed by deleting the specified set of Snapshot copies.
Outputs
- reclaimable-size => integer
Size in bytes of space reclaimable if the specified set of Snapshot copies were deleted. Range : [0..2^63-1].
- snapshot-shared-data => integer
Total number of bytes shared between active file system and snapshots after the given snapshots are deleted. Range: [0..2^63-1]
Ends iteration of Snapshot copies.
Inputs
- tag => string
Tag from a previous snapshot-list-info-iter-start.
Outputs
Retrieve the next records in the iteration started by snapshot-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of records to retrieve.
Range: [1..2^31-1]
- tag => string
Tag from a previous snapshot-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. A value of 0 indicates the end of records.
Range: [1..2^31-1]
Returns information on a list of Snapshot copies.
Inputs
- include-backup-info => boolean, optional
If true and snapshot is part of a dataset backup version, the backup version information which contains this snapshot as member is returned. Default value is false.
- object-management-filter => object-management-interface, optional
Filter the object based on the Data ONTAP interface that provides complete management for the object (e.g., Data ONTAP CLI, SNMP, ONTAPI). If no filter is supplied, all objects will be considered.
- object-name-or-id => obj-name-or-id, optional
Name or identifier of the object whose Snapshot copies are to be listed. Valid types of objects are: - volume - Lists Snapshot copies of this volume
- aggregate - Lists Snapshot copies of all volumes in the aggregate
- filer - Lists all Snapshot copies of all volumes on this filer.
- vFiler - Lists all Snapshot copies for volumes exclusively belonging to this vFiler.
- Vserver - Lists all Snapshot copies for volumes exclusively belonging to this Vserver.
- dataset - Lists all Snapshot copies of all volumes that are members of all nodes of the dataset.
- storage set - Lists all Snapshot copies of all volumes that are members of the storage set.
- resource group - Lists all Snapshot copies of all volumes that are direct or indirect members of the resource group.
If this input is not specified, then the ZAPI lists all Snapshot copies known to DataFabric Manager server.
Outputs
- records => integer
Number of items that have been saved for future retrieval with snapshot-list-info-iter-next.
Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to snapshot-list-info-iter-next.
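Listing the Snapshot copies of a single volume uses the same tag-driven loop as the other iterators. A sketch; the volume name is hypothetical and the output element holding the Snapshot records is not named in this reference:

```perl
eval {
    my $start = $s->snapshot_list_info_iter_start(
        'object-name-or-id' => 'vol1',   # hypothetical volume
    );
    my ($tag, $remaining) = ($start->{tag}, $start->{records});
    while ($remaining > 0) {
        my $chunk = $s->snapshot_list_info_iter_next(
            'tag'     => $tag,
            'maximum' => 50,
        );
        last if $chunk->{records} == 0;   # 0 records indicates the end
        $remaining -= $chunk->{records};
        # process the Snapshot copy records from $chunk here
    }
    $s->snapshot_list_info_iter_end('tag' => $tag);
};
print "Error: $@\n" if $@;
```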
Add an SRM file type in dfm.
Inputs
- srm-file-type-name => string
SRM file type. If an invalid file type is provided, an error is returned. The following characters are not allowed in a file type: /, \, *, ?, <, >, |. Also, a file type can start with a '.' (dot) but must have at least one more character, and cannot have a '.' (dot) within the file name.
Outputs
Delete the SRM file type in dfm. If any entry in the input is invalid, then none of the file types will be deleted and an error will be returned.
Inputs
Outputs
Returns all the SRM file types in dfm.
Inputs
Outputs
Create a new storage service.
Inputs
- group-name-or-id => obj-name-or-id, optional
Resource group to which the newly created storage service will be added. The user should have DFM.Service.Write capability on the specified group. Default value: Global group.
- protection-policy-name-or-id => obj-name-or-id, optional
Name or identifier of the protection policy to associate with this storage service. The data protection license is required for this input.
- storage-service-name => obj-name
Name of the new storage service. It cannot be all numeric. The allowed characters are a-z, A-Z, 0-9, ' ' (space), '.' (period), '_' (underscore), and '-' (hyphen). If any other characters are included, an error is returned.
Outputs
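A sketch of creating a storage service with a protection policy attached; both names are hypothetical, and the policy input requires the data protection license as noted above:

```perl
eval {
    $s->storage_service_create(
        'storage-service-name'         => 'Gold Service', # hypothetical name
        'protection-policy-name-or-id' => 'Mirror',       # hypothetical policy;
                                                          # needs the data protection license
    );
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```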
Ends iteration of the list of datasets associated with a storage service.
Inputs
- tag => string
Tag from a previous storage-service-dataset-list-iter-start.
Outputs
Get next few records in the iteration started by storage-service-dataset-list-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
Tag from a previous storage-service-dataset-list-iter-start.
Outputs
- records => integer
The number of records actually returned.
Lists the association of datasets with storage services. If no service/dataset name or id is provided, then all datasets with no storage service are listed.
Inputs
- object-name-or-id => obj-name-or-id, optional
Name or identifier of the storage service, resource group or dataset. If a resource group is provided, storage services and datasets associated with them are returned. If this input is not provided then all datasets with no storage service are listed.
Outputs
- records => integer
Number of records that have been saved for future retrieval with storage-service-dataset-list-iter-next.
- tag => string
Tag to be used in subsequent calls to storage-service-dataset-list-iter-next.
Attach a storage service to a dataset, or detach or clear it from the dataset.
Inputs
- assume-confirmation => boolean, optional
Value determining whether confirmation is given for all resolvable conformance actions that require user confirmation. If the value is true, all conformance actions which require user confirmation will be executed as if confirmation is already granted. If the value is false, all conformance actions which require user confirmation will not be executed. One key, and sometimes undesirable, resolvable action that requires user confirmation is the possible re-baseline of a relationship. By default, assume-confirmation is false.
- dataset-name-or-id => obj-name-or-id
Name or identifier of the dataset object
- dry-run => boolean, optional
If true, return the dry-run-results list as well as the conformance-alerts list. The dry-run-results list contains actions that would be taken should the changes be committed without actually committing the changes. The conformance-alerts list contains high level alerts to notify a user of conditions that will impact any attempt to commit the changes. A conformance alert may warn that if the changes are committed, one or more rebaseline operations may be done. The conformance alerts may also warn of conditions that exist that may prevent the successful conformance of services. By default, dry-run is false.
- include-dry-run-reason-details => boolean, optional
If true or omitted, then include any possible dry-run-reason-details along with the associated dry-run-result element. Default value is true. If false, the dry-run-reason-details will not be returned.
- operation-type => string
Possible values: "attach", "detach", or "clear". "detach" means: for storage datasets, detach the storage service, but leave the dp policy and resource pools attached to the dataset. For datasets containing application resources or having an application policy, detaching the storage service is not allowed, since these datasets cannot have a dp policy or resource pools directly attached; in that case an error is returned. "clear" means to detach the storage service, as well as detach the dp policy and resource pools. It can have some destructive side effects, such as destroying protection relationships.
- storage-service-node-list => storage-service-node-attributes[], optional
Information about dp policy node mapping. This input is considered only for 'attach' operation to specify the node mapping. Specify only dp-node-name and old-dp-node-name. All other elements of this input will be ignored, if specified.
Outputs
- conformance-alerts => conformance-alert[], optional
Alerts that apply to the conformance check. Each alert describes one type of condition that a user should be aware of before attempting to conform any more services. Only returned if dry-run is true.
- dry-run-results => dry-run-result[], optional
Results of a dry run. Each result describes one action the system would take and the predicted effects of that action. Only returned if dry-run is true.
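Because "clear" and even "detach" can have destructive side effects, a dry run is a safe first step. A sketch of previewing a detach; the dataset name is hypothetical, and the dry-run-results output is assumed to arrive as an array reference in the returned hash:

```perl
eval {
    my $output = $s->storage_service_dataset_modify(
        'dataset-name-or-id' => 'ds1',    # hypothetical dataset
        'operation-type'     => 'detach',
        'dry-run'            => 'true',   # preview only; nothing is committed
    );
    my $results = $output->{'dry-run-results'};
    my $n = (ref $results eq 'ARRAY') ? scalar @$results : 0;
    print "Dry run predicts $n action(s)\n";
};
print "Error: $@\n" if $@;
```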
Creates a dataset with the specified storage service.
Inputs
- application-info => application-info, optional
If is-application-data is true, then this element will contain information about the application which manages this dataset.
- assume-confirmation => boolean, optional
Value determining whether confirmation is given for all resolvable conformance actions that require user confirmation. If the value is true, all conformance actions which require user confirmation will be executed as if confirmation is already granted. If the value is false, all conformance actions which require user confirmation will not be executed. Default value is false.
- dataset-contact => email-address-list, optional
Contact for the dataset, such as the owner's e-mail address.
- dataset-description => string, optional
Description of the new dataset, up to 255 characters long.
- dataset-metadata => dfm-metadata-field[], optional
Opaque metadata for dataset. Metadata is usually set and interpreted by an application that is using the dataset. DFM does not look into the contents of the metadata.
- dataset-name => obj-name
Name of the new dataset. It cannot be all numeric. The allowed characters are a-z, A-Z, 0-9, ' ' (space), '.' (period), '_' (underscore), and '-' (hyphen). If any other characters are included, an error is returned.
- dataset-owner => string, optional
Name of the owner of the dataset, up to 255 characters long.
- dry-run => boolean, optional
If true, return the dry-run-results list as well as the conformance-alerts list. The dry-run-results list contains actions that would be taken should the changes be committed without actually committing the changes. The conformance-alerts list contains high level alerts to notify a user of conditions that will impact any attempt to commit the changes. A conformance alert may warn that if the changes are committed, one or more rebaseline operations may be done. The conformance alerts may also warn of conditions that exist that may prevent the successful conformance of services. By default, dry-run is false.
- group-name-or-id => obj-name-or-id, optional
Resource group to which the newly created dataset should be added. The user should have DFM.Dataset.Write capability on the specified group. Default value: Global group.
- include-dry-run-reason-details => boolean, optional
If true or omitted, then include any possible dry-run-reason-details along with the associated dry-run-result element. Default value is true. If false, the dry-run-reason-details will not be returned.
- is-application-data => boolean, optional
If true, the dataset is an application dataset managed by an external application. Default value is false.
- is-suspended => boolean, optional
True if an administrator has chosen to suspend this dataset for all automated actions (data protection and conformance check of the dataset). Default is false.
- online-migration => boolean, optional
Indicates that the migration cutover has to be non-disruptive. By default, the migration will be assumed to be disruptive. This applies only to the vFiler unit to be created or attached to the primary node of the dataset. If provided in input, either dataset-access-details or server-name-or-id should be provided in storage-set-info for the primary node.
- requires-non-disruptive-restore => requires-non-disruptive-restore, optional
Specifies whether the dataset should be configured to enable non-disruptive restores from backup destinations. Default value is false.
- storage-service-name-or-id => obj-name-or-id
Name or object identifier of a storage service object.
- volume-qtree-name-prefix => string, optional
Prefix for volume and qtree names, up to 60 characters long. The allowed characters are a-z, A-Z, 0-9, ' ' (space), '.' (period), '_' (underscore), and '-' (hyphen). If any other characters are included, an error is returned.
Outputs
- conformance-alerts => conformance-alert[], optional
Alerts that apply to the conformance check. Each alert describes one type of condition that a user should be aware of. Only returned if dry-run is true.
- dataset-id => obj-id
Identifier of the newly provisioned dataset.
- dry-run-results => dry-run-result[], optional
Results of a dry run. Each result describes one action the system would take and the predicted effects of that action. Only returned if dry-run is true.
Destroy a storage service. To destroy a storage service containing datasets, the force option must be supplied.
Inputs
- force => boolean, optional
If true, allows destroying a storage service that is attached to datasets. By default, only storage services that are not associated with any datasets can be destroyed. The force option works only for storage services whose attached datasets are plain storage datasets; if any attached dataset contains application resources or has an application policy assigned, the force option will not work, because destroying the service would destroy the relationships. In that case, users must manually clear the storage service from those datasets before the storage service can be destroyed.
- storage-service-name-or-id => obj-name-or-id
Name or identifier of the storage service to destroy.
Outputs
Ends iteration to list storage services.
Inputs
- tag => string
Tag from a previous storage-service-list-info-iter-start.
Outputs
Get next few records in the iteration started by storage-service-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
Tag from a previous storage-service-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned.
Starts iteration to list storage services.
Inputs
- is-dr-capable => boolean, optional
If true, return only storage services whose data protection policies are disaster recovery capable. If false, return only storage services whose data protection policies are not disaster recovery capable, or which do not contain any data protection policy. By default, all storage services are returned.
- object-name-or-id => obj-name-or-id, optional
Name or identifier of a storage service or resource group. If a resource group is given, only the storage services which are direct members of the group are returned.
Outputs
- records => integer
Number of records that have been saved for future retrieval with storage-service-list-info-iter-next.
- tag => string
Tag to be used in subsequent calls to storage-service-list-info-iter-next.
Modify attributes for a storage service.
Inputs
- assume-confirmation => boolean, optional
Value determining whether confirmation is given for all resolvable conformance actions that require user confirmation. If the value is true, all conformance actions which require user confirmation will be executed as if confirmation is already granted. If the value is false, all conformance actions which require user confirmation will not be executed. One key, and sometimes undesirable, resolvable action that requires user confirmation is the possible re-baseline of a relationship. By default, assume-confirmation is false.
- dry-run => boolean, optional
If true, return the dry-run-results list as well as the conformance-alerts list. The dry-run-results list contains actions that would be taken should the changes be committed without actually committing the changes. The conformance-alerts list contains high level alerts to notify a user of conditions that will impact any attempt to commit the changes. A conformance alert may warn that if the changes are committed, one or more rebaseline operations may be done. The conformance alerts may also warn of conditions that exist that may prevent the successful conformance of services. By default, dry-run is false.
- include-dry-run-reason-details => boolean, optional
If true or omitted, then include any possible dry-run-reason-details along with the associated dry-run-result element. Default value is true. If false, the dry-run-reason-details will not be returned.
- protection-policy-name-or-id => obj-name-or-id, optional
Name or identifier of the protection policy to associate with this storage service. The data protection license is required for this input.
- storage-service-contact => email-address-list, optional
Contact for the storage service, such as the owner's e-mail address.
- storage-service-description => string, optional
Description of the storage service, up to 255 characters long.
- storage-service-name => obj-name
null
- storage-service-name-or-id => obj-name-or-id
Name or object identifier of a storage service object.
- storage-service-owner => string, optional
Name of the owner of the storage service, up to 255 characters long.
Outputs
- conformance-alerts => conformance-alert[], optional
Alerts that apply to the conformance check. Each alert describes one type of condition that a user should be aware of before attempting to conform any more services. Only returned if dry-run is true.
- dry-run-results => dry-run-result[], optional
Results of a dry run. Each result describes one action the system would take and the predicted effects of that action. Only returned if dry-run is true.
Retrieves the default time zone settings, including the time zone in which the server runs and the time zone to use for objects that don't specify their own time zone.
Inputs
Outputs
Ends iteration to list time zones.
Inputs
- tag => string
Tag from a previous timezone-list-info-iter-start.
Outputs
Get the next few records in the iteration started by timezone-list-info-iter-start.
Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
Tag from a previous timezone-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned.
Starts iteration to list time zones from the internal database of time zone information.
Inputs
Outputs
- records => integer
Number of items that have been saved for future retrieval with timezone-list-info-iter-next.
- tag => string
Tag to be used in subsequent calls to timezone-list-info-iter-next.
Determines if a time zone specification is valid. The specification may be the name of a time zone returned from timezone-list-info-iter-next, or a POSIX-style time zone specification.
Inputs
Outputs
- utc-offset => integer
Current offset, in seconds, between this time zone and UTC. Range: -43200 to 50400. Negative values are west of UTC; positive values are east of UTC.
Deletes favorite reports of a user or all users.
Inputs
- user-name => string, optional
Name of the user whose favorite reports are to be deleted. If not specified, the favorite reports of all users will be deleted. For Windows, domain name\user name is required.
Outputs
The user-favorite-report-list-info-iter-* set of APIs are used to retrieve the list of user favorite report counts. user-favorite-report-list-info-iter-end is used to tell the DFM station that the temporary store used by DFM to support the user-favorite-report-list-info-iter-next API for the particular tag is no longer necessary.
Inputs
- tag => string
An internal opaque handle used by the DFM station
Outputs
For more documentation, see user-favorite-report-list-info-iter-start. The user-favorite-report-list-info-iter-next API is used to iterate over the members of user favorite report counts stored in the temporary store created by the user-favorite-report-list-info-iter-start API.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous user-favorite-report-list-info-iter-start. It's an opaque handle used by the DFM station to identify the temporary store created by user-favorite-report-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
The user-favorite-report-list-info-iter-* set of APIs are used to retrieve the number of favorite reports of the specified user or all users. It loads the list of user favorite report counts into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the list in the temporary store. If user-favorite-report-list-info-iter-start is invoked twice, then two distinct temporary stores are created.
Inputs
- user-name => string, optional
Name of the user. The user can be a dfm or non-dfm user. If not specified, all users are considered. For Windows, domain name\user name is required.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with user-favorite-report-list-info-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to user-favorite-report-list-info-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
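The favorite-report counts are fetched with the same start/next/end pattern. A sketch that retrieves the counts for all users in a single next call; the hash-reference output convention follows the dfm-about example:

```perl
eval {
    my $start = $s->user_favorite_report_list_info_iter_start();  # all users
    my $tag = $start->{tag};
    if ($start->{records} > 0) {
        my $chunk = $s->user_favorite_report_list_info_iter_next(
            'tag'     => $tag,
            'maximum' => $start->{records},  # maximum must be at least 1
        );
        print "Fetched $chunk->{records} favorite report count record(s)\n";
    }
    # release the temporary store identified by the tag
    $s->user_favorite_report_list_info_iter_end('tag' => $tag);
};
print "Error: $@\n" if $@;
```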
Deletes recently viewed reports of a user or all users.
Inputs
- user-name => string, optional
Name of the user whose recently viewed reports are to be deleted. If not specified, the recently viewed reports of all users will be deleted. For Windows, domain name\user name is required.
Outputs
The user-recent-report-list-info-iter-* set of APIs are used to retrieve the list of the user's recently viewed report counts. user-recent-report-list-info-iter-end is used to tell the DFM station that the temporary store used by DFM to support the user-recent-report-list-info-iter-next API for the particular tag is no longer necessary.
Inputs
- tag => string
An internal opaque handle used by the DFM station
Outputs
For more documentation, see user-recent-report-list-info-iter-start. The user-recent-report-list-info-iter-next API is used to iterate over the members of the user's recently viewed report counts stored in the temporary store created by the user-recent-report-list-info-iter-start API.
Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous user-recent-report-list-info-iter-start. It's an opaque handle used by the DFM station to identify the temporary store created by user-recent-report-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
The user-recent-report-list-info-iter-* set of APIs are used to retrieve the number of recently viewed reports of the specified user or all users. It loads the list of the user's recently viewed report counts into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the list in the temporary store. If user-recent-report-list-info-iter-start is invoked twice, then two distinct temporary stores are created.
Inputs
- user-name => string, optional
Name of the user. The user can be a DFM or non-DFM user. If not specified, all users are included. For Windows, the name must be in domain name\user name format.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with user-recent-report-list-info-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to user-recent-report-list-info-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
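The three APIs above follow the start/next/end iteration pattern common to this SDK. A minimal Perl sketch of that pattern, in the style of the dfm-about example earlier (it assumes $s is an NaServer object already set up for DFM; the underscore subroutine names and the paired-argument calling convention are assumptions derived from the hyphenated API names):

```perl
eval {
    # start the iteration; DFM creates a temporary store and returns a tag
    my $start   = $s->user_recent_report_list_info_iter_start();
    my $tag     = $start->{tag};
    my $total   = $start->{records};
    my $fetched = 0;
    while ($fetched < $total) {
        # fetch up to 50 entries at a time from the temporary store
        my $chunk = $s->user_recent_report_list_info_iter_next(
            'tag', $tag, 'maximum', 50);
        last if $chunk->{records} == 0;
        $fetched += $chunk->{records};
        # ... process the records in $chunk here ...
    }
    # tell DFM the temporary store for this tag is no longer needed
    $s->user_recent_report_list_info_iter_end('tag', $tag);
};
print "Error: $@\n" if $@;
```

Calling iter-end even after a partial iteration releases the temporary store on the DFM station.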
Create a new vFiler on a storage system. A vFiler can be created by either: - Specifying the filer on which to create it.
- Specifying a resource pool. In this case, a filer is selected from the resource pool, based on the required licenses and a built-in resource selection algorithm that evenly balances space and load in the resource pool, and the vFiler is created on the selected filer.
Error conditions: - EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to create a vFiler.
- EOBJECTNOTFOUND - When the specified resource-name-or-id is not found.
- EFILERNOTFOUND - No storage system could be found on which a vFiler could be created.
- EINVALIDINPUT - Invalid input provided for certain fields.
- EINVALIDMEMBERTYPE - When the filer is in c-mode.
- EMULTISTORENOTLICENSED - When resource-name-or-id corresponds to a filer that does not have multistore licensed.
Inputs
- dry-run => boolean, optional
Check if a vFiler can be created on the given resource-name-or-id. The hosting filer on which the vFiler is created will be returned. If there are any errors, because of which a vFiler cannot be created on the given resource-name-or-id, the API will throw an error. No changes are made to the resource-name-or-id when dry-run is true. Default value is false.
- ip-address => ip-address, optional
IP address of the new vFiler. This element is required except when the API is run in dry-run mode.
- ipspace => string, optional
IP Space of the new vFiler. Only alphanumeric characters, hyphens, and underscores are allowed. It can be a maximum of 64 characters. If not specified, the vFiler is created in "default-ipspace".
- name => obj-name
Name of the new vFiler to be created. Only alphanumeric characters, hyphens, and underscores are allowed. It can be a maximum of 64 characters.
- resource-name-or-id => obj-name-or-id
Name or identifier of the hosting filer or resource pool from which to provision a new vFiler.
Outputs
- filer-id => obj-id
Identifier of the hosting filer on which the vFiler was created, or will be created when in dry-run mode.
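A sketch of a dry-run call to this API, following the binding style of the dfm-about example in the overview (the vFiler name, IP address, and resource pool name below are hypothetical, and the paired-argument calling convention is an assumption):

```perl
eval {
    my $output = $s->vfiler_create(
        'name',                'vf1',        # hypothetical vFiler name
        'ip-address',          '10.0.0.50',  # hypothetical IP address
        'resource-name-or-id', 'rpool1',     # hypothetical resource pool
        'dry-run',             'true');      # validate only; nothing is changed
    print "vFiler would be created on filer-id: $output->{'filer-id'}\n";
};
if ($@) {
    # parse out error reason and code, as in the dfm-about example
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```

Repeating the same call with 'dry-run' set to 'false' (or omitted) performs the actual provisioning.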
Destroy a vFiler. This API stops and then destroys the vFiler on the hosting filer and marks the vFiler as deleted in DFM. Storage resources owned by the vFiler are not destroyed; they are owned by the hosting filer after the vFiler is destroyed. Error conditions: - EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to destroy the vFiler.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous.
- EOBJECTNOTFOUND - When the specified vFiler is not found.
Inputs
- vfiler-name-or-id => obj-name-or-id
Name or identifier of the vFiler to be destroyed.
Outputs
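A short sketch of a destroy call with the error handling pattern from the overview (the vFiler name is hypothetical, and the paired-argument calling convention is an assumption):

```perl
eval {
    $s->vfiler_destroy('vfiler-name-or-id', 'vf1');  # hypothetical vFiler name
    # after this, the vFiler's storage is owned by the hosting filer
    print "vFiler destroyed\n";
};
if ($@) {
    # e.g. EOBJECTNOTFOUND when 'vf1' does not exist
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```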
Configure and set up a vFiler based on a specified vFiler template. Depending on the input, a CIFS setup is also done on the vFiler. Error conditions: - EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to set up the vFiler.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous.
- EOBJECTNOTFOUND - When the specified vFiler is not found.
- EINVALIDINPUT - Invalid input provided.
Inputs
- allowed-protocols => protocol[], optional
List of protocols that need to be allowed access to the vFiler. This list overrides the list of protocols currently allowed on the vFiler. If not present, no changes are made to the allowed protocols of the vFiler.
- cifs-domain-password => string, optional, encrypted
Password for cifs-domain-user. Encrypted using 2-way encryption.
Applicable and mandatory if cifs-auth-type is set to "active_directory" in vfiler template specified in vfiler-template-name-or-id. Default value is empty.
- cifs-domain-user => string, optional
Name of CIFS domain user that has the ability to add the CIFS server to the domain given in cifs-domain-name in vfiler template specified in vfiler-template-name-or-id. Examples: username (assumes domain-name is the user's domain), cifsdomain\username, cifs.domain.com\username. Applicable and mandatory if cifs-auth-type is set to "active_directory" in vfiler template specified in vfiler-template-name-or-id. Default value is empty.
- cifs-workgroup-name => string, optional
CIFS workgroup name of the new vFiler. If vfiler-template-name-or-id is specified, this field is applicable only if cifs-auth-type is "workgroup" in the specified vfiler-template. This field is also applicable when vfiler-template-name-or-id is not given, but CIFS setup needs to be done on the vFiler. Default value is "WORKGROUP", and this value is used during CIFS setup of the vFiler if run-cifs-setup is true.
- ip-bindings => ip-binding-info[], optional
IP Address to interface binding information.
- root-password => string, optional, encrypted
Root password of the new vFiler. This will be the password for a new root account that will be created on the vFiler. If a root account already exists on this vFiler, then the password for the root account will not be changed. Encrypted using 2-way encryption.
Length: [0..64] Default value is empty. Ignored if ip-bindings element is not present.
- run-cifs-setup => boolean, optional
Indicates whether CIFS setup should be performed on the vFiler unit. Note that if this is true and CIFS is already running on the vFiler unit, CIFS service will be stopped, and then a setup will be performed. Default value is FALSE.
- script-path => string, optional
Path to a script that will be run in pre setup and post setup mode when setting up the vFiler unit.
- vfiler-name-or-id => obj-name-or-id
Full name or identifier of the vFiler to be set up.
- vfiler-template-name-or-id => obj-name-or-id, optional
Name or identifier of the vFiler template. The vFiler is set up based on the settings specified in the template. If this is not specified, then DNS and NIS servers cannot be configured for the vFiler. Also, a CIFS setup cannot be done using active_directory. Ignored if the ip-bindings element is not present and run-cifs-setup is false.
Outputs
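A sketch of a workgroup-mode setup call using only the scalar inputs described above (the vFiler name is hypothetical, and the paired-argument calling convention is an assumption; structured inputs such as ip-bindings are omitted here):

```perl
eval {
    $s->vfiler_setup(
        'vfiler-name-or-id',   'vf1',         # hypothetical vFiler
        'run-cifs-setup',      'true',        # stop CIFS if running, then set up
        'cifs-workgroup-name', 'WORKGROUP');  # workgroup-mode CIFS setup
    print "vFiler setup completed\n";
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```

Active Directory CIFS setup additionally requires a vfiler-template-name-or-id plus cifs-domain-user and cifs-domain-password, as documented above.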
Creates a new vFiler template by copying all settings from an existing vFiler template. Error conditions: - EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have privileges to create the vfiler template.
- EVFILERTEMPLATEEXISTS - A vFiler template already exists with this name.
- EOBJECTNOTFOUND - When the specified vFiler template does not exist.
Inputs
- vfiler-template-name-or-id => obj-name-or-id
The name or identifier of an existing vFiler template that is copied to create a new vFiler template.
Outputs
Creates a vFiler template. A vFiler template contains configuration information that is used during vFiler setup. Error conditions: - EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have privileges to create the vFiler template.
- EVFILERTEMPLATEEXISTS - A vFiler template already exists with this name.
- EINVALIDINPUT - Input validation failed.
Inputs
Outputs
- vfiler-template-id => obj-id
Identifier of the newly created vFiler template.
Deletes the vFiler template. Error conditions: - EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have capabilities to delete the vFiler template.
- EOBJECTNOTFOUND - When the specified vFiler template does not exist.
Inputs
- vfiler-template-name-or-id => obj-name-or-id
Name or identifier of the vFiler template.
Outputs
Ends iteration of vFiler templates.Inputs
- tag => string
An internal opaque handle used by the DFM station.
Outputs
Get next records in the iteration started by vfiler-template-list-info-iter-start.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous vfiler-template-list-info-iter-start. It's an opaque handle used by the DFM station to identify the temporary store created by vfiler-template-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned.
Lists vFiler templates. Error conditions: - EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have required capabilities to list vFiler templates.
- EOBJECTAMBIGUOUS - When the specified object name is ambiguous.
- EOBJECTNOTFOUND - When the specified vFiler template does not exist.
Inputs
- vfiler-template-name-or-id => obj-name-or-id, optional
Name or identifier of the vFiler template. If not specified, it lists all vFiler templates.
Outputs
- records => integer
Number of vFiler templates saved for future retrieval with vfiler-template-list-info-iter-next. Range: [1..2^31-1]
- tag => string
Tag to be used in subsequent calls to vfiler-template-list-info-iter-next. It is an opaque handle used by the DFM station to identify a temporary store.
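The template listing APIs above can be combined in the usual start/next/end sequence. A minimal sketch (assumes $s is an NaServer object set up for DFM; the underscore subroutine names and paired-argument convention are assumptions):

```perl
eval {
    my $start = $s->vfiler_template_list_info_iter_start();
    print "vFiler templates found: $start->{records}\n";
    if ($start->{records} > 0) {
        # fetch all templates in one batch
        my $next = $s->vfiler_template_list_info_iter_next(
            'tag', $start->{tag}, 'maximum', $start->{records});
        # ... process the template records in $next here ...
    }
    # release the temporary store
    $s->vfiler_template_list_info_iter_end('tag', $start->{tag});
};
print "Error: $@\n" if $@;
```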
Modifies the settings in a vFiler template. Error conditions: - EDATABASEERROR - A database error occurred while processing the request.
- EACCESSDENIED - User does not have capabilities to modify the vFiler template.
- EVFILERTEMPLATEEXISTS - A vFiler template already exists with this name.
- EINVALIDINPUT - Input validation failed.
Inputs
Outputs
Ends iteration to list Data Centers.Inputs
- tag => string
Tag from a previous vi-datacenter-list-info-iter-start.
Outputs
Get the list of Data Center records.Inputs
- maximum => integer
The maximum number of records to retrieve. Range: [1..2^31-1].
- tag => string
Tag from a previous vi-datacenter-list-info-iter-start
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1].
List Data Centers discovered in DataFabric Manager Server.Inputs
- include-deleted => boolean, optional
If present and true, Data Centers marked as deleted in the database are also returned. Data Centers are marked as deleted if they are destroyed from the Virtual Center Server.
- is-unprotected => boolean, optional
If present and true, only unprotected members are returned, that is, members that are not present in any dataset that has an application policy. If false, or not specified, all Data Centers are listed.
- obj-name-or-id => obj-name-or-id, optional
Object for which the list of Data Centers needs to be retrieved. If the obj-name-or-id is not specified then all the Data Centers are returned. The valid types of object that can be specified here are :- - Virtual Center
- Data Center
- Dataset
- Resource Group
Outputs
- records => integer
Number of entities available in the list. Range: [0..2^31-1]
- tag => string
Tag to be used for subsequent iteration calls.
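The Data Center listing APIs follow the same iteration pattern as the other vi-* APIs. A minimal sketch (assumes $s is an NaServer object set up for DFM; the underscore subroutine names and paired-argument convention are assumptions):

```perl
eval {
    # exclude Data Centers marked as deleted in the database
    my $start = $s->vi_datacenter_list_info_iter_start('include-deleted', 'false');
    my $tag   = $start->{tag};
    if ($start->{records} > 0) {
        my $next = $s->vi_datacenter_list_info_iter_next(
            'tag', $tag, 'maximum', 100);
        # ... process the Data Center records in $next here ...
    }
    $s->vi_datacenter_list_info_iter_end('tag', $tag);
};
print "Error: $@\n" if $@;
```

The same shape applies to the vi-datastore, vi-hypervisor, vi-virtual-center, and vi-virtual-disk iterators below.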
Ends iteration to list datastores.Inputs
- tag => string
Tag from a previous vi-datastore-list-info-iter-start.
Outputs
Get the list of Datastore records.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous vi-datastore-list-info-iter-start
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
List Datastores discovered in DataFabric Manager Server.Inputs
- include-deleted => boolean, optional
If present and true, Datastores marked as deleted in the database are also returned. Datastores are marked as deleted if they are destroyed from the ESX Server.
- is-unprotected => boolean, optional
If present and true, only unprotected members are returned, that is, members that are not present in any dataset that has an application policy. If false or not specified, all Datastores are listed.
- obj-name-or-id => obj-name-or-id, optional
Object for which the list of Datastores needs to be retrieved. If the obj-name-or-id is not specified, then all the Datastores are returned. The valid types of object that can be specified here are :- - Virtual Center
- Data Center
- Virtual Machine
- ESX server
- Dataset
Outputs
- records => integer
Number of entities available in the list. Range: [0..2^31-1]
- tag => string
Tag to be used for subsequent iteration calls.
Ends iteration to list hypervisors.Inputs
- tag => string
Tag from a previous vi-hypervisor-list-info-iter-start.
Outputs
Get the list of Hypervisor records.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous call to vi-hypervisor-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Start iteration of Hypervisors discovered in DataFabric Manager server.Inputs
- include-deleted => boolean, optional
If present and true, hypervisors which are marked as deleted in the database are also returned. Hypervisors are marked as deleted if they are no longer managed by the Host Service (for example decommissioned Hypervisors).
- obj-name-or-id => obj-name-or-id, optional
Name or identifier of an object for which the list of hypervisors needs to be retrieved. The valid types of object are :- - Virtual Center
- Data Center
- Hypervisor
- Datastore
- Virtual Machine
- Dataset
- Resource Group
If the obj-name-or-id is not specified, then all the hypervisors are returned.
Outputs
- records => integer
Number of entities available in the list. Range: [0..2^31-1]
- tag => string
Tag to be used for subsequent iteration calls.
Ends iteration to list Virtual Centers.Inputs
- tag => string
Tag from a previous vi-virtual-center-list-info-iter-start.
Outputs
Get the list of Virtual Center records.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous vi-virtual-center-list-info-iter-start
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
List Virtual Center Servers discovered in DataFabric Manager Server.Inputs
- include-deleted => boolean, optional
If present and true, Virtual Centers marked as deleted in the database are also returned. Virtual Centers are marked as deleted if they are disassociated from the Host Service, at which point the Host Service stops managing the resources of the Virtual Center.
- obj-name-or-id => obj-name-or-id, optional
Name or Id of the Virtual Center Server or a Resource Group. If not specified, then all the Virtual Centers will be returned.
Outputs
- records => integer
Number of entities available in the list. Range: [0..2^31-1]
- tag => string
Tag to be used for subsequent iteration calls.
Ends iteration to list virtual disks.Inputs
- tag => string
Tag from a previous vi-virtual-disk-list-info-iter-start.
Outputs
Get the list of Virtual Disk records.Inputs
- maximum => integer
The maximum number of records to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous vi-virtual-disk-list-info-iter-start
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
List Virtual Disks discovered in DataFabric Manager Server.Inputs
- include-deleted => boolean, optional
If present and true, Virtual Disks marked as deleted in the database are also returned. Virtual Disks are marked as deleted if they are deleted from the Virtual Machine.
- obj-name-or-id => obj-name-or-id, optional
Name or identifier of an object for which the list of Virtual Disks needs to be retrieved. The valid types of object are :- - Virtual Center
- Data Center
- Hypervisor
- Virtual Machine
- Dataset
- Resource Group
If the obj-name-or-id is not specified, all the Virtual Disks discovered are returned.
Outputs
- records => integer
Number of entities available in the list. Range: [0..2^31-1]
- tag => string
Tag to be used for subsequent iteration calls.
Ends iteration to list Virtual Machines.Inputs
- tag => string
Tag from a previous vi-virtual-machine-list-info-iter-start.
Outputs
Get the list of virtual machines records.Inputs
- maximum => integer
The maximum number of entries to retrieve. Range: [1..2^31-1]
- tag => string
Tag from a previous call to vi-virtual-machine-list-info-iter-start
Outputs
- records => integer
The number of records actually returned. Range: [1..2^31-1]
Start iteration of Virtual Machines discovered in DataFabric Manager server.Inputs
- include-deleted => boolean, optional
If present and true, Virtual Machines marked as deleted in the database are also returned. Virtual Machines are marked deleted if they no longer exist on the Hypervisor managed by the Host Service.
- is-unprotected => boolean, optional
If present and true, only unprotected members are returned, that is, members that are not present in any dataset that has an application policy. If false or not specified, all virtual machines are listed.
- obj-name-or-id => obj-name-or-id, optional
Name or identifier of an object for which the list of Virtual Machines needs to be retrieved. If the obj-name-or-id is not specified, all the virtual machines are returned. The possible types are :- - Virtual Center
- Data Center
- Datastore
- HyperVisor
- Dataset
- Resource Group
- virtual-infrastructure-type => virtual-infrastructure-type, optional
Input filter for virtual server infrastructure type. If specified, only the Virtual Machines of the specified virtual server infrastructure type are listed.
Outputs
- records => integer
Number of records available in the list. Range: [0..2^31-1]
- tag => string
Tag to be used for subsequent iteration calls.
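The is-unprotected filter above makes this iterator useful for finding Virtual Machines that still need a protection policy. A minimal sketch (assumes $s is an NaServer object set up for DFM; the underscore subroutine names and paired-argument convention are assumptions):

```perl
eval {
    # only VMs not present in any dataset that has an application policy
    my $start = $s->vi_virtual_machine_list_info_iter_start(
        'is-unprotected', 'true');
    print "Unprotected Virtual Machines: $start->{records}\n";
    # release the temporary store without iterating further
    $s->vi_virtual_machine_list_info_iter_end('tag', $start->{tag});
};
print "Error: $@\n" if $@;
```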
Dedupe a volume.Inputs
- volume-name-or-id => obj-name-or-id
Name or identifier of the volume to dedupe.
Outputs
- job-id => integer
Identifier of job started for the deduplication request.
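Since this API returns a job identifier rather than completing synchronously, a caller typically starts the dedupe and then tracks the job. A minimal sketch (the volume name is hypothetical, and the paired-argument calling convention is an assumption):

```perl
eval {
    my $output = $s->volume_dedupe(
        'volume-name-or-id', 'myhost:/vol1');  # hypothetical volume
    # the request runs asynchronously under this job
    print "Dedupe started as job: $output->{'job-id'}\n";
};
if ($@) {
    my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $error_reason Code: $error_code\n";
}
```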
Destroy a volume.Inputs
- volume-name-or-id => obj-name-or-id
Name or identifier of the volume to destroy.
Outputs
- job-id => integer
Identifier of job started for the destroy request.
Ends iteration to list volumes.Inputs
- tag => string
Tag from a previous volume-list-info-iter-start.
Outputs
Get next few records in the iteration started by volume-list-info-iter-start.Inputs
- maximum => integer
The maximum number of entries to retrieve.
- tag => string
Tag from a previous volume-list-info-iter-start.
Outputs
- records => integer
The number of records actually returned.
Starts iteration to list volumes.Inputs
- block-type => file-system-block-type, optional
Filter by file system block type of the volume. If no block-type input is supplied, all types of volumes will be listed.
- include-is-available => boolean, optional
If true, the is-available status is calculated for each volume, which may make this ZAPI call take much longer. Default is false.
- include-migration-info => boolean, optional
If true, is-capable-of-migration information for each volume is returned in volume-list-info-iter-next. If is-capable-of-migration information returned for the volume is false, migration-incompatibility-reasons is returned. Default value: false.
- is-direct-member-only => boolean, optional
If true, only return the volumes that are direct members of the specified resource group. Default value is false. This field is meaningful only if a resource group name or id is given for the object-name-or-id field.
- is-dp-ignored => boolean, optional
If true, only list volumes that have been set to be ignored for purposes of data protection. If false, only list volumes that have not been set to be ignored for purposes of data protection. If not specified, list all volumes without taking into account whether they have been ignored or not.
- is-in-dataset => boolean, optional
If true, only list volumes that contain only data that is protected by a dataset. If false, only list volumes containing data that is not protected by a dataset. If not specified, list all volumes whether they are in a dataset or not.
- is-sis-volume => boolean, optional
If true, only list Single Instance Storage (SIS) volumes. A SIS volume is a flexible volume that contains shared blocks; it can have multiple references to the same data block. Each block has a block reference count stored in the volume metadata; as additional indirect blocks point to it, or existing ones stop pointing to it, this value is incremented or decremented accordingly. If false, only list volumes that are not SIS volumes. If not specified, list all volumes whether they are SIS volumes or not.
- is-snapmirror-secondary-capable => boolean, optional
If present and true, only list volumes capable of being a secondary for a SnapMirror relationship. This means the storage system is licensed for SnapMirror and the volume is not already a SnapMirror or SnapVault secondary. Meaningless if present and false.
- is-snapvault-secondary-capable => boolean, optional
If present and true, only list volumes capable of being the destination of SnapVault transfers. This means the storage system is licensed as a SnapVault secondary and the volume is not a SnapMirror destination. Meaningless if present and false.
- is-unprotected => boolean, optional
If true, only list volumes that are not protected, which means they are not in any SnapMirror or SnapVault relationship. If false or not set, list all volumes.
- object-management-filter => object-management-interface, optional
Filter the objects based on the Data ONTAP interface that provides complete management for the object (for example, ONTAP CLI, SNMP, or ONTAPI). If no filter is supplied, all objects are considered.
- object-name-or-id => string, optional
Name or identifier of an object to list volumes for. The allowed object types for this argument are: - Resource Group
- Dataset
- Storage Set
- Host
- Aggregate
- Volume
- Qtree
- GenericAppObject
If object-name-or-id identifies a volume, that single volume will be returned. If object-name-or-id resolves to more than one volume, all of them will be returned. If no object-name-or-id is provided, all volumes will be listed.
- rbac-operation => string, optional
Name of an RBAC operation. If specified, only return volumes for which the current admin has permission to perform that operation. For example, to display the list of volumes that the current admin can back up, the caller can ask for only those volumes for which the admin has the DFM.Backupmanager.Backup operation privilege. When not specified, DFM.Database.Read is the default operation requirement.
- volume-type => string, optional
Filter by type of volume. Possible values are: - traditional
- flexible
- striped
If no volume-type input is supplied, all types of volumes will be listed.
Outputs
- records => integer
Number which tells you how many items have been saved for future retrieval with volume-list-info-iter-next.
- tag => string
Tag to be used in subsequent calls to volume-list-info-iter-next.
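The volume listing APIs support the filters described above in the start call, followed by the usual next/end iteration. A minimal sketch combining two of the filters (assumes $s is an NaServer object set up for DFM; the underscore subroutine names and paired-argument convention are assumptions):

```perl
eval {
    my $start = $s->volume_list_info_iter_start(
        'volume-type',    'flexible',  # only flexible volumes
        'is-unprotected', 'true');     # not in any SnapMirror/SnapVault relationship
    my $tag       = $start->{tag};
    my $remaining = $start->{records};
    while ($remaining > 0) {
        my $chunk = $s->volume_list_info_iter_next('tag', $tag, 'maximum', 50);
        last if $chunk->{records} == 0;
        $remaining -= $chunk->{records};
        # ... process the volume records in $chunk here ...
    }
    $s->volume_list_info_iter_end('tag', $tag);
};
print "Error: $@\n" if $@;
```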
Modify a volume's information. If modifying one property fails, nothing will be changed.
Error Conditions: - EACCESSDENIED - When the user does not have DFM.Database.Write capability on the specified volume.
- EINVALIDINPUT - When invalid input is specified.
- EOBJECTNOTFOUND - When the volume-name-or-id does not correspond to a volume.
- EDATABASEERROR - On database error.
Inputs
- is-dp-ignored => boolean, optional
True if an administrator has chosen to ignore this object for purposes of data protection.
Outputs
Resize a volume. The size-related characteristics of a volume, such as total size, snap reserve, or maximum size, can be modified using this API.Inputs
- volume-name-or-id => obj-name-or-id
Name or identifier of the volume to resize.
Outputs
- job-id => integer
Identifier of job started for the resize request.
Information about an aggregate.Fields
- aggregate-full-threshold => integer
The value (as an integer percentage) of the fullness threshold used to generate an "aggregate full" event for this aggregate. If the value is empty, then the global setting for aggregate full threshold is considered and this can be obtained from dfm-get-option API with option-name as "aggregateFullThreshold". Range: [0..1000]
- aggregate-name => string
Simple name of the aggregate. Always present in the output. The name is any simple name such as myaggr.
- aggregate-nearly-full-threshold => integer
The value (as an integer percentage) of the fullness threshold used to generate an "aggregate nearly full" event for this aggregate. If the value is empty, then the global setting for aggregate nearly full threshold is considered and this can be obtained from dfm-get-option API with option-name as "aggregateNearlyFullThreshold". Range: [0..1000]
- aggregate-nearly-overcommitted-threshold => integer
The value (as an integer percentage) is used to generate "aggregate nearly overcommitted" event for this aggregate. If the value is empty, then the global setting for aggregate nearly over committed threshold is considered and this can be obtained from dfm-get-option API with option-name as "aggregateNearlyOverCommittedThreshold". Range: [0..65535]
- aggregate-overcommitted-threshold => integer
The value (as an integer percentage) is used to generate "aggregate overcommitted" event for this aggregate. If the value is empty, then the global setting for aggregate over committed threshold is considered and this can be obtained from dfm-get-option API with option-name as "aggregateOverCommittedThreshold". Range: [0..65535]
- aggregate-space-status => object-space-status
Space status of the aggregate. This indicates the fullness of the aggregate in terms of whether the percentage of used space with respect to total size of the aggregate has reached or crossed the fullness thresholds given in aggregate-nearly-full-threshold and aggregate-full-threshold.
- block-type => file-system-block-type
File system block type of the aggregate. The volumes on both the source and destination sides of a SnapMirror relationship must be of the same block type.
- filer-id => integer
Identifier of controller when aggregate-type is traditional or aggregate. Identifier of cluster when aggregate-type is striped. Always present in the output. Range: [1..2^31-1]
- filer-name => string
Name of controller when aggregate-type is traditional or aggregate. Name of cluster when aggregate-type is striped. Always present in the output. The name is any simple name such as myhost.
- is-available => boolean, optional
True if this object and all of its parents are up or online. Only output if the call to iter-start included the "include-is-available" flag.
- time-to-full => integer
Estimated amount of time left in seconds for the aggregate to become full. This is returned as empty when the estimated amount of time is more than a year. This can happen due to very low or negative rate of consumption of space in the aggregate. Range: [0..31536000]
Sizes of various parameters of an aggregate.Fields
Information about space characteristics of an aggregate.Fields
Details of backup to be deleted.Fields
Identifier of the backup instance to be deleted. Range: [1..2^31-1]Fields
Indicates when the volumes in the source aggregate should be destroyed after successful migration. Possible values are: - cleanup_after_update
- cleanup_after_migration
- no_cleanup
Fields
Deduplication request detailsFields
Specify the management interface of ONTAP that provides complete management for the object (for example, ONTAP CLI, SNMP, or ONTAPI). Possible values are: - "node" - For objects manageable by node management interface
- "cluster" - For objects manageable by cluster management interface
Fields
Details of the resize request. At least one of new-size, maximum-capacity, or snap-reserve needs to be specified.Fields
- maximum-capacity => integer, optional
Specify the new maximum capacity value for a flexible volume. The value is specified in bytes. Valid only if the storage system to which the volume belongs runs ONTAP version 7.1 or above and autosize is enabled on the volume. Range: [1..2^44-1]
- snap-reserve => integer, optional
Specifies the percentage of space set aside for snapshots in a volume. The value is specified as a percentage of the total size of the volume. Range: [0..100]
Details of snapshot deletion requestFields
- snapshots => snapshot[]
List of Snapshot copies to be deleted from the volume. A minimum of 1 and a maximum of 255 Snapshot copies can be listed for deletion.
Type of space management operation. Possible values: - resize_volume
- delete_snapshot
- delete_backup
- migrate_volume
- dedupe_volume
Fields
Details of a space management operation. Only one of the request info in space-management-operation-info should be sent in input.Fields
- dedupe-request-info => dedupe-request-info, optional
Details of deduplication scan to be run on a volume.
- object-name-or-id => obj-name-or-id
Name or id of the volume on which a space management operation needs to be performed.
- resize-volume-request-info => resize-volume-request-info, optional
Details of a resize request to be carried out as part of this space management operation. The resize request should be on a volume in the aggregate on which the space management session was opened.
- space-management-op-type => space-management-op-type
Type of space management operation.
Results of the space management session.Fields
- dry-run-results => dry-run-result[], optional
Results of a dry run. Each result describes one action the system would take and the predicted effects of that action. Only returned if dry-run is true.
- job-id => integer, optional
Identifier of job started for the provisioning request. Returned only when dry-run is false.
- object-id => obj-id
Identifier of the object on which the space management operation is invoked.
- space-management-op-type => space-management-op-type
Type of space management operation.
Details of volume migration requestFields
- destination-aggregate-name-or-id => obj-name-or-id, optional
Name or identifier of the destination aggregate to which all the volumes have to be migrated. If the destination aggregate is not provided, the system selects a suitable aggregate from the resource pools associated with the dataset node to which the volumes belong.
- retention-type => dp-backup-retention-type, optional
Retention type to which the backup version created should be archived for the backups created as part of running an on-demand update after successful migration. This element is ignored if run-on-demand-update is false. Default value is "daily".
- run-dedupe-scan => boolean, optional
Indicates whether a full deduplication scan has to be run on the new volume after migration. This option is applicable only for volumes that are enabled for deduplication, where it is useful to regenerate the fingerprint database used in deduplication; it is ignored for other volumes. Default value is false.
- run-on-demand-update => boolean, optional
Indicates whether an on-demand update has to be triggered, after successful migration, on the dataset to which the migrated volumes belong. Default value is false.
The name of a DFM administrator to receive alarm notifications.Fields
The default values of the attributes defined by this ZAPI.Fields
- group-id => integer
The default group id for an alarm is 0, representing the "global group". This value is returned.
- is-disabled => boolean
Unless otherwise specified, the is-disabled attribute is set to false when the alarm is created.
- repeat-interval => integer
Unless otherwise specified, the repeat-interval is set to this default minimum number of seconds between repeat notifications. If is-repeat-notify is false, it does not apply. Range: [60..3932100]
Information about a single alarm. This structure is used in three places: creating new alarms, modifying alarms, and listing alarms.Fields
- alarm-id => integer, optional
Identifier of the alarm. Required for list and modify, ignored on create. Range: [1..2^15-1]
- alarm-script-runas => string, optional
Name of the user to run the alarm script as. If not present, the alarm script runs with the same user ID as the DFM event process. This may only be specified as an input if alarm-script is set, and only for DFM stations not running Windows.
- event-class => string, optional
Regular expression specifying the event class of events that trigger this alarm. Only events whose event class match the regular expression will trigger the alarm. The regular expression is of POSIX.1 standard.
- event-severity => obj-status, optional
Minimum event severity to trigger this alarm. If an event's severity is equal to or more severe than this value, the alarm will be triggered. Valid values are only 'normal', 'information', 'unknown', 'warning', 'error', 'critical', 'emergency'.
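The "minimum severity" comparison above can be modeled by ranking the valid values. The following sketch is illustrative only (the SDK itself is Perl) and assumes the list of valid values above is ordered from least to most severe:

```python
# Severity ranking assumed from the order of the valid values above,
# least severe first. Illustration only, not SDK code.
SEVERITY_ORDER = ["normal", "information", "unknown", "warning",
                  "error", "critical", "emergency"]

def triggers_alarm(event_severity: str, threshold: str) -> bool:
    """True if the event is at least as severe as the alarm's threshold."""
    return SEVERITY_ORDER.index(event_severity) >= SEVERITY_ORDER.index(threshold)
```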
- group-id => integer, optional
DFM group id which triggers alarm. If the event source object is in this group, the alarm will be triggered. Default is 0 which is the global group. Range: [0..2^31-1]
- group-name => string, optional
DFM group name which triggers alarm. If the event source object is in this group, the alarm will be triggered. This is returned in the output of alarm-list. If group-id element is not present during input, then this element is considered. If both group-id and group-name are not present, then the alarm is configured for global group.
- pager-admin-login-names => admin-name[], optional
Name of DFM administrators who will receive a shortened alarm email. The email messages are formatted to be easy to read on a pager. The alarm email is sent to the pager email address configured for the administrator.
- repeat-interval => integer, optional
Seconds between repeat notifications of the alarm. This is only returned if is-repeat-notify is true, and may only be specified as an input if is-repeat-notify is set to true. The value is rounded off to the nearest minute and can be at most 65535 minutes. Default value is 1800 seconds (30 minutes). Range: [60..3932100]
- time-from => integer, optional
Start time of day when alarm may be triggered, in seconds since Midnight UTC. The value is rounded off to the nearest minute. The default is to always trigger the alarm. Range: [0..86399]
- time-to => integer, optional
End time of day when the alarm may be triggered, in seconds since Midnight UTC. The value is rounded off to the nearest minute. The default is to always trigger the alarm. The time-to value is normally greater than the time-from value; if it is less, the range is treated as an inverted time range. For example, with values of 20:00 and 2:00, the alarm is triggered between 0:00 and 2:00 and between 20:00 and 23:59. Range: [0..86399]
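The inverted time-range behavior described for time-from and time-to can be sketched as follows. This is an illustration of the documented semantics, not SDK code:

```python
def in_alarm_window(now: int, time_from: int, time_to: int) -> bool:
    """Whether 'now' (seconds since midnight UTC, 0..86399) falls inside
    the alarm's trigger window. A time-to smaller than time-from denotes
    an inverted range that wraps past midnight."""
    if time_from <= time_to:
        return time_from <= now <= time_to
    # Inverted range, e.g. 20:00..2:00 covers 20:00-23:59 and 0:00-2:00.
    return now >= time_from or now <= time_to
```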
A single email address. It cannot contain spaces, semi-colons or unprintable characters.Fields
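The stated constraints on this field (no spaces, semicolons, or unprintable characters) can be checked with a simple filter. This sketch checks only those constraints and is not full RFC address validation:

```python
def is_valid_email_field(addr: str) -> bool:
    """Checks only the constraints stated above: non-empty, no spaces,
    no semicolons, no unprintable characters. Not full RFC validation."""
    return bool(addr) and all(c.isprintable() and c not in " ;" for c in addr)
```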
Destination parameters for a SNMP trap.Fields
One API request.Fields
- name => string
API name. The proxy server may have a security policy that restricts the accepted values for this field. Invalid values will cause EACCESSDENIED.
One API response.Fields
Specifies schedules and settings for the backup operation.Fields
One or more application specific operational values, schedules and associated backup retention type created by the schedule.Fields
- retention-type => dp-backup-retention-type
Retention type for the backup created by this schedule. Retention type must match appropriate schedule events. Hourly retention type must have the schedule containing start-time, end-time, frequency and all days for days-of-week. Example schedule: Backup, Sunday through Saturday, every hour between 7 am and 7 pm.
Daily retention type must have the schedule containing start-time and all days for days-of-week. Daily schedule will be executed Sunday through Saturday, once a day at a specified start time. Example schedule: Backup 10 PM every night, Sunday through Saturday.
Weekly retention type must have the schedule containing start-time and days-of-week. Example schedule: Backup every Sunday midnight.
Monthly retention type must have the schedule containing start-time and days-of-month. Example schedule: Backup 10 PM on 1st and 15th day of each month.
Unlimited retention is not supported for scheduled backups. On Demand backup supports 'unlimited' option to retain backups for unlimited duration.
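The pairing rules between retention types and schedule contents described above can be summarized as a validator. This is a hypothetical sketch; 'schedule' is a plain dict standing in for the schedule structure, with day-of-week values 0..6 (0 = Sunday):

```python
# Hypothetical validator for the schedule/retention-type pairing rules
# above. Illustration only, not SDK code.
ALL_DAYS = set(range(7))

def schedule_matches_retention(schedule: dict, retention_type: str) -> bool:
    days = set(schedule.get("days-of-week", []))
    if retention_type == "hourly":
        # start-time, end-time, frequency and all days of the week required.
        return ({"start-time", "end-time", "frequency"} <= schedule.keys()
                and days == ALL_DAYS)
    if retention_type == "daily":
        return "start-time" in schedule and days == ALL_DAYS
    if retention_type == "weekly":
        return "start-time" in schedule and bool(days)
    if retention_type == "monthly":
        return "start-time" in schedule and bool(schedule.get("days-of-month"))
    return False  # 'unlimited' is not supported for scheduled backups
```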
- schedule-id => integer, optional
Identifier of the schedule. Should not be specified when creating a new policy. During the modify, schedule-id must be specified when changing an existing schedule and must be omitted when adding a new schedule.
- start-remote-backup-after-local-backup => boolean, optional
This setting applies to the backup schedule only and is valid for both vmware and hyperv policies. Setting this option to 'true' requires the administrator to have the DFM.BackupManager.OnDemandRemoteBackup privilege on the application policy. The default value is 'false'. During policy modification, the administrator must have the DFM.BackupManager.OnDemandRemoteBackup privilege on the application policy to change this setting's original value; for example, if the original version of the policy has this option set to true, changing it to false requires that privilege. Enabling this option initiates an on-demand transfer of the local backup to a remote storage system.
Contains information about a single application policy.Fields
- application-policy-description => string, optional
Description of the policy. It may contain from 0 to 255 characters. If the length is greater than 255 characters, the ZAPI fails with error code EINVALIDINPUT. The default value is the empty string "".
- application-policy-name => obj-name
Name of the policy. Each application policy has a name that is unique among provisioning, application and data protection policies. Must be provided while creating a new policy.
- application-policy-type => application-policy-type
Type of an application policy.
- group-name-or-id => obj-name-or-id, optional
Resource group to which the newly created application policy should be added. The user should have the DFM.ApplicationPolicy.Write capability on the specified group. This input is used by application-policy-create, but ignored by application-policy-modify. Default value: Global group.
Policy level settingsFields
- backup-script => string, optional
Name of the script to invoke on the Host Services station both before and after the backup. The backup-script consists of 0 to 255 characters.
An empty string value "" indicates no script is invoked; this is the default value of this property. The system does not check whether a non-empty path string actually refers to an executable script before attempting to run it. For example, possible values are %env%\scripts\backup.ps1 OR c:\program..\HS\scripts\backup.ps1 OR k:\program..\HS\scripts\backup.ps1 [k is a network share] OR UNC path \\SCRIPTSSVR\SHARE\SCRIPTS\BACKUP.PS
- daily-retention-count => integer
Minimum number of daily backups to keep. Range: [0..252].
- daily-retention-duration => integer
The age, in seconds, after which a daily backup expires. Range: [0..2^31 - 1].
- hourly-retention-count => integer
Minimum number of hourly backups to keep. Range: [0..252].
- hourly-retention-duration => integer
The age, in seconds, after which an hourly backup expires. Range: [0..2^31 - 1].
- is-lag-error-enabled => boolean
Indicates whether the system should generate an error event when the newest local backup copy is older than lag-error-threshold.
- is-lag-warning-enabled => boolean
Indicates whether the system should generate a warning event when the newest local backup copy is older than lag-warning-threshold.
- monthly-retention-count => integer
Minimum number of monthly backups to keep. Range: [0..252].
- monthly-retention-duration => integer
The age, in seconds, after which a monthly backup expires. Range: [0..2^31 - 1].
- weekly-retention-count => integer
Minimum number of weekly backups to keep. Range: [0..252].
- weekly-retention-duration => integer
The age, in seconds, after which a weekly backup expires. Range: [0..2^31 - 1].
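One plausible reading of how the retention-count and retention-duration pairs above interact is that a backup expires only when it is older than the retention duration AND at least the minimum retention count of newer backups of its class would remain. This interpretation is an assumption for illustration, not documented SDK behavior:

```python
# Assumed semantics, for illustration only: keep at least
# 'retention_count' backups of a class even past the duration.
def expired_backups(ages_seconds, retention_count, retention_duration):
    ordered = sorted(ages_seconds)  # newest (smallest age) first
    return [age for i, age in enumerate(ordered)
            if age > retention_duration and i >= retention_count]
```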
Type of an application policy. Possible values are: 'hyperv', 'vmware'.Fields
Day of month. If day-of-month is 29, 30, or 31, it will be interpreted as the last day of the month for months with fewer than that many days. Range: [1..31]Fields
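The fall-back behavior for day-of-month values of 29, 30, or 31 described above can be sketched as:

```python
import calendar

def effective_day_of_month(day: int, year: int, month: int) -> int:
    """A day-of-month of 29, 30 or 31 falls back to the last day of
    months with fewer days, as described above."""
    last_day = calendar.monthrange(year, month)[1]
    return min(day, last_day)
```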
Day of week. Range: [0..6] (0 = "Sunday")Fields
Schedule level settings for the Hyper-V backup operation.Fields
- allow-saved-state-backup => boolean
If true, the backup is always taken even if the backup operation will cause Virtual Machine to go offline. If false, the backup is taken only if it is possible to do so with the Virtual Machine being online, otherwise the backup is not taken and the operation is marked as failed.
Simple schedule.Fields
Time of day.Fields
- minute => integer
Minute. Range: [0..59]
Times of day on which the scheduled event occurs.Fields
- frequency => integer, optional
Number of minutes between scheduled events. Must be specified when end-time is present. Should not be specified if end-time is not present. Should be a factor of 60 minutes [5,6,10,12 ...] or a multiple of 60 minutes [60,120,240, ...]. Range: [5..1440]
- start-time => time-of-day-info
Start time of the schedule. If end-time is not present, this specifies a single occurrence of a scheduled event. Otherwise, this specifies a series of scheduled events between start-time and end-time at a specified "frequency".
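The frequency constraint described above (a factor of 60 minutes or a multiple of 60 minutes, within [5..1440]) can be checked as follows; this is an illustrative sketch, not SDK code:

```python
def is_valid_frequency(minutes: int) -> bool:
    """frequency must lie in [5, 1440] and be either a factor of 60
    (5, 6, 10, 12, ...) or a multiple of 60 (60, 120, 240, ...)."""
    return 5 <= minutes <= 1440 and (60 % minutes == 0 or minutes % 60 == 0)
```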
Schedule level settings for the VMware backup operation.Fields
- include-independent-disks => boolean
If false, independent disks are not included in the backup. If true, independent disks are also included in the backup. The term "independent disk" is VMware specific; refer to the VMware documentation for details.
Details of the cifs domain.Fields
Single name/value pair.Fields
Identifier of the comment field. Range: [1..2^31-1]Fields
Information about the comment field.Fields
- comment-field-object-types => comment-field-object-type[], optional
Object type(s) of the comment field. If not specified, then no object type will be associated with the comment field.
- is-system-comment => boolean, optional
Indicates whether this is a system comment field. This field is returned in list output and ignored on create. By default, this is false. System comment fields cannot be modified or destroyed.
Name of the comment field. Name can contain a maximum of 255 characters.Fields
Name or identifier of the comment field. It must conform to one of the following formats: - It must be of the format of comment-field-name
- It must be of the format of comment-field-id
Fields
Object type of the comment field. Name can contain a maximum of 64 characters. Possible input values are 'Dataset', 'DP Policy', 'Prov Policy'.Fields
A single comment field value.Fields
- comment-field-id => comment-field-id
Identifier of the comment field.
- comment-field-name => comment-field-name
Name of the comment field.
- comment-value => comment-value
Value of the comment field
- object-id => obj-id
Identifier of the managed object.
- object-type => string
Type of the managed object. Possible values: - "Host"
- "Volume"
- "Resource Group"
- "Qtree"
- "Interface"
- "Administrator"
- "Network"
- "Mgmt Station"
- "Configuration"
- "QuotaUser"
- "Initiator Group"
- "Lun Path"
- "FCP Target"
- "Directory"
- "HBA"
- "FCP Initiator"
- "SAN Host Cluster"
- "SRM Path"
- "Mirror"
- "Aggregate"
- "Script"
- "Script Schedule"
- "Script Job"
- "Role"
- "Data Set"
- "Storage Set"
- "Resource Pool"
- "DP Policy"
- "DP Schedule"
- "DP Throttle"
- "OSSV Directory"
- "Prov Policy"
- "Schedule"
- "Report Schedule"
- "VFiler Template"
- "Disk"
Value of the specified comment field to be set on the object. This can be of maximum 1024 characters.Fields
Information about an application.Fields
- application-name => string
Name of an application, up to 255 characters long. For example: "SnapManager for Oracle"
- application-server-name => string
The name of the server the application is running on, up to 255 characters long. This is the name of the host server, rather than the name of the client application.
- carry-primary-backup-retention => boolean, optional
If this input is true, the retention type of the primary backup is assigned to its replicas on the other nodes of the dataset. An exception is made when replication is started by the dp-backup-start API with retention-type specified; in that case, the retention type specified in dp-backup-start is assigned to the replicas, overriding the retention type of the primary backup. If a scheduled event starts the replication, the retention type specified in the schedule event is ignored. If this input is false, the retention type of the primary backup is ignored when assigning a retention type to its replicas; depending on how the backup transfer job was started, the retention type specified either in the dp-backup-start API or in the schedule is assigned to the replica of the primary backup.
When this input is false, replicas of the primary backup may get an undesired retention type if the schedules are not configured very carefully. Therefore, it is recommended that new users of the API always set this input to true.
Note that even if the retention type of primary and secondary backups is the same, the retention duration may be different for them. The retention duration is specified for each node in the data protection policy. See dp-policy-node-info for more details.
Always present in the output.
Default is true.
- is-application-managing-primary-backup-retention => boolean, optional
If this input is true, Protection Manager does not enforce retention settings in the policy on the primary backups. Application is responsible for deleting primary backups, possibly by invoking dp-backup-version-delete API. If this input is false, Protection Manager deletes the primary backups according to the retention settings specified in the policy.
Always present in the output.
Default is false.
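The replica retention-type rules described for carry-primary-backup-retention can be summarized as a small decision helper. The function name and signature are hypothetical, for illustration only; they are not SDK calls:

```python
# Hypothetical helper summarizing the replica retention-type rules
# described above. Illustration only, not SDK code.
def replica_retention_type(carry_primary, primary_type,
                           dp_backup_start_type=None, schedule_type=None):
    if carry_primary:
        # An explicit retention-type in dp-backup-start overrides the
        # primary's type; a scheduled event's retention type is ignored.
        return dp_backup_start_type or primary_type
    # Carry disabled: the primary's type is ignored; whichever started
    # the transfer (API call or schedule) supplies the type.
    return dp_backup_start_type or schedule_type
```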
Name of the application resource namespace. All possible values for this type cannot be listed because, in theory, someone could write a new plugin and install it under a Host Service at run time, and such plugins will be supported from the server side in the future. Clients should depend on specific values only when writing plugin-specific code.Fields
Permission granted to a CIFS user on a share. Possible values are 'no_access', 'read', 'change' and 'full_control'Fields
Specifies the properties of a CIFS share on a Data ONTAP system.Fields
CIFS share name of the storage container (volumes, qtrees). Example \\server\sharename.Fields
CIFS ACL information configured for a CIFS share.Fields
A description of one conformance check alert that should be noted by a user.Fields
The type of alert that is being issued is indicated by a field of this type. The alert-type may apply to the conformance-alerts. Possible values are: 'diskspace' and 'rebaseline'.
- diskspace: An alert can be issued when DFM station disk space is so low that it impacts the operation of DFM.
- rebaseline: When the system detects that existing relationships need migration, an alert of this type may be issued to warn that a rebaseline may be done if the user confirms conformance for non-conformant datasets.
Please see conformance-alerts where the above values may be used for additional information.Fields
A description of one action and the predicted effects of taking that action.Fields
The data access details of a dataset. Currently, this is used to configure a dataset so that it is capable of transparent migration. An IP address and netmask have to be configured on the dataset to make it capable of transparent migration. Provisioning Manager will create a vFiler unit with this IP address, and all the storage provisioned for this dataset will be accessed from this vFiler unit or this IP address. As input, it is valid only when: - There are no members in the dataset. - There is no vFiler unit attached to the dataset.Fields
- ip-address => string
IP address in dotted decimal format. (for example, "192.168.11.12"). The length of this string cannot be more than 16 characters.
- netmask => string
Netmask for the IP Address in dotted decimal notation. As input, this is valid only when ip-address is also provided in dataset-access-details. Provisioning Manager will create a vFiler unit whose IP Address will be the one configured as ip-address and it will be bound to an interface with this netmask.
If storage inside this dataset node is exported over CIFS, this specifies some of the common properties that will be applied to all such provisioned storage.Fields
- cifs-domain => string, optional
Name of the CIFS domain. If present, only filers or vFilers belonging to this CIFS domain are considered for provisioning. If not present, any filer is picked. Up to 255 characters.
A single permission granted to a user on CIFS shares within a dataset node.Fields
- cifs-username => string
Name of a user in the specified domain. The username may be the name of an actual domain user or one of the built-in names - 'administrator' or 'everyone'.
Information about one dynamic reference in a dataset.Fields
- dp-node-name => dp-policy-node-name, optional
Name of the policy node to which the dynamic reference is attached. This element is only included if a data protection policy is associated with the dataset.
- is-available => boolean, optional
True if this member and all of its parents are up or online. Only valid for members of type "ossv-host", "ossv-dir", "filer", "vfiler", "aggregate", "volume", or "qtree". This field is not set for "vm" or "datastore" types.
- is-deleted => boolean, optional
True if this member object is marked as deleted in the database.
Information about one dynamic reference in a dataset.Fields
- dp-node-name => dp-policy-node-name, optional
Name of the data protection policy node associated with the dynamic reference. If dp-node-name is not specified, the root data protection policy node is assumed.
- object-name-or-id => obj-name-or-id
Name or identifier of the dynamic reference to add to or remove from the dataset.
Specifies the NAS or SAN export settings for a dataset node.Fields
- dataset-export-protocol => dataset-export-protocol, optional
Specifies the protocol over which members of the dataset node will be exported. This value is always returned. If omitted during a modify operation, the existing value is not changed. If specified as 'none' in a modify operation, the existing settings will be cleared. Default value is 'none'. Depending on the export protocol, dataset-lun-mapping, or a combination of dataset-nfs-export-setting and dataset-cifs-share-permissions, should be specified in a modify operation, and will be returned in the iterator output.
Specifies the export protocols that members in this dataset node will be exported over. Possible values are 'nfs', 'cifs', 'mixed', 'fcp', 'iscsi' and 'none'. Specifying 'mixed' implies that the storage in the dataset should be exported over both NFS and CIFS.Fields
Information about a dataset.Fields
- application-info => application-info, optional
If is-application-data is true, then this element must be present and it will contain information about the application which manages this dataset.
- application-policy-id => obj-id, optional
Identifier of the application policy associated with this dataset.
- application-resource-namespaces => application-resource-namespace[], optional
This is only returned if can-contain-application-resources is true. If this list is empty, then the dataset can contain any type of application resources. This would be the case where the dataset has no members and no application policy attached. Otherwise, it will list out an array of namespaces from which new members can be added to this dataset.
- can-contain-application-resources => boolean, optional
True, if application resources can be added to this dataset. Otherwise, false. Examples of application resources are virtual machines, datastores etc. An empty dataset without any policy can have both can-contain-application-resources and can-contain-storage-resources set to true at the same time. Always present in the output. Ignored in the input. Default is true.
- can-contain-storage-resources => boolean, optional
True, if storage resources can be added to this dataset. Otherwise, false. Examples of storage resources are hosts, aggregates, volumes, qtrees, OSSV directories etc. An empty dataset without any policy can have both can-contain-application-resources and can-contain-storage-resources set to true at the same time. Always present in the output. Ignored in the input. Default is true.
- dataset-access-details => dataset-access-details, optional
Data access details for a dataset. This is returned only when a dataset is configured to be capable of transparent migration.
- dataset-contact => email-address-list, optional
Contact information for the dataset, such as the owner's e-mail address.
- dataset-export-info => dataset-export-info, optional
Specifies the NAS or SAN export settings for the root node of the dataset. This field is present only if the include-export-settings field was set to true in dataset-list-info-iter-start for this iteration.
- dataset-id => obj-id
Identifier of dataset.
- dataset-metadata => dfm-metadata-field[], optional
Opaque metadata for dataset. Metadata is usually set and interpreted by an application that is using the dataset. DFM does not look into the contents of the metadata.
- dataset-name => obj-name
Name of dataset.
- dataset-owner => string, optional
Name of the owner of the dataset, up to 255 characters long.
- dataset-status => dataset-status
The complex status of the dataset at the time it was last calculated. If suppress-dataset-status was not set to true, then the resource status will be re-calculated as part of the iteration. The resource status is the only aspect of dataset-status that is ever refreshed when iterated.
- dr-state => dr-state, optional
Disaster recovery state of the dataset. If the dataset list iteration's output element is-dr-capable is true, then this element appears in the output. Otherwise, if is-dr-capable is false, this element does not appear in the output.
- has-protection => boolean
True if the dataset satisfies at least one of the following conditions: - A dataset with a single node protection policy assigned.
- A dataset with no protection policy attached but has an application policy attached to it.
- A dataset that has a multi-node protection policy attached to it and has conformed at least once.
- is-allow-custom-settings => boolean, optional
If true, conformance check of some volume options is disabled for the dataset. These volume options include the fractional-reserve, autodelete commitment, and autodelete trigger attributes only. This option is applicable only to datasets that have a SAN thin provisioning policy associated; for other datasets the option is ignored. Default value is false.
- is-application-data => boolean
If true, the dataset is an application dataset managed by an external application. application-info element is required if this element is true.
- is-dp-ignored => boolean
True if an administrator has chosen to ignore this dataset for purposes of data protection. Data sets with this attribute set to true are not returned when the client requests datasets which are not ignored. This attribute has no other significance to the system.
- is-dp-suspended => boolean
True if an administrator has chosen to suspend data protection of this dataset. Deprecated field. Retained for backward compatibility. This field is deprecated in favour of is-suspended, which suspends the dataset for all automated actions (data protection and conformance check of the dataset).
- is-dr-capable => boolean
True if this dataset has a data protection policy associated with it that is disaster recovery capable.
- is-enable-write-guarantee-checks => boolean, optional
If true, periodic write guarantee checks are enabled for the dataset. This option is applicable only to datasets that have a SAN thin provisioning policy associated. The presence of lun-clones, flex-clones, or SFSR operations in the volume may affect write guarantees in SAN thin provisioning configurations; periodic write guarantee checks detect such conditions and generate alerts. For other datasets the option is ignored. Default value is true.
- is-none-guaranteed-provision => boolean, optional
If true, the volume provisioned for this dataset will be created with the none guarantee, overriding the default behavior provided by a NAS provisioning policy. This option is applicable only to datasets that have a NAS thin provisioning policy associated. Default value is false.
- is-protected => boolean
True if this dataset has a data protection policy associated with it. Deprecated field. Retained for backward compatibility. This field is deprecated and to be replaced by has-protection-policy.
- is-skip-autosize-conformance => boolean, optional
If true, conformance check of autosize volume options is disabled for the dataset. This option is applicable only to datasets that have a thin provisioning policy associated; for other datasets the option is ignored. Default value is false.
- is-suspended => boolean
True if an administrator has chosen to suspend this dataset for all automated actions (data protection and conformance check of the dataset).
- maximum-luns-per-volume => integer, optional
Dataset level option that specifies the maximum number of luns that can be provisioned in a volume. If the value is empty, then the value of the global option maxLUNsPerVolume is applicable for the dataset.
- maximum-qtrees-per-volume => integer, optional
Dataset level option that specifies the maximum number of qtrees that can be provisioned in a volume. If the value is empty, then the value of the global option maxQtreesPerVolume is applicable for the dataset.
- online-migration => boolean, optional
Indicates that this dataset is capable of non-disruptive migration. This field is valid only when either dataset-access-details or vfiler-id is not empty. By default the migration is assumed to be disruptive. Default: false.
- primary-volume-name-format => primary-volume-name-format
Format for primary volume name
- protection-policy-id => obj-id, optional
Identifier of the protection policy governing this dataset.
- requires-non-disruptive-restore => requires-non-disruptive-restore
Specifies whether the dataset is configured to enable non-disruptive restores.
- resourcepool-id => obj-id, optional
Identifier of the resource pool assigned to the root storage set. The output element is omitted if no resource pool has been assigned to the root storage set. If more than one resource pool is assigned to the dataset node, then the resource pool with the least valued identifier is returned in this field.
This is a legacy parameter and resourcepools field should be used instead to get the list of all resource pools assigned to the dataset node.
- resourcepool-name => obj-name, optional
Name of the resource pool assigned to the root storage set. The output element is omitted if no resource pool has been assigned to the root storage set. If more than one resource pool is assigned to the dataset node, then the name of the resource pool with the least valued identifier is returned in this field.
This is a legacy parameter and resourcepools field should be used instead to get the list of all resource pools assigned to the dataset node.
- secondary-qtree-name-format => secondary-qtree-name-format
Format for secondary qtree name.
- secondary-volume-name-format => secondary-volume-name-format
Format for secondary volume name
- snapshot-name-format => snapshot-name-format
Format for snapshot name
- storageset-id => obj-id
Identifier of the storage set representing the primary data, or "root," of the dataset.
- storageset-name => obj-name
Name of the storage set representing the primary data, or "root," of the dataset.
- storageset-timezone => string
Time zone, if any, of the root storage set. An empty string is returned if no time zone has been assigned to the root storage set. Currently valid time zones can be listed by timezone-list-info-iter.
- vfiler-id => obj-id, optional
Identifier of the vFiler unit attached to the primary node of the dataset.
A single I/O usage measurement record for a dataset node.Fields
- is-io-data-partial => boolean, optional
Set to true if I/O information for one or more volumes belonging to the dataset node was not available while computing the usage metric. If not present in the output, it defaults to false.
- is-overcharge => boolean, optional
Set to true if the dataset had one or more qtree members when the metric was generated. For qtree members of a dataset, the containing volume's information is used to compute the usage metric, so such metrics are flagged as overcharge. If not present in the output, it defaults to false.
- timestamp => dp-timestamp
Timestamp of the sample. The value is in seconds since January 1, 1970
Specifies the hosts and/or initiators to which LUNs in this dataset node will be mapped, and the igroup OS type.Fields
- igroup-os-type => string
Specifies the OS type for initiators to map LUNs in the dataset node to. Possible values are: 'windows', 'windows_gpt', 'windows_2008', 'hpux', 'aix', 'linux', 'solaris', 'vmware', 'netware', 'hyper_v', 'xen', 'solaris_efi', and 'openvms'. Here 'windows_gpt' refers to Windows using the GPT (GUID Partition Type) partitioning style and 'windows_2008' refers to Windows Longhorn systems.
Map a storage set to a policy node. The root node is not included in the map.Fields
- dataset-access-details => dataset-access-details, optional
Data access details for a dataset. This is returned only when a dataset is configured to be capable of transparent migration.
- dataset-export-info => dataset-export-info, optional
Specifies the NAS or SAN export settings for the storage set. This field is present only if the include-export-settings field was set to true in dataset-list-info-iter-start for this iteration.
- is-dr-capable => boolean
True if this node is the destination of a disaster recovery capable connection in the data protection policy.
- is-vfiler-created-for-migration => boolean, optional
Indicates whether the vFiler unit attached to the dataset node was created by Provisioning Manager. This element is present only when the vfiler-id element is present. Note: This field will also be true if the vFiler unit was created by Provisioning Manager and attached to secondary nodes, although secondary nodes do not support migration.
- provisioning-policy-id => obj-id, optional
Identifier of the provisioning policy associated with the dataset node.
- provisioning-policy-name => obj-name, optional
Name of the provisioning policy associated with the dataset node.
- resourcepool-id => obj-id, optional
Identifier of the resource pool assigned to the storage set that maps to the policy node. This output element is omitted if no resource pool has been assigned to the storage set. If more than one resource pool is assigned to the dataset node, one of them is returned in this field.
This is a legacy parameter; use the resourcepools field instead to get the list of all resource pools assigned to the dataset node.
- resourcepool-name => obj-name, optional
Name of the resource pool assigned to the storage set that maps to the policy node. This output element is omitted if no resource pool has been assigned to the storage set. If more than one resource pool is assigned to the dataset node, the name of the resource pool with the lowest identifier is returned in this field.
This is a legacy parameter; use the resourcepools field instead to get the list of all resource pools assigned to the dataset node.
- resourcepools => dataset-resourcepool-info[], optional
Information about the resource pool(s) assigned to the storage set that maps to the policy node. This output element is omitted if no resource pools have been assigned to the storage set.
- storageset-id => obj-id
Identifier of the storage set that maps to the policy node.
- storageset-name => obj-name
Name of the storage set that maps to the node.
- storageset-timezone => string
Time zone specification, if any, of the storage set. Storage sets without time zones use a time zone from the resource pool the storage came from, or a default time zone. An empty string is returned if no time zone has been assigned to the storage set. Currently valid time zones can be listed with timezone-list-info-iter.
- vfiler-id => obj-id, optional
Identifier of the vFiler unit attached to the dataset node.
- vfiler-name => obj-full-name, optional
Name of the vFiler unit attached to the dataset node.
Information about one member of a dataset.
Fields
- dp-node-name => dp-policy-node-name, optional
Display name of the policy node to which the member is attached. This element is only included if a data protection policy is associated with the dataset.
- is-available => boolean, optional
True if this object and all of its parents are up or online. Only valid for members of type "ossv-host," "ossv-dir," "filer," "vfiler," "aggregate," "volume," or "qtree".
- is-deleted => boolean, optional
True if this member object is marked as deleted in the database.
- member-type => string
Type of the member. Possible values for direct members are: "volume", "qtree" or "ossv_directory". When include-exports-info is true in dataset-member-list-info-iter-start, this can also be "lun_path".
- nfs-export-name => string, optional
Name by which the storage container (volume or qtree) has been exported. The volume or qtree can be exported under a path different from its physical path on the filer (using the -actual option in the NFS export rule; for example, "/vol/myvol/myqtree" can be exported as "myexport"). Otherwise, this is the same as the full path of the container.
- space-info => space-info, optional
Space-related information for a dataset member. Included in the output only if include-space-info was true in the call to dataset-member-list-info-iter-start. Note that some members do not have space-related information. Those member types are:
- Non-qtree data
- Qtrees without a hard disk quota limit
In these cases, the space status of such members is considered unknown.
- storageset-id => obj-id
Identifier of the storage set to which the member belongs.
- storageset-name => obj-name
Name of the storage set to which the member belongs.
Information about one member of a dataset.
Fields
- dp-node-name => dp-policy-node-name, optional
Name of the data protection policy node associated with the object. If dp-node-name is not specified, the object is added to the root data protection policy node.
- object-name-or-id => obj-name-or-id
Identifier of the object to add to or remove from the dataset.
Dataset's migration information.
Fields
- description => string, optional
Description associated with the migration task.
- destination-vfiler-id => obj-id, optional
Database ID of the destination vFiler unit. If the migration status is 'migrate_failed' and the destination vFiler unit has been destroyed as a result of rollback activity, this field is not present.
- provisioning-policy-id => obj-id, optional
Provisioning policy used to provision the migration destination.
If storage inside this dataset node is exported over NFS, this specifies some of the common properties that will be applied to all such provisioned storage.
Fields
- anonymous-access-user => string, optional
If the client accessing the export is not present in the root access list for the export, the effective user for root is the value specified for this field. The default value is 65534, which maps to the user nobody. The value can be an integer in the range [0..65534] or a user name of no more than 255 characters.
- is-readonly-for-all-hosts => boolean, optional
Specifies that all hosts receive read-only permissions on nfs exports in this dataset node. Equivalent to specifying "ro" on the filer. Default value is false. If true, is-readwrite-for-all-hosts must be false.
- is-readwrite-for-all-hosts => boolean, optional
Specifies that all hosts receive read-write permissions on nfs exports in this dataset node. Equivalent to specifying "rw" on the filer. Default value is false. If true, is-readonly-for-all-hosts must be false.
- nfs-protocol-version => string, optional
NFS protocol version, valid values are "v2", "v3" and "v4". Default value is "v3". If set, the storage systems that have the given version enabled will be selected to provision storage for the dataset.
Information about a dataset node.
Fields
- dataset-id => obj-id
ID of the dataset. Range: [1..2^31-1]
- dataset-name => obj-name
Name of the dataset.
- dp-node-id => integer
ID of the node in the data protection policy associated with the dataset. If there is no protection policy assigned to the dataset, the value 1 is returned. Range: [1..2^31-1]
- dp-node-name => dp-policy-node-name
Name of the node in the data protection policy associated with the dataset. If there is no data protection policy assigned to the dataset, then the name of the dataset is returned as the node name.
Information about resource pools associated with a dataset.
Fields
A single space usage measurement record for a dataset node.
Fields
- is-overcharge => boolean, optional
Set to true if the dataset had one or more qtree members when the metric was generated. For qtree members of a dataset, the containing volume's information is used to compute the usage metric, so such metrics are flagged as overcharge. If not present in the output, it defaults to false.
- is-space-data-partial => boolean, optional
Set to true if space information for one or more volumes belonging to the dataset node was not available when computing the usage metric. If not present in the output, it defaults to false.
- timestamp => dp-timestamp
Timestamp of the sample. The value is in seconds since January 1, 1970.
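Since dp-timestamp is a plain epoch value, a caller can convert it to a human-readable string with Perl's built-in gmtime; a minimal sketch (the sample value is hypothetical):

```perl
# Convert a dp-timestamp (seconds since January 1, 1970) into a
# human-readable string using Perl's built-in gmtime.
my $dp_timestamp = 0;                        # hypothetical sample value
my $readable = scalar gmtime($dp_timestamp); # e.g. "Thu Jan  1 00:00:00 1970"
print "$readable\n";
```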
The complex status of the dataset at the time it was last re-calculated.
Fields
- conformance-status => string
A dataset is in conformance if it is configured according to policy. For example, a data protection policy might require provisioned secondary and tertiary volumes and the creation of backup and mirroring relationships. If they are properly configured, then the dataset is conformant. If the dataset is not conformant, but the system is currently in the process of bringing the data set into conformance, then the dataset is conforming. If the system is unable to satisfy one or more policy requirements, then the dataset is nonconformant.
Possible values are: 'conformant', 'conforming', 'nonconformant'.
- dr-protection-status => string, optional
The DR protection status only applies to datasets with data protection policies and only to protection policy connections which are disaster recovery capable. A system may be up and running and properly configured, yet not protecting data. For example, if the lag thresholds specified by the policy are exceeded, then the data is not being sufficiently protected.
Possible values are:
- protected : Dataset has complete backup versions on all nodes of the dataset that are to retain backups as per policy. This is a normal condition.
- no_app_policy : Dataset doesn't have an application policy. This is a normal condition.
- uninitialized : Dataset is missing some data protection relationships. This is a warning condition.
- protection_suspended : Dataset has been suspended as administrator requested and is not performing data protection or conformance operations. This is a warning condition.
- protection_failure : Dataset has a job failure while protecting data. This is a warning condition.
- lag_warning : Last successful data transfer on the connection or the most recent protected data on the primary node is older than the lag warning threshold. This is a warning condition.
- lag_error : Last successful data transfer on the connection or the most recent protected data on the primary node is older than the lag error threshold. This is an error condition.
- baseline_failure : Dataset has a baseline job failure and is not able to register any backup version. This is an error condition.
- dr-status => dr-status, optional
The status of the dataset with regard to Disaster Recovery. The DR status determines how successful a DR operation such as a failover likely would be if it were performed on the dataset.
- dr-status-problems => dr-status-problem[], optional
A list of the problems found that cause the data set not to have a dr-status of "normal". This list is not guaranteed to contain all problems related to DR status. If dr-status is present and does not have the value "normal", then this element appears in the output, and the list contains at least one item. Otherwise, it does not appear.
- protection-status => string, optional
A system may be up and running and properly configured, yet not protecting data. For example, if the lag thresholds specified by the policy are exceeded, then the data is not being sufficiently protected.
Possible values are:
- protected : Dataset has complete backup versions on all nodes of the dataset that are to retain backups as per policy. This is a normal condition.
- no_app_policy : Dataset doesn't have an application policy. This is a normal condition.
- uninitialized : The dataset either doesn't have any data in it, or it has a "local snapshots only" policy (it is a one-node dataset) and the schedule for that policy hasn't kicked off any jobs yet (or there is no schedule at all). An application dataset can also be in this status after it has been created and before any scheduled job has run. This is a warning condition.
- protection_suspended : Dataset has been suspended as administrator requested and is not performing data protection or conformance operations. This is a warning condition.
- protection_failure : Dataset has a job failure while protecting data. This is a warning condition.
- lag_warning : Last successful data transfer on the connection or the most recent protected data on the primary node is older than the lag warning threshold. This is a warning condition.
- lag_error : Last successful data transfer on the connection or the most recent protected data on the primary node is older than the lag error threshold. This is an error condition.
- baseline_failure : Dataset has a baseline job failure and is not able to register any backup version. This is an error condition.
- protection-status-problems => protection-status-problem[], optional
A list of the problems found that cause the dataset not to have a protection-status of "protected". This list is not guaranteed to contain all problems related to protection status. If protection-status does not have the value "protected", this element is present in the output as long as include-protection-status-problems is set to true and only one dataset is specified in the dataset-list-info-iter-start API. Otherwise, it does not appear.
- resource-status => obj-status
The resource status is the "worst" status of the resources relevant to the dataset. Resource status is calculated using the same rules as for DFM resource group status.
- space-status => space-status, optional
Space status of the dataset. The space status of every member of every node of the dataset is considered, and the worst space status is taken as the space status of the dataset. Space status for members of a dataset node is calculated only if a provisioning policy is associated with that dataset node.
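The "worst status wins" aggregation described above can be sketched as follows. The specific ranking of status values is an assumption for illustration only, not the server's exact ordering:

```perl
# Sketch of a worst-status aggregation: the dataset-level status is the
# most severe status found among its members. The severity ranking below
# is an assumed ordering for illustration, not the server's internal one.
my %severity = (normal => 0, warning => 1, error => 2);

sub worst_status {
    my @statuses = @_;
    # Sort descending by severity and take the most severe entry.
    my ($worst) = sort { $severity{$b} <=> $severity{$a} } @statuses;
    return $worst;
}

print worst_status('normal', 'warning', 'normal'), "\n"; # warning
```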
Details of the dedupe member request.
Fields
- full-volume-dedupe => boolean, optional
If true, a full volume dedupe scan is performed. Otherwise, only new data since the last deduplication operation is scanned. Default value is false.
- volume-id => obj-id, optional
Identifier of a volume member of a dataset.
- volume-name => obj-name, optional
Name of a volume member of the dataset. Either volume-id or volume-name has to be specified. If volume-id is specified then volume-name is ignored.
Details on which Snapshot copies are to be deleted.
Fields
- snapshots => snapshot[]
List of Snapshot copies to be deleted from the volume. A minimum of 1 and a maximum of 255 Snapshot copies can be listed for deletion.
- volume-id => obj-id, optional
Identifier of the volume whose Snapshot copies are to be deleted. This volume should be a member of the effective primary node of the dataset either through direct or indirect membership.
- volume-name => obj-name, optional
Name of the volume whose Snapshot copies are to be deleted. This volume should be a member of the effective primary node of the dataset either through direct or indirect membership. Either volume-id or volume-name has to be specified. If volume-id is specified then volume-name is ignored.
Named field in the metadata.
Fields
- field-name => string
Name of the metadata field. Field names are up to 255 characters in length and are case-insensitive.
- field-value => string
Arbitrary, user-defined data expressed as a string. The string is opaque to the server and must not exceed 16384 (16k) characters in length.
Disaster recovery states of a dataset. Possible values are:
- "ready": The dataset is prepared to fail over.
- "failing_over": The dataset is in the process of failing over. Eventually, the state of the dataset will change to either "failed_over" or "failover_error".
- "failed_over": The dataset has failed over. The primary data is now considered unavailable and the mirror relationships from the primary to the DR node have been broken.
- "failover_error": The dataset attempted to fail over but encountered an error. User intervention is required to change the state to either "failed_over" or "ready".
Fields
The DR status has one of the following values:
- normal : A DR operation would likely succeed, and the amount of lost data would be within the duration specified by the lag warning threshold.
- warning : A DR operation would likely succeed, but the amount of lost data would exceed that specified by the policy's lag warning threshold.
- error : A DR operation would likely fail, or would succeed but the amount of lost data would be unacceptable.
The DR status is computed based on all of the following:
- The resource status of the dataset's DR secondaries. The specific condition that impacts the resource status is the availability of the secondary filers and volumes. The status will be error if either is unavailable.
- Whether there is sufficient space on the DR secondaries to meet the space guarantees of the primaries.
- The existence and state of the relationships associated with the dataset's disaster recovery capable connection.
- Whether the relationships associated with the dataset's DR-capable connection are within their thresholds for lag warnings and errors.
If the dataset list iteration's output element is-dr-capable is true, then this element appears in the output. Otherwise, if is-dr-capable is false, this element does not appear in the output.
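The lag comparisons described above reduce to checking the age of the last successful transfer against the policy's two thresholds. A minimal sketch, where the threshold values are hypothetical examples:

```perl
# Sketch of how a lag value might be classified against the policy's
# lag warning and lag error thresholds (all values in seconds).
# The concrete thresholds used here are hypothetical examples.
sub classify_lag {
    my ($lag, $warn_threshold, $error_threshold) = @_;
    return 'error'   if $lag > $error_threshold;
    return 'warning' if $lag > $warn_threshold;
    return 'normal';
}

my $lag = 5400;                               # 1.5 hours since last transfer
print classify_lag($lag, 3600, 7200), "\n";   # warning
```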
Fields
An explanation of a single problem that causes the dataset not to have a dr-status of "normal".
Fields
- dr-status-severity => string
Value is "warning" if the problem causes the dataset to have a dr-status of at least "warning". Value is "error" if the problem causes the dataset to have a dr-status of at least "error".
A description of one action and the predicted effects of taking that action.Fields
- dry-run-reason => string, optional
Present only if severity is error or warning. Specifies possible reasons for the error or warning. If there are no reasons to specify, this element will not be present.
- dry-run-suggestion => string
Present only if severity is error or warning. Specifies any suggestions to rectify the warnings or errors. If there are no suggestions to specify, this element will not be present.
Information about a datastore in a dataset.
Fields
- object-id => obj-id
Identifier of the object to be excluded.
- object-type => string
Type of the object.
For FCP initiators, this is a string composed of 16 hexadecimal digits. For iSCSI initiators, this is an initiator name in the dotted-domain notation.
Fields
Dataset node's I/O usage measurements.
Fields
Job information of a provisioning request.
Fields
- job-id => integer
Identifier of job started for the provisioning request.
If the LUN has been mapped to any igroups or hosts, this structure contains the relevant information.
Fields
- igroup-lun-id => integer
If the LUN has been mapped to any initiator groups, this field contains the ID generated by the storage system for the LUN. Range: [1..2^31-1]
- igroup-os-type => string
Specifies the OS type of the igroup. Possible values: "solaris", "windows_gpt", "windows_2008", "windows", "hpux", "aix", "linux", "netware", "vmware", "hyper_v", "xen", "solaris_efi" and "openvms". Here "windows_gpt" refers to Windows using GPT (GUID Partition Type) partitioning style and "windows_2008" refers to Windows Longhorn systems.
Specifies an initiator to map LUNs in the dataset node to, and, optionally, the host to which it belongs.
Fields
Information about the dataset node.
Fields
- dataset-id => obj-id
Identifier of dataset.
- dataset-name => obj-name
Name of dataset.
- dp-node-name => dp-policy-node-name, optional
Name of the dataset node, e.g. primary, backup, or mirror. Can be absent if no protection policy is associated with the dataset.
- protection-policy-name => obj-name, optional
Name of the protection policy associated with the dataset at the time of the metric sample. Can be absent if no protection policy was associated with the dataset.
- provisioning-policy-name => obj-name, optional
Name of the provisioning policy associated with the dataset node at the time of the metric sample. Can be absent if no provisioning policy was associated with the dataset node.
- storage-service-name => obj-name, optional
Name of the storage service associated with the dataset at the time of the metric sample. Can be absent if no storage service was associated with the dataset.
Migration status for an ongoing vFiler unit migration. Possible values: "in_progress", "migrating", "migrated", "migrated_with_errors", "migrate_failed", "rolled_back", "rolled_back_with_errors".
Fields
Information about one missing member of a dataset.
Fields
- aggregate-id => obj-id
Identifier of the aggregate to which the member belongs.
- aggregate-name => obj-name
Name of the aggregate to which the member belongs.
- cifs-share-names => cifs-share-name[], optional
List of share names for accessing the storage container (volume/qtree) when the access protocol of the storage set is "CIFS" or "Mixed". Example: \\server\sharename.
- member-id => obj-id
Identifier of the missing member.
- member-name => obj-name
Display name of the missing member.
- member-type => string
Type of the missing member. Possible values are 'volume', 'qtree'.
- nfs-export-name => string, optional
Name by which the storage container (volume or qtree) has been exported, when the access protocol of the storage set is "NFS" or "Mixed". The volume or qtree can be exported under a path different from its physical path on the filer (using the -actual option in the NFS export rule; for example, "/vol/myvol/myqtree" can be exported as "myexport"). Otherwise, this is the same as the full path of the container.
Specifies a single host or all-hosts. A host can be a host name (DNS name), an IP address, a subnet or a domain name. If an 'all-hosts' entry is present, all other host name entries must be negated.
Fields
- hostname => string, optional
Specifies a single host name entry. A host name may be a DNS name, an IP address, a subnet (specified as <subnet address>/<prefix length> i.e. CIDR notation or '[network] <subnet-address> [netmask] <netmask>'), a domain, or a workgroup.
- is-an-exception => boolean, optional
Specifies that the rule does not apply to this host. Use this when permissions must be granted to all hosts in a subnet or a domain except some. Default value is false.
Specifies the NFS security configuration options provided by the Data ONTAP operating system.
Fields
- anonymous-access-user => string, optional
If the client accessing the export is not present in the root access list for the export, the effective user for root is the value specified for this field. The default value is 65534, which maps to the user nobody. The value can be an integer in the range [0..65534] or a user name of no more than 255 characters.
- disable-setuid => boolean, optional
If true, causes the server file system to ignore attempts to enable the setuid or setgid mode bits. Default value is true.
- is-readonly-for-all-hosts => boolean, optional
Specifies that all hosts receive read-only permissions on nfs exports in this dataset node. Equivalent to specifying "ro" on the filer. Default value is false. If true, is-readwrite-for-all-hosts must be false.
- is-readwrite-for-all-hosts => boolean, optional
Specifies that all hosts receive read-write permissions on nfs exports in this dataset node. Equivalent to specifying "rw" on the filer. Default value is false. If true, is-readonly-for-all-hosts must be false.
- nfs-export-ro-hosts => nfs-export-host[], optional
A list of hosts with read-only permissions on exports in this dataset node. If a non-empty list of nfs-export-ro-hosts is present along with is-readonly-for-all-hosts set to true, the ZAPI will throw an error.
- nfs-export-root-hosts => nfs-export-host[], optional
A list of hosts for which the root user has read-only or read-write permissions on exports in this dataset node.
- nfs-export-rw-hosts => nfs-export-host[], optional
A list of hosts with read-write permissions on exports in this dataset node. If a non-empty list of nfs-export-rw-hosts is present along with is-readwrite-for-all-hosts set to true, the ZAPI will throw an error.
- nfs-security-flavors => nfs-security-flavor[], optional
List of strings specifying the security flavors supported on exports in this dataset node. If this field is not specified, the default value of 'sys' is assumed.
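Because the server rejects contradictory export settings (for example, a non-empty ro-hosts list together with is-readonly-for-all-hosts), a client may want to pre-check its input before issuing the call. A sketch of such a check, where the hash representation of the export settings is a hypothetical client-side structure:

```perl
# Client-side pre-check of the consistency rules stated above, so an
# invalid combination can be caught before the ZAPI call is issued.
# The hash keys mirror the documented field names; the hash itself is
# a hypothetical client-side representation.
sub validate_nfs_export_info {
    my ($info) = @_;
    my @errors;
    push @errors, 'is-readonly-for-all-hosts and is-readwrite-for-all-hosts are mutually exclusive'
        if $info->{'is-readonly-for-all-hosts'} && $info->{'is-readwrite-for-all-hosts'};
    push @errors, 'nfs-export-ro-hosts must be empty when is-readonly-for-all-hosts is true'
        if $info->{'is-readonly-for-all-hosts'} && @{ $info->{'nfs-export-ro-hosts'} || [] };
    push @errors, 'nfs-export-rw-hosts must be empty when is-readwrite-for-all-hosts is true'
        if $info->{'is-readwrite-for-all-hosts'} && @{ $info->{'nfs-export-rw-hosts'} || [] };
    return @errors;
}

my @problems = validate_nfs_export_info({
    'is-readonly-for-all-hosts' => 1,
    'nfs-export-ro-hosts'       => ['host1.example.com'],
});
print "$_\n" for @problems;
```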
A security flavor to be applied on NFS exports. Possible values are 'none', 'sys' (Unix style), 'krb5' (Kerberos v5), 'krb5i' (Kerberos v5 integrity) and 'krb5p' (Kerberos v5 privacy). Default value is 'sys'.
Fields
Name format string for the primary volumes created by Provisioning Manager. Allowed values for this element are "" (empty string) or free-form text with substitution tokens. The allowed tokens are:
- %L: Dataset label (dl), previously called the dataset prefix. This is the volume-qtree-name-prefix element specified in dataset-create or dataset-modify. If the dataset label is not set, the dataset name is substituted instead.
- %D: Dataset name.
For configurable naming using %L and %D, users can use all of the attributes or some of them, in any order, combined with user-defined text, e.g. %L_%D. The order of the attributes can change, for example %D_%L.
Fields
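The token expansion described above can be sketched as a simple substitution; this illustrates the documented behavior, not Provisioning Manager's actual code:

```perl
# Sketch of how the %L and %D substitution tokens might expand.
# %L falls back to the dataset name when the dataset label is unset.
sub expand_name_format {
    my ($format, $dataset_label, $dataset_name) = @_;
    # If the dataset label is not set, substitute the dataset name instead.
    my $label = (defined $dataset_label && $dataset_label ne '')
              ? $dataset_label : $dataset_name;
    (my $name = $format) =~ s/%L/$label/g;
    $name =~ s/%D/$dataset_name/g;
    return $name;
}

print expand_name_format('%L_%D', 'dl1', 'ds1'), "\n"; # dl1_ds1
print expand_name_format('%D_%L', '',    'ds1'), "\n"; # ds1_ds1
```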
Details about a job that led to a protection-status-problem.
Fields
Information about a problem that resulted in the dataset not having a protection-status of "protected".
Fields
- protection-status-description => string
A human readable description of why protection-status was not found to be "protected". The description may also contain suggestions that may be helpful in resolving the protection status problem.
- protection-status-name => string
A short name for a type of protection-status problem. This name is intended to ease machine readability and is not expected to be displayed to a user. Possible values are:
- uninitialized_no_dp_policy : No protection policy is assigned.
- uninitialized_dp_policy_none : The policy named 'No Protection' is assigned.
- uninitialized_no_data_to_protect : The dataset does not contain any data to be protected.
- uninitialized_no_destination_storage : The dataset does not contain storage for one or more destination nodes.
- uninitialized_no_relationships : The dataset has no relationships in one or more connections.
- uninitialized_conforming : The dataset is conforming.
- uninitialized_nonconformant : The dataset is not in a conformant state.
- suspended : Dataset is suspended so no protection schedules will create backup versions in this state.
- uninitialized_single_node_no_backups : Dataset has one node and no backups have finished.
- no_backup_version : The multi-node dataset does not have any backup versions.
- uninitialized_application_dataset : Application dataset requires at least one backup version.
- failed_baseline : At least one initial baseline data transfer failed to create a relationship.
- failed_local_backup : Local backups failed on the primary node.
- failed_backup_job : One or more backup jobs have failed.
- failed_mirror_job : One or more mirror copy jobs have failed.
- lag_warning_local_backup : The most recent local backup (Snapshot copy) on the primary node is older than the threshold setting permits.
- lag_warning_since_last_backup : The time since the last successful backup job exceeds the lag warning threshold.
- lag_warning_since_last_mirror : The time since the last successful mirror copy job exceeds the lag warning threshold.
- lag_error_local_backup : The most recent local backup (Snapshot copy) on the primary node is older than the threshold setting permits.
- lag_error_since_last_backup : The time since the last successful backup job exceeds the lag error threshold.
- lag_error_since_last_mirror : The time since the last successful mirror copy job exceeds the lag error threshold.
One comment field to fill in as part of the request.
Fields
- comment-field-name-or-id => comment-field-name-or-id
Name or identifier of the comment field.
- comment-value => comment-value
Value of the comment field. An empty string can be used to unset the value of the comment field for this object.
Details specific to the 'provision_member' request.
Fields
- create-new-vfiler => boolean, optional
True if a new vFiler unit needs to be created during provisioning. The provisioned storage container is then moved to the vFiler unit. This input is applicable only in the context of the resourcepool-member-list-info-iter-start ZAPI when run-provisioning-checks is specified as 'true'. This input is ignored in the dataset-provision-member and storage-service-dataset-provision ZAPIs. The default value is 'false'.
- dataset-export-info => dataset-export-info, optional
Specifies the NAS or SAN export settings for the provisioned member. Member-level export settings cannot be specified if export settings have been configured at the node level.
- description => string, optional
Description of the provisioning request. The length of this string cannot be more than 255 characters.
- initial-snapshot-space => integer, optional
Applicable when provisioning members in SAN data sets (i.e. datasets associated with provisioning policy of type 'san'). Specify the initial disk space in bytes that needs to be allocated upfront for Snapshot copies of the member. Range: [1..2^63-1]
- maximum-data-size => integer, optional
Maximum storage space for the dataset member. Applicable when provisioning members in NFS data sets (i.e. datasets associated with provisioning policy of type 'nas' and export over 'NFS' protocol). Storage clients can write data up to this limit if the containing aggregate has space. The value is specified in bytes. Range: [1..2^63-1]
- maximum-snapshot-space => integer, optional
Applicable when provisioning members in SAN data sets (i.e. datasets associated with provisioning policy of type 'san'). Specify the maximum disk space in bytes for the Snapshot copies of the member. Once the space consumed by Snapshot copies reaches this value, further Snapshot copies will fail. Range: [1..2^63-1]
- name => string
Name of the storage container. If this is a NAS dataset, i.e the provisioning policy associated with the primary node of the dataset is of type 'nas', then this name corresponds to the qtree or volume created. If this is a SAN dataset, i.e the provisioning policy associated with the primary node of the dataset is of type 'san', then this name corresponds to the LUN or volume that will be provisioned as part of this request. The length of this string cannot be more than 255 characters.
- online-migration => boolean, optional
Indicates whether pre-checks related to online migration have to be performed on the storage system during provisioning. This input is applicable only in the context of the resourcepool-member-list-info-iter-start API when run-provisioning-checks is specified as true and either create-new-vfiler is true or vfiler-name-or-id is specified. This input is ignored in the storage-service-dataset-provision and dataset-provision-member APIs.
- provisioning-policy-id => obj-id, optional
Identifier of the provisioning policy used with this request. Ignored for dataset-provision-member and storage-service-dataset-provision; used only when listing provisioning requests and jobs.
- provisioning-policy-name => obj-full-name, optional
Name of the provisioning policy used with this request. Present only if the request type is 'provision_member'. Ignored for dataset-provision-member and storage-service-dataset-provision and is used only when listing provisioning requests and jobs.
- resource-name-or-id => obj-name-or-id, optional
Name or identifier of the resource object from which storage space will be provisioned. Valid object types are filers and aggregates. The filer or aggregate specified should belong to the resource pool(s) associated with the primary node of the dataset.
- size => integer
Amount of storage space for data. Whether this space is allocated on demand or reserved upfront from an aggregate in the resource pool is decided based on the provisioning policy settings. The value is specified in bytes. Range: [1..2^63-1]
- vfiler-name-or-id => obj-name-or-id, optional
Name or ID of the vFiler unit. If specified, the newly provisioned dataset member is moved to the specified vFiler unit and accessed via the vFiler unit's IP addresses. Ignored for storage-service-dataset-provision.
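On the wire, a 'provision_member' request is carried as a ZAPI XML element, as in the NaServer examples for this SDK. A hypothetical request might look like the following; the exact element names, in particular the wrapper element for these details, should be confirmed against the dataset-provision-member API documentation:

```xml
<!-- Hypothetical ZAPI request sketch; server name, dataset name,
     volume name, and the wrapper element name are assumptions. -->
<dataset-provision-member>
  <dataset-name-or-id>mydataset</dataset-name-or-id>
  <provision-member-request-info>
    <name>myvol</name>
    <size>21474836480</size> <!-- 20 GiB, specified in bytes -->
  </provision-member-request-info>
</dataset-provision-member>
```

The response carries the job identifier described by the provisioning job information structure above, which can then be monitored for completion.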
A description of one replace action.
Fields
- dp-node-name => dp-policy-node-name, optional
Name of the data protection policy node associated with the secondary volume or qtree.
- member-type => string
Type of the object replaced. Possible values are "volume", or "qtree".
- replace-status => string
Possible values are "not_found", "not_required", or "replaced". "not_found" is returned if a replacement primary was not found. "not_required" is returned if the primary did not need to be replaced. "replaced" is returned if the primary was successfully replaced.
When this attribute is set, the dataset is configured to allow LUNs to be restored from backups in such a way that the host need not detach from the LUN. Configuring the dataset for non-disruptive restore does not guarantee that all backup instances may be restored non-disruptively. It only applies to backup instances reached through a Backup connection. The caller must check the supports-non-disruptive-restore output element returned by dp-backup-list-info-next to tell whether a given backup instance can be restored non-disruptively.
Specifically, when this attribute is set:
- All storage systems must be running ONTAP 7.3 or later.
- Protection Manager must have working ZAPI credentials for all storage systems in use by this dataset.
- All Backup connections will be implemented using SnapVault, not Qtree SnapMirror.
- All non-primary storage systems used by the data set must have a SnapVault Secondary license.
If any of these constraints is not met when adding a member to the dataset, the dataset will not be in conformance and the addition will generate conformance errors. Addition of members with ONTAP version less than 7.3 to the primary node of the dataset will fail with errors.
requires-nondisruptive-restore may only be specified if:
- No Backup connection in the protection policy used by the dataset is DR capable (i.e. the is-dr-capable attribute may not be set for any Backup connection of the policy).
- The dataset must be an application dataset.
Thus, if requires-non-disruptive-restore is specified as true, is-application-data must also be present and true. Callers may check the non-disruptive-restore-compatible output element of the dp-policy-list-info-iter-next API to tell which policies may be applied to datasets requiring non-disruptive restores.Fields
Details of the resize request.Fields
- maximum-capacity => integer, optional
Specify the new maximum capacity value for a flexible volume. The value is specified in bytes. Valid only if the filer to which the volume belongs is running ONTAP version 7.1 or later. This input is valid only if member-id or member-name refers to a flexible volume. Range: [1..2^63-1]
- member-id => obj-id, optional
Identifier of the dataset member on which the resize operation has to be carried out. This member can be either a flexible volume or a qtree which is a member of the effective primary node of the dataset either through direct or indirect membership.
- member-name => obj-name, optional
Name of the dataset member on which the resize operation has to be carried out. This member can be either a flexible volume or a qtree which is a member of the effective primary node of the dataset either through direct or indirect membership. Either member-id or member-name has to be specified. If member-id is specified then member-name is ignored.
- new-size => integer, optional
Specify the new size of the dataset member. The value is specified in bytes. If the member is a volume, this corresponds to the size of the volume. If the member is a qtree, this corresponds to hard disk quota limit for the qtree. Range: [1..2^63-1]
- snap-reserve => integer, optional
Specifies the percentage of space set aside for snapshots in a volume. The value is specified as a percentage of the total size of the volume. This input is valid only if member-id or member-name refers to a flexible volume. Range: [0..100]
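The constraints above can be checked on the client before issuing the request. The following sketch is a hypothetical helper (not part of the SDK bindings) that validates the resize inputs against the documented rules:

```perl
use strict;
use warnings;

# Hypothetical client-side validation of the resize inputs described
# above; the DFM server performs its own authoritative validation.
sub validate_resize_args {
    my %arg = @_;
    # Either member-id or member-name has to be specified.
    return 'member-id or member-name required'
        unless defined $arg{'member-id'} || defined $arg{'member-name'};
    # new-size is in bytes, range [1..2^63-1].
    return 'new-size out of range'
        if defined $arg{'new-size'} && $arg{'new-size'} < 1;
    # snap-reserve is a percentage, range [0..100].
    return 'snap-reserve out of range'
        if defined $arg{'snap-reserve'}
        && ($arg{'snap-reserve'} < 0 || $arg{'snap-reserve'} > 100);
    return 'ok';
}
```

If the checks pass, the same key/value pairs can be passed to the corresponding NaServer binding, following the calling convention shown for dfm-about.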
Details on a specific action that adds information intended to explain more about a higher level result. For example, the detail may be used to explain a dry-run result by explaining why resources were not selected.Fields
- reason => string, optional
Specifies possible reasons for this result. If there are no reasons to specify, this element will not be present.
- severity => obj-status
Severity of the result. Possible values are "information", "error", or "warning". It is normal for "information" to be used to explain why a resource was not selected.
- suggestion => string
Specifies any suggestions to rectify the information, warnings, errors. If there are no suggestions to specify, this element will not be present.
Name format string for the secondary qtrees created by Protection Manager. This string is free form text with substitution tokens. The allowed substitution tokens are: - %Q: Primary qtree name.
- %L: Dataset label (dl), previously called the dataset prefix. This is the volume-qtree-name-prefix element specified in dataset-create or dataset-modify. If the dataset label is not set, substitute the dataset name instead.
- %S: Primary storage system name. The name of the storage system of the source volume in the primary root node with which the current secondary volume has a relationship.
- %V: Primary volume name. The name of the source volume in the primary root node with which the current secondary volume has a relationship.
Users can use any or all of the substitution tokens in any order.Fields
Format string used to generate names for secondary volumes created by Protection Manager. Allowed values for this element are "" (empty string) or free form text with substitution tokens. The allowed tokens are: - %L: Dataset label (dl), previously called the dataset prefix. This is the volume-qtree-name-prefix element specified in dataset-create or dataset-modify. If the dataset label is not set, substitute the dataset name instead.
- %S: Primary storage system name. The name of the storage system of the source volume in the primary root node with which the current secondary volume has a relationship.
- %V: Primary volume name. The name of the source volume in the primary root node with which the current secondary volume has a relationship.
- %C: Type of connection. It has only two values, "mirror" or "backup".
For configurable naming using %L, %C, %S, and %V, users can use all or some of the tokens, in any order, combined with user-defined text, e.g. %L_%C_%S_%V. The order of the tokens can also change, for example %C_%S_%L.Fields
Format string for creating snapshot names. This string is free form text with substitution tokens. Allowed substitution tokens are: - %T: Scheduled creation time of the snapshot in the format yyyy-mm-dd_hh.mm.ss, followed by the offset from UTC in standard format (+hhmm or -hhmm). For example, "-0430" means 4 hours 30 minutes behind UTC (west of Greenwich). The time is in the dataset time zone; if no time zone is assigned to the dataset, the default system time zone is used. If the time zone is "UTC" or "GMT", no offset is included in the name.
- %R: Retention type of snapshot. Values are "hourly", "daily", "weekly", "monthly", or "unlimited".
- %L: Dataset label (dl), previously called the dataset prefix. This is the volume-qtree-name-prefix element specified in dataset-create or dataset-modify. If the dataset label is not set, substitute the dataset name instead.
- %H: Name of storage system which owns the volume which will contain the snapshot.
- %N: Name of volume which will contain the snapshot.
- %A: Application-specific data. When included, this token allows the utility creating the snapshot name for the application to embed any application-specific data it wants in the snapshot name. The user has no control over what goes in this field.
The format string can include any or all substitution tokens in any order; %T is mandatory.Fields
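To illustrate how these tokens combine (the substitution itself is performed by the server, not by the client), here is a minimal sketch; the helper and the sample token values are hypothetical:

```perl
use strict;
use warnings;

# Illustration only: expand the documented snapshot-name tokens in a
# format string. The real substitution is done by the DFM server.
sub expand_snapshot_format {
    my ($format, %token) = @_;
    for my $t (qw(T R L H N A)) {
        next unless defined $token{$t};
        $format =~ s/%\Q$t\E/$token{$t}/g;   # replace %T, %R, etc.
    }
    return $format;
}
```

For example, expanding '%T_%R_%L' with sample values T => '2013-01-15_10.30.00-0500', R => 'daily', L => 'ds1' yields '2013-01-15_10.30.00-0500_daily_ds1'.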
Represents a space condition of a dataset member.Fields
- condition => string
Space condition. Possible values: - "volume-used-vs-size-above-nearly-full" Used space of the volume vs volume usable data space has crossed nearly full threshold specified in the policy
- "volume-used-vs-size-above-full" Used space of the volume vs volume usable data space has crossed full threshold specified in the policy
- "volume-used-vs-maximum-size-above-nearly-full" Used space of the volume vs volume usable maximum data space has crossed nearly full threshold specified in the policy
- "volume-used-vs-maximum-size-above-full" Used space of the volume vs volume usable maximum data space has crossed full threshold specified in the policy
- "qtree-used-vs-quota-above-nearly-full" Qtree used space vs qtree hard limit has crossed the nearly full threshold specified in the policy
- "qtree-used-vs-quota-above-full" Qtree used space vs qtree hard limit has crossed the full threshold specified in the policy
- "volume-snapshot-used-above-full-threshold" Snapshot used space in the volume has crossed volSnapshotFullThreshold specified in Operations Manager.
- "volume-no-next-snapshot" Next snapshot is not possible in the volume due to space constraints
- reason => string, optional
Reason for this space condition. This can be a maximum of 1024 characters.
Specifies the total, available and used space in a dataset member, the space status and various space related conditions of the member.Fields
- space-status => space-status, optional
Space status of the member.
- total-space => integer, optional
Specifies the total capacity of the dataset member, in bytes. Range: [1..2^63-1]
Dataset node's space usage measurements.Fields
- guaranteed-space => integer
Physical space allocated for data and snapshot in bytes. This is the space used by the dataset node from the aggregate. Range: [0..2^63-1]
- total-space => integer, optional
Total space allocated for data and snapshot in bytes. This element will be present only for dataset's primary node. Range: [0..2^63-1]
Worst space status of a dataset or a member. Possible values are: 'unknown', 'ok', 'warning', 'error'Fields
Details of the undedupe member request.Fields
- volume-id => obj-id, optional
Identifier of a volume member of a dataset.
- volume-name => obj-name, optional
Name of a volume member of the dataset. Either volume-id or volume-name has to be specified. If volume-id is specified then volume-name is ignored.
Statistics about a single server APIFields
- cpu-max-time => integer
Maximum CPU time spent during the API execution. Range: [0..2^63-1]. Units: wall-clock milliseconds.
- cpu-min-time => integer
Minimum CPU time spent during the API execution. Range: [0..2^63-1]. Units: wall-clock milliseconds.
- db-max-time => integer
Maximum database time spent during the API execution. Range: [0..2^63-1]. Units: wall-clock milliseconds.
- db-mean-time => integer
Mean database time spent during the API execution. Range: [0..2^63-1]. Units: wall-clock milliseconds.
- db-min-time => integer
Minimum database time spent during the API execution. Range: [0..2^63-1]. Units: wall-clock milliseconds.
- max-time => integer
Maximum execution time for the API. Range: [0..2^63-1]. Units: wall-clock milliseconds.
- mean-time => integer
Mean execution time for the API. Range: [0..2^63-1]. Units: wall-clock milliseconds.
- min-time => integer
Minimum execution time for the API. Range: [0..2^63-1]. Units: wall-clock milliseconds.
Type of DFM database backup. Possible values are 'archive' and 'snapshot'. The 'archive' type backups are used when the data resides on a conventional disk. The 'snapshot' type backups are used for data residing on a LUN.Fields
Count of children of one type for the specified object. An element of this type is returned if count is 1 or more.Fields
- type => string
Object type of the child object. Valid values are: 'vfiler', 'aggregate', 'volume', 'qtree', 'lun' and 'disk'.
Descriptive diagnostic information about a counter group.Fields
- group-name => string
The name of the counter group.
- host-name => string
The name of the monitored host.
The timestamp of last monitoring done for a monitor on various objects of DFM.Fields
- monitor-name => monitor-name
Name of the DFM monitor. Applicable monitors for Host Agent are - "agent" - refreshes agent information
- "ping" - refreshes ping status
- "san" - refreshes SAN information
- "sysinfo" - refreshes system information
- "srm" - refreshes SRM information
Applicable monitors for OSSV Host are - "agent" - refreshes agent information
- "snapmirror" - refreshes SnapMirror information
- "snapvault" - refreshes SnapVault information
- "sysinfo" - refreshes system information
Applicable monitors for vFiler unit are - "config_conformance" - checks a storage system's configuration for conformance
- "cpu" - refreshes cpu related information
- "disk_free_space" - refreshes disk free space
- "file_system" - refreshes file system information
- "lun" - collects lun information
- "ping" - refreshes ping status
- "qtree" - refreshes qtree data using snmp
- "qtree_xml" - refreshes qtree data using zapi
- "rbac" - refreshes RBAC information
- "snapmirror" - refreshes SnapMirror information
- "snapshot" - refreshes snapshot information
- "snapvault" - refreshes SnapVault information
- "sysinfo" - refreshes system information
- "vfiler" - refreshes vfiler information
- "userquota" - refreshes user quota information
Applicable monitors for Data ONTAP 7-Mode storage system are - "config_conformance" - checks a storage system's configuration for conformance
- "cpu" - refreshes cpu related information
- "cluster_failover" - refreshes cluster fail over information
- "disk_free_space" - refreshes disk free space of a host
- "disk_status" - refreshes disk status of a Host
- "env" - refreshes environmentals of a Host
- "fibre_channel" - refreshes fibre channel information
- "file_system" - refreshes file system information of Host
- "interface_status" - refreshes network related information of a Host
- "license" - collects license information from Hosts
- "lun" - collects lun information from Host
- "ops" - refreshes file system operation count of a Host
- "ping" - refreshes ping status of a Host
- "qtree" - refreshes qtree data using snmp
- "qtree_xml" - refreshes qtree data using zapi
- "rbac" - refreshes RBAC information
- "snapmirror" - refreshes SnapMirror information
- "snapshot" - refreshes snapshot information
- "snapvault" - refreshes SnapVault information
- "status" - refreshes global status of a Host
- "sysinfo" - refreshes system information of a Host
- "vfiler" - refreshes vfiler information of a hosting storage system
- "userquota" - refreshes user quota information of a host
Applicable monitors for Data ONTAP Cluster-Mode storage system are - "cpu" - refreshes cpu related information
- "cluster" - refreshes cluster information
- "disk_free_space" - refreshes disk free space of a host
- "env" - refreshes environmentals of a Host
- "file_system" - refreshes file system information of Host
- "interface_status" - refreshes network related information of a Host
- "license" - collects license information from Hosts
- "ping" - refreshes ping status of a Host
- "snapshot" - refreshes snapshot information
- "sysinfo" - refreshes system information of a Host
- "vserver" - refreshes vserver information of a Host
Lists the information about DFM directories. These directories include Performance Advisor data directory, Database Backup directory, Data Export directory and Reports Archival directory.Fields
- disk-free-space => integer, optional
Free space for the file system containing the directory (in bytes). This field is returned only when include-directory-size-info is true in input. Range: [0..2^63-1].
- file-count => integer, optional
Number of files in the directory. This field is returned only for the Performance Data directory. Range: [0..2^31-1]. This field is not returned if the file count of the directory is unknown.
- name => string
Name of the directory. For example, "opt/NTAPdfm/perfdata".
- size => integer, optional
Size of the directory in bytes. Range: [0..2^63-1]. This field is not returned if the size is unknown.
- type => string
Type of the DFM directory. Possible values: "install", "performance_data", "database_backup", "reports_archive" and "data_export".
A list of email addresses. If more than one email address is specified, the addresses must be separated by a ','. Spaces (and other white space) are not allowed, either between addresses or in them. Unprintable characters are also invalid. The list may contain up to 255 characters.Fields
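These constraints can be sketched as a client-side check (a hypothetical helper, not an SDK function; it assumes at least one address is required):

```perl
use strict;
use warnings;

# Hypothetical check of the documented email-list constraints:
# comma-separated, no whitespace, no unprintable characters,
# at most 255 characters in total.
sub is_valid_email_list {
    my ($list) = @_;
    return 0 if !defined $list || length($list) == 0;  # assume non-empty
    return 0 if length($list) > 255;        # overall length limit
    return 0 if $list =~ /\s/;              # no spaces or other whitespace
    return 0 if $list =~ /[^[:print:]]/;    # no unprintable characters
    return 0 if $list =~ /^,|,,|,$/;        # commas must separate addresses
    return 1;
}
```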
Information about a licensed feature.Fields
- name => string
name of the feature
- summary => string
Information about the license for the feature such as status, type and expiration date.
Block type of the file system. The volumes on both the source and destination sides of a SnapMirror relationship must be of the same block type. Volumes contained in a larger parent aggregate may have a block-type of 64_bit. For upgraded systems, it is possible that this value is unknown until the system can determine the block type. Possible values are: Fields
Identifier generated by the host service that is unique within the host service.Fields
IP address in string format. The length of this string cannot be more than 40 characters.Fields
The key/value for a generic object attribute.Fields
- value => string
Value of the generic object attribute
Specifies name of a monitor to be scheduled to run. Possible values are as follows: - "agent" - refreshes agent information
- "cache" - refreshes net cache information
- "cluster_failover" - refreshes cluster fail over information
- "config_conformance" - checks a storage system's configuration for conformance
- "connectivity" - checks connectivity of a given Host
- "cpu" - refreshes cpu related information
- "disk_free_space" - refreshes disk free space of a Host
- "disk_status" - refreshes disk status of a Host
- "env" - refreshes environmentals of a Host
- "fibre_channel" - refreshes fibre channel information
- "file_system" - refreshes file system information of Host
- "interface_status" - refreshes network related information of a Host
- "license" - collects license information from Hosts
- "lun" - collects lun information from Host
- "ndmp" - refreshes ndmp ping timestamp
- "ndmp_credentials" - refreshes ndmp credentials timestamp
- "ops" - refreshes file system operation count of a Host
- "ping" - refreshes ping status of a Host
- "qtree" - refreshes qtree data using snmp
- "qtree_xml" - refreshes qtree data using zapi
- "san" - refreshes SAN information of a Host Agent
- "share" - refreshes shares information of a Host
- "snapmirror" - refreshes snap mirror information
- "snapshot" - refreshes snap shot information
- "snapvault" - refreshes snap vault information
- "snapvault_dir" - refreshes directories on OSSV Hosts; monitor applicable only for OSSV Hosts
- "srm" - refreshes SRM information of a Host Agent
- "status" - refreshes global status of a Host
- "sysinfo" - refreshes system information of a Host
- "userquota" - refreshes user quota information of a host
- "vfiler" - refreshes vfiler information of a hosting storage system
Fields
IP address of the network or hostFields
SNMP credential information of a network or host.Fields
- host-name => string, optional
Fully qualified name of the host. This will be returned only for hosts, not for networks.
- is-snmp-privacy-enabled => boolean
This will be returned only if the snmp-version is snmp_v3. It indicates whether the privacy-password was set when adding SNMP credentials for the network.
- prefix-length => integer
It represents the prefix length. This should be 32 for IPv4 and 128 for IPv6 if the network-address is a host address.
- snmp-id => snmp-id
A unique identifier representing the SNMP credential setting in DFM. This ID was generated when the SNMP setting was added.
Full name of a DFM object. This typedef is an alias for the builtin ZAPI type string. An object full name conforms to all the rules of an obj-name, except that the full name may be up to 255 characters long. DFM creates full names by concatenating an object name with any parent object names, so as to create a unique name for an object. The format of full names is as follows:
- Host full names are either the fully-qualified domain name or the IP address of the host.
- Aggregate full names are the host name and the aggregate name, separated by a colon, e.g. hostname:aggr0.
- Volume full names are the host name and the volume name, separated by ":/", e.g. hostname:/volume. Note this does not include "/vol". Volume and aggregate full names are distinguished by the presence of a forward slash after the colon.
- Qtree full names are the containing volume full name and the qtree name, separated by a slash, e.g. hostname:/volume/qtree. The data not contained by any qtree may be represented by "-", e.g. hostname:/volume/-.
- Lun Path full names are either a volume or qtree full name and the LUN path, separated by a slash, e.g. hostname:/volume/LUN or hostname:/volume/qtree/LUN.
- Network full names are a network address block in CIDR format, e.g. 1.2.3.0/8.
- OSSV Directory full names are the OSSV host name and the OSSV path, separated by a colon, e.g. host-lnx:/usr/local or host-w2k:c:/temp
- Initiator Group full names are host name and the initiator group name, separated by a colon, e.g. hostname:igroup.
For any DFM object not listed above, the obj-name and obj-full-name are identical.
Fields
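These conventions can be recognized mechanically. The following sketch classifies a full name by its separators (a hypothetical helper; LUN paths and OSSV directories share shapes with qtrees and volumes, so syntax alone cannot fully disambiguate them):

```perl
use strict;
use warnings;

# Hypothetical classifier based on the separator rules above. Note
# that hostname:/vol/x may be a qtree or a volume-level LUN, and
# host-w2k:c:/temp style OSSV paths are not handled here.
sub classify_full_name {
    my ($full) = @_;
    return 'network'      if $full =~ m{^\d+\.\d+\.\d+\.\d+/\d+$};  # CIDR block
    return 'qtree-or-lun' if $full =~ m{^[^:]+:/[^/]+/[^/]+$};      # host:/vol/x
    return 'volume'       if $full =~ m{^[^:]+:/[^/]+$};            # host:/vol
    return 'aggregate'    if $full =~ m{^[^:]+:[^/:]+$};            # host:aggr
    return 'host-or-other';
}
```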
Identification number (ID) for a DFM object. This typedef is an alias for the builtin ZAPI type integer. Object IDs are unsigned integers in the range [1..2^31 - 1]. In some contexts, an object ID is also allowed to be 0, which is interpreted as a null value, e.g., a reference to no object at all. The ID for a DFM object is always assigned by the system; the user is never allowed to assign an ID to an object. Therefore, an input element of type obj-id is always used to refer to an existing object by its ID. The ZAPI must specify the object's DFM object type (e.g. dataset, host, DP policy, etc.). Some ZAPIs allow the object to be one of several different types.
If the value of an obj-id input element does not match the ID of any existing DFM object of the specified type or types, then typically the ZAPI fails with error code EOBJECTNOTFOUND. A ZAPI may deviate from this general rule, for example, it may return a more specific error code. In either case, the ZAPI specification must document its behavior.
Fields
Name of a DFM object. This typedef is an alias for the built in ZAPI type string. An object name must conform to the following format: - It must contain between 1 and 64 characters.
- It may start with any character and may contain any combination of characters, except that it may not consist solely of decimal digits ('0' through '9').
- In some contexts, a name may be the empty string (""), which is interpreted as a null value, e.g., a reference to no object at all.
The behavior of a ZAPI when it encounters an error involving an obj-name input element depends on how the ZAPI uses the input element. Here are the general rules: - If the input name element is used to create a new object with the given name, or rename an existing object to that name, and the name does not conform to the above format, then the ZAPI fails with error code EINVALIDINPUTERROR. Note that because EINVALIDINPUTERROR is such a common error code, ZAPI specifications are not required to document cases when they may return it.
- If the input name element is used to refer to an existing object with that name, and there is no object with that name, then the ZAPI fails with error code EOBJECTNOTFOUND. Generally the ZAPI specification documents cases when it may return this error code.
A ZAPI may deviate from these general rules, for example, it may return more specific error codes. In such cases, the ZAPI specification must document its behavior. If an input name element is used to refer to an existing object, then the ZAPI specification must specify which DFM object type (e.g. data set, host, DP policy, etc.) is allowed. Some ZAPIs allow the object to be one of several different types. See the description of obj-full-name for examples of valid input formats.
Note that there is no requirement that all object names must be unique. However, the names for some specific types of objects are constrained such that no two objects of that type may have the same name. For example, this constraint applies to datasets, DP schedules, and DP policies. This means that no two datasets may have the same name, but a dataset may have the same name as a DP schedule or DP policy.
In general, object names are compared in a case-insensitive manner. This means that, for example, "MyObject" and "MYOBJECT" are considered to be the same name for purposes of: creating new objects, renaming existing objects, or looking up an object by name. On the other hand, ZAPIs that return an obj-name generally do not change the capitalization at all. For example, if an object's name has been set to "MyObject", then list iteration ZAPIs that return the object's name return it as "MyObject" rather than "MYOBJECT" or "myobject".
ZAPIs that operate on obj-name values and do not follow these general rules about case sensitivity must document the rules that they do follow.
One important exception to these general rules is that volumes, qtrees, OSSV directories, SRM paths, interfaces, FCP targets and FC switch ports all have case-sensitive names. When looking up objects of these types by name, the case must match the object name.
Fields
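The comparison rules above can be sketched as follows (the type tokens in the hash are hypothetical labels chosen for this example, not official DFM type names):

```perl
use strict;
use warnings;

# Hypothetical name comparison following the rules above: most names
# compare case-insensitively; a few object types are case-sensitive.
my %case_sensitive = map { $_ => 1 }
    qw(volume qtree ossv-directory srm-path interface
       fcp-target fc-switch-port);

sub names_match {
    my ($type, $x, $y) = @_;
    return $case_sensitive{$type}
        ? $x eq $y            # case-sensitive types: exact match
        : lc($x) eq lc($y);   # everything else: case-insensitive
}
```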
Name or internal ID of a DFM object. This typedef is an alias for the builtin ZAPI type string. An obj-name-or-id must contain between 1 and 64 characters, and must conform to one of the following formats: - It must have the format of an obj-name, or
- It must be the decimal numeric string form of a positive integer whose value is in the range [1..2^31 - 1].
- In the case of application resources from the Host Service, this field can contain the unique identifier assigned to the object by the Host Service, e.g. for a Virtual Machine, it can be the GUID of the VM. One exception is when such a unique identifier is a decimal numeric string containing only digits 0 through 9; in that case, you cannot use that identifier as obj-name-or-id input.
Elements of type obj-name-or-id are used only as inputs to ZAPIs. The value must match either the name or internal ID of an existing DFM object. The ZAPI must specify the object's DFM object type (e.g. dataset, host, DP policy, etc.). Some ZAPIs allow the object to be one of several different types. If the format of an obj-name-or-id input element does not conform, or the value does not match the name or ID of an existing object, then generally the ZAPI documents that it fails with error code EOBJECTNOTFOUND. A ZAPI may return more specific error codes. In such cases, the ZAPI specification must document its behavior.
If a ZAPI can accept a null value (e.g. reference to no object at all) for such an element, then the element is declared optional, and the absence of the input element represents a null value.
Fields
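A sketch of this interpretation rule (a hypothetical helper for illustration only):

```perl
use strict;
use warnings;

# Hypothetical: decide whether an obj-name-or-id value would be
# treated as an object ID or as a name, per the rules above.
sub name_or_id_kind {
    my ($value) = @_;
    return 'invalid' if length($value) < 1 || length($value) > 64;
    if ($value =~ /^\d+$/) {
        # Purely numeric strings are taken as IDs in [1..2^31 - 1].
        return ($value >= 1 && $value <= 2**31 - 1) ? 'id' : 'invalid';
    }
    return 'name';
}
```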
A status value which can be associated with a DFM object. This typedef is an alias for the builtin ZAPI type string. The severity associated with an event has this type. Possible values are: 'unknown', 'normal', 'information', 'unmanaged', 'warning', 'error', 'critical', 'emergency'.
- unknown: An object has an unknown status when it transitions from one state to another. Ideally, an object will have this status briefly. For example, when an object has been added, but not yet discovered.
- normal: An object has normal status when it is working within the thresholds specified in DFM.
- information: The information events are normal occurrences on an object for which you can define alarms.
- unmanaged: An object is considered to be unmanaged when the login and password are not set for the storage system or agent.
- warning: An object has the warning status when an event related to the object occurred that an administrator should know about. The event will not cause service disruption.
- error: An object has error status when it does not cause any service disruption, but it may affect performance.
- critical: An object has critical status when it is still performing, but service disruption may occur if corrective action is not taken immediately.
- emergency: An object is in emergency status when it stops performing unexpectedly and could lose data.
In some contexts, it is important that severities are ordered (as above). For example, an alarm might be triggered if an event with a given severity "or worse" occurs. In this example, worse means "after" in the list above.Fields
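The ordering can be sketched as follows (a hypothetical helper illustrating the "or worse" comparison, not an SDK function):

```perl
use strict;
use warnings;

# The documented severity order, from best to worst; "worse" means
# later in this list.
my @severity_order = qw(unknown normal information unmanaged
                        warning error critical emergency);
my %rank;
@rank{@severity_order} = 0 .. $#severity_order;

# True if $severity is at least as bad as $threshold, e.g. for
# deciding whether an alarm with a severity threshold should fire.
sub at_least_as_bad {
    my ($severity, $threshold) = @_;
    return $rank{$severity} >= $rank{$threshold};
}
```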
Status of the DFM object.Fields
- object-name-or-id => obj-name-or-id
Identifier of the dfm object. This corresponds to an entry in the input "objects" array.
- object-status => obj-status
Status for the object, based on all events. If the object is not present or is ambiguous, the status is set to "Unknown".
Information about one dataset. A parent dataset is returned if the authenticated user has DFM.Database.Read privilege on the dataset.Fields
- name => obj-name
Name of the dataset
Information about one resource group. A parent group is returned if the user has DFM.Database.Read privilege on the group.Fields
- id => obj-id
Identifier of the group
- name => obj-name
Name of the group
Information about one parent object. A parent object is returned if the authenticated user has DFM.Database.Read privilege on that object.Fields
- full-name => obj-full-name
Full name of the parent
- id => obj-id
Identifier of the parent
- name => obj-name
Name of the parent
- type => string
Object type of the parent object. Valid values are: 'filer', 'vfiler', 'aggregate', 'volume' and 'qtree'.
Information about one resource pool. A parent resource pool is returned if the authenticated user has DFM.Database.Read privilege on that resource pool.Fields
- id => obj-id
Identifier of the resource pool
- name => obj-name
Name of the resource pool
Installed plugins.Fields
- plugin-type => string, optional
Type of the plugin. The possible types are 'filer-config' for storage system plugins, 'netcache-config' for NetCache plugins, and 'unknown' for plugins of unknown type.
Prefix length of the network or hostFields
Provides information on resource properties and the possible valuesFields
- values => string
A comma-separated list of possible values for the resource properties. Maximum length of 2048 characters.
It represents the protocol used for authorization in snmp version 3. Possible values are: Fields
nullFields
Represents snmp version that will be used for discovering network. Possible values are: Fields
The attributes of a daily schedule. May only be used in a daily schedule.Fields
- item-id => integer, optional
ID of the daily item within the schedule. Ignored for dfm-schedule-create/modify; required for all other calls. Will be present in the output. Range: [1..(2^31)-1]
The attributes of an hourly schedule. May only be used in a daily schedule.Fields
- item-id => integer, optional
ID of the hourly item within the schedule. Ignored for dfm-schedule-create/modify; required for all other calls. Will be present in the output. Range: [1..(2^31)-1]
- start-hour => integer
Start hour of the hourly schedule. Range: [0..23]
- start-minute => integer
Start minute of the hourly schedule. Range: [0..59]
The attributes of a monthly schedule. May only be used in a monthly schedule.Fields
- day-of-month => integer, optional
If day-of-month is 29, 30, or 31, it will be interpreted as the last day of the month for months with fewer than that many days. If day-of-month is set, then both week-of-month and day-of-week must not be set. Range: [1..31]
- day-of-week => integer, optional
Day of week for the schedule. If day-of-week is set, then week-of-month must also be set and day-of-month must not be set. Range: [0..6] (0 = "Sun")
- item-id => integer, optional
ID of the monthly item within the schedule. Ignored for dfm-schedule-create/modify; required for all other calls. Will be present in the output. Range: [1..(2^31)-1]
- start-hour => integer
Start hour of the monthly schedule. Range: [0..23]
- start-minute => integer
Start minute of the monthly schedule. Range: [0..59]
- week-of-month => integer, optional
A value of 5 indicates the last week of the month. If week-of-month is set, then day-of-week must also be set and day-of-month must not be set. Range: [1..5]
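The mutual-exclusion rules among day-of-month, week-of-month, and day-of-week can be sketched as a client-side check (a hypothetical helper; the server performs its own validation):

```perl
use strict;
use warnings;

# Hypothetical validation of a monthly schedule item: day-of-month is
# mutually exclusive with the week-of-month/day-of-week pair, and
# week-of-month and day-of-week must be set together.
sub validate_monthly_item {
    my %arg = @_;
    if (defined $arg{'day-of-month'}) {
        return 'day-of-month excludes week-of-month and day-of-week'
            if defined $arg{'week-of-month'} || defined $arg{'day-of-week'};
        return 'day-of-month out of range'
            if $arg{'day-of-month'} < 1 || $arg{'day-of-month'} > 31;
    }
    elsif (defined $arg{'week-of-month'} || defined $arg{'day-of-week'}) {
        return 'week-of-month and day-of-week must be set together'
            unless defined $arg{'week-of-month'} && defined $arg{'day-of-week'};
    }
    return 'ok';
}
```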
The attributes of a monthly subschedule. May only be used in a monthly schedule.Fields
- subschedule-name => obj-name, optional
Name of the subschedule to be used. Ignored for dfm-schedule-create/modify; required for all other calls. Will be present in the output.
Description of a DFM object using the schedule.Fields
- assignee-type => string
Type of DFM object. Possible values are: 'dp_policy', 'dp_schedule', 'dfm_schedule' and 'report_schedule'.
Detailed schedule contents. A schedule can be a daily, weekly, or monthly schedule. A daily schedule may include multiple recurring hourly schedules (hourly-list element) and/or individual non-recurring daily schedules (daily-list element). A weekly schedule may include multiple references to daily schedules to be run on specific days of the week (weekly-subschedule-list element) and/or individual non-recurring weekly schedules (weekly-list element). A monthly schedule may include multiple non-recurring monthly schedules (monthly-list element). In addition, a monthly schedule may include a single reference to either a daily schedule or a monthly schedule, but not both. A user may specify only one monthly-subschedule-info element in the monthly-subschedule-list.Fields
- is-modifiable => boolean, optional
If false, this schedule is one of the sample schedules that is created at installation time, therefore it may not be modified, renamed or destroyed. If true, it is not one of the sample schedules, therefore it may be modified, renamed or destroyed. is-modifiable always appears in the output. It is not possible to use it as input.
- schedule-category => string, optional
Specifies the category of the schedule. Possible values are: 'dfm_schedule', 'dp_schedule'. The default value is 'dp_schedule'.
- schedule-description => string, optional
Description of the schedule. It may contain from 0 to 255 characters. The description always appears in the output. If the description is omitted, then the default value is the empty string "".
- schedule-type => string
Type of schedule. Possible values are: 'daily', 'weekly', 'monthly'. Note that the type cannot be changed once the schedule is created. The user has to delete the schedule before creating a new schedule using the same name with a different type.
The attributes of an ID list.
Fields
- id => obj-id
ID of the schedule. Range: [1..(2^31)-1]
- name => obj-name
Name of the schedule. May not be numeric.
- type => string
Type of schedule. Possible values are: daily, weekly, monthly
The attributes of a weekly schedule. May only be used in a weekly schedule.
Fields
- item-id => integer, optional
ID of the weekly item within the schedule. Ignored for dfm-schedule-create/modify; required for all other calls. Will be present in the output. Range: [1..(2^31)-1]
- start-hour => integer
Start hour of the weekly schedule. Range: [0..23]
- start-minute => integer
Start minute of the weekly schedule. Range: [0..59]
The attributes of a weekly subschedule. May only be used in a weekly schedule.
Fields
- end-day-of-week => integer
End day to be applied to the schedule being used. Range: [0..6] (0 = "Sun")
- item-id => integer, optional
ID of the use type within the schedule. Ignored for dfm-schedule-create/modify; required for all other calls. Will be present in the output. Range: [1..(2^31)-1]
- start-day-of-week => integer
Start day to be applied to the schedule being used. Range: [0..6] (0 = "Sun")
- subschedule-id => obj-id
ID of the subschedule to be used. Range: [1..(2^31)-1]
- subschedule-name => obj-name, optional
Name of the subschedule to be used. Ignored for dfm-schedule-create/modify; required for all other calls. Will be present in the output. Range: [1..(2^31)-1]
Information about a disk.
Fields
- aggregate-id => obj-id, optional
Identifier of the aggregate to which the disk belongs. When the aggregate to which the disk belongs is not known, or the disk is a spare disk, aggregate-id will not be returned.
- aggregate-name => obj-full-name, optional
Name of the aggregate to which the disk belongs. When the aggregate to which the disk belongs is not known, or the disk is a spare disk, aggregate-name will not be returned. The name is any simple name such as myaggr.
- disk-name => string
Name of the disk. Always present in the output. The name will look like "data disk 0b.18", "parity disk 0b.17", "dparity disk 0b.16" etc. Maximum length of 64 characters.
- disk-uid-or-wwn => string, optional
Identifier of the disk. For hosts running Data ONTAP versions prior to 7.0.1, this will be the World Wide Name (WWN) of the disk. For hosts running Data ONTAP versions 7.0.1 and later, this will be the Unique Identifier (UID) of the disk. When the UID or WWN of a disk is not known, this field will not be returned. Maximum length of 90 characters. Format of disk WWN will look like: 20:00:00:0c:50:45:7d:bc Format of disk UID will look like: 2000000C:50A9022F:00000000:00000000:00000000:00000000: 00000000:00000000:00000000:00000000
- host-id => obj-id
Identifier of host to which the disk belongs. Always present in the output.
- host-name => obj-name
Name of host to which the disk belongs. Always present in the output. The name is any simple name such as myhost.
- plex-name => string, optional
Name of the plex to which the disk belongs. The name is any simple name such as plex0. When the plex to which the disk belongs is not known, or the disk is a spare disk, plex-name will not be returned. Maximum length of 64 characters.
- raidgroup-name => string, optional
Name of the raidgroup to which the disk belongs. The name is any simple name such as rg0. When the raidgroup to which the disk belongs is not known, or the disk is a spare disk, raidgroup-name will not be returned. Maximum length of 64 characters.
Contains all the information about the application objects in the backup.
Fields
- application-resource-namespace => application-resource-namespace
Name of the application resource namespace. The application resources contained in this backup belong to this namespace.
- host-service-id => obj-id
Object ID of the host service, where the backup resides.
Describes the application object type for a resource. This string is the type id of the application object that comes from the plug-in. It must contain between 1 and 255 characters.
Fields
An edge in the backup graph. Edges are used to traverse from one resource to another in the backup graph, to extract more specific information.
Fields
Contains the information about the various resources, and the edges to and from these resources, for a backup. The application objects are stored in the resource-graph in the form of resources and edges connecting these resources.
Fields
A resource is an application object.
Fields
- is-restorable => boolean
Indicates whether a restore can be done at the level of this object. The object may have been backed up fully, but a restore may still not be possible at this object level; you may have to restore one or more of its children. This value may differ for a replica of the primary backup if the backup was only partially transferred to the secondary.
- object-id => obj-id
DFM Object ID of the resource.
- object-name => obj-name
The application object name might have changed. This string represents the current name of the resource, which might be different from the "object-name-when-backed-up".
Location information for a backup. Only the members that are found are returned.
Fields
- backup-id => integer, optional
Actual backup instance used to restore. Range: [1..2^31-1]
- member-id => obj-id
ID of the qtree or OSSV directory. member-id is for the primary qtree or OSSV dir.
- member-name => obj-name
Name of the qtree or OSSV directory. member-name is for the primary qtree or OSSV dir.
- path => string, optional
Name of the path. The maximum path length is 32767 characters. This path is relative to 'member-id' path in the storage system or OSSV host. For storage system and ossv unix hosts this is a unix-like path (ex: '/dir1/dir2'). For ossv windows host this is a windows path (ex: '\dir1\dir2').
- path-found => boolean
TRUE if the path is found in a backup-version. The following elements will be set only if path-found is TRUE.
- primary-snapshot-name => string, optional
Name of the Snapshot copy on the primary where the data originated.
- primary-snapshot-unique-id => string, optional
Unique id of the Snapshot copy on the primary where the data originated. Currently, this is the Snapshot copy's creation time.
- volume-id => obj-id, optional
Identifier of the volume on which the snapshot resides.
Indicates the mount information of the backup.
Fields
- host-id => obj-id
Identifier of host on which the backup is mounted.
- host-name => obj-name
Name of the host on which the backup is mounted.
- mount-id => integer
Identifier of the mount session that mounted the backup. Range: [1..2^31-1]
- mount-job-id => integer, optional
Identifier of the job that is mounting or has mounted the backup. The job id will not be available if the mount state has changed to "mounting" but the job has not started yet. The job id will be available for "mounted" backups. Range: [1..2^31-1]
- unmount-job-id => integer, optional
Identifier of the job that is unmounting the backup. This element will be present only if the backup is in the 'unmounting' state. Range: [1..2^31-1]
Named field in the metadata.
Fields
- field-name => string
Name of the metadata field. Field names are up to 255 characters in length and are case-insensitive.
- field-value => string
Arbitrary, user-defined data expressed as a string. The string is opaque to the server and must not exceed 16384 (16k) characters in length.
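A backup-metadata value is simply a list of such dfm-metadata name/value pairs. The sketch below builds one as a Perl array of hashes; the field names come from this reference, while the keys shown ('app-name', 'app-version') are hypothetical application-defined data, and metadata_value is an illustrative helper rather than an SDK call.

```perl
use strict;
use warnings;

# backup-metadata as an array of dfm-metadata entries, each a
# field-name/field-value pair. DFM treats the contents as opaque.
my @backup_metadata = (
    { 'field-name' => 'app-name',    'field-value' => 'payroll-db' },
    { 'field-name' => 'app-version', 'field-value' => '2.1' },
);

# Field names are documented as case-insensitive, so normalize on lookup.
sub metadata_value {
    my ($meta, $name) = @_;
    for my $entry (@$meta) {
        return $entry->{'field-value'} if lc $entry->{'field-name'} eq lc $name;
    }
    return undef;    # field not present
}

print metadata_value(\@backup_metadata, 'App-Name'), "\n";
```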
Information about the contents of a backup of a dataset.
Fields
- primary-snapshot-name => string, optional
Name of the Snapshot copy on the primary where the data originated.
- primary-snapshot-unique-id => string, optional
Unique id of the Snapshot copy on the primary where the data originated. Currently, this is the Snapshot copy's creation time. This can be used to differentiate between versions of the same file in the same backup.
- root-object-id => obj-id, optional
Id of a dataset member that is at the root of the file tree. This may be a qtree, OSSV directory, or volume that is the source of the physical data protection relationship.
Backup-version instance information including id and node name for this version.
Fields
- backup-id => integer
Identifier of the backup instance. The management station assigns a unique id to each backup instance. Range: [1..2^31-1]
- backup-metadata => dfm-metadata[], optional
Opaque metadata for the backup. Metadata is usually set and interpreted by an application that is using the dataset. DFM does not look into the contents of the metadata.
- job-id => integer
Identifier for the job creating this backup.
- node-name => string, optional
Name of policy node that corresponds to the storage set that holds backup. May not be present if a different policy has been associated with this dataset, and the storage set containing this backup has no corresponding node in the current policy. It may be possible to restore contents from such orphaned backups if the physical relationships continue to exist.
A backup is a single backed-up image of a dataset. If a backup is too large to fit on a single volume, the management station uses multiple volumes in the same storageset. In such cases, a single backup may span multiple volumes in a single storageset. The management station keeps track of the actual volumes that hold the backup. The caller can identify a backup by its numeric identifier.
Fields
- backup-description => string, optional
User specified description of backup for unscheduled backup started by dp-backup-start. The maximum length of this string is 255 characters.
- backup-id => integer
Identifier of the backup instance. The management station assigns a unique id to each backup instance. Range: [1..2^31-1]
- backup-metadata => dfm-metadata[], optional
Opaque metadata for the backup. Metadata is usually set and interpreted by an application that is using the dataset. DFM does not look into the contents of the metadata.
- backup-transfer-status => string, optional
This field has been deprecated, use backup-transfer-infos instead. Indicates the transfer status of this backup. This field is meant for internal use. It is set by backup transfer job only for backups on the Primary node of an application dataset. Possible values: 'transferring', 'transferred', 'transfer_failed', and ''.
- backup-version => dp-timestamp
Timestamp when the backup was taken. Backups of same dataset at different locations have same version if their contents are identical. The management station keeps track of which backups have identical contents and assigns same version to them. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC.
- dataset-name => obj-name, optional
Name of dataset that the backup was for. Always present in the output, but ignored in the input.
- is-adding-members => boolean
Indicates whether more members are being added to the version-members element of this backup. If this element is true, Protection Manager expects more members to be added to the version-members element of this backup. The job that is transferring this backup, periodically checks to see if the new members have been added and starts transferring them. The transfer job does not exit until this input is set to false by calling dp-backup-version-modify API (or until the timeout occurs).
If this element is false, the job exits after transferring any members that it could find in this backup. Any new members that got added to the backup will be transferred by the next job.
This element can be used when creating a local backup potentially takes a very long time and you want the Protection Manager to start the transfers without waiting for the local backup to complete.
- is-backup-destination-node => boolean, optional
This will be true if the node has an incoming backup connection, false otherwise. This value may not be present if policy has been modified in such a way that storageset has no corresponding node in policy. If the node is the primary node of the dataset, this value will be false.
- is-for-propagation => boolean, optional
Indicates whether or not this backup version should be propagated according to the data protection policy. If false, the backup version will not be propagated to other nodes. Default value is true.
- is-mirror-destination-node => boolean, optional
This will be true if the node has an incoming mirror connection, false otherwise. This element may not be present if policy has been modified in such a way that storageset has no corresponding node in policy. If the node is the primary node of the dataset, this value will be false.
- node-id => integer, optional
Id of policy node that corresponds to the storageset that holds backup. May not be present if policy has been modified in such a way that storageset has no corresponding node in policy. It may be possible to restore contents from such orphaned backup if the physical relationships continue to exist. The node-id values start at 1. The node id of the primary node is always 1.
- node-name => string, optional
Name of policy node that corresponds to the storageset that holds backup. May not be present if policy has been modified in such a way that storageset has no corresponding node in policy. It may be possible to restore contents from such orphaned backup if the physical relationships continue to exist.
- retention-duration => integer, optional
The age, in seconds, after which this backup expires. This value is relative to the backup version timestamp. If retention-duration is not present, the backup does not expire. Range: [0..2^31 - 1].
- retention-type => dp-backup-retention-type
Type of retention for the backup.
Information about a backup path.
Fields
- member-id => obj-id, optional
The qtree or OSSV dir ID. Either member-id or member-name must be supplied. If member-id is supplied member-name should not be supplied.
- member-name => obj-name, optional
The qtree or OSSV dir name. Either member-id or member-name must be supplied. member-name is only used if member-id is not supplied.
- path => string, optional
Name of the path. The maximum path length is 32767 characters. This path is relative to 'member-id' path in the storage system or OSSV host. For storage system and ossv unix hosts this is a unix-like path (ex: '/dir1/dir2'). For ossv windows host this is a windows path (ex: '\dir1\dir2').
- primary-snapshot-name => string, optional
Name of the Snapshot copy on the primary where the data originated. This is ignored if primary-snapshot-unique-id is specified.
- primary-snapshot-unique-id => string, optional
Unique id of the Snapshot copy on the primary where the data originated. Currently, this is the Snapshot copy's creation time. This should be specified when member exists in more than one Snapshot copies in the same backup.
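The member-id/member-name rule above (exactly one must be supplied) can be captured in a small input check. The helper below is illustrative only, not an SDK function; it validates a dp-backup-path-info-style hash before it would be handed to a binding.

```perl
use strict;
use warnings;

# Illustrative check (not an SDK call) for a dp-backup-path-info input:
# exactly one of member-id or member-name must be supplied, never both,
# never neither.
sub valid_member_ref {
    my ($info) = @_;
    my $has_id   = defined $info->{'member-id'}   ? 1 : 0;
    my $has_name = defined $info->{'member-name'} ? 1 : 0;
    return $has_id != $has_name;    # exclusive or: exactly one present
}

print valid_member_ref({ 'member-id' => 42 }) ? "ok\n" : "bad\n";
```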
Retention type to which the backup should be archived. Possible values are: 'hourly', 'daily', 'weekly', 'monthly' and 'unlimited'.
Fields
Indicates the transfer status of the backup between the two nodes directly connected to each other. When present in the input, you must specify at least one of destination-storageset-id, destination-node-name or destination-node-id.
Fields
- backup-transfer-status => string, optional
Indicates the transfer status of the backup to the destination node. Possible values: 'transferring', 'transferred', 'transfer_failed', and ''.
'transferring' indicates that a job is currently transferring the backup to the destination node.
'transferred' indicates that backup was successfully transferred to the destination node.
'' indicates that no transfer has started.
In the case of a mirror relationship, transfer_failed is not a terminal state. The transfer could be retried as part of a subsequent mirror update.
- connection-type => string, optional
Type of the policy connection between source and destination nodes. Possible values: 'backup', 'mirror'. Present in the output only. Ignored if present in the input. May not be present in the output if the policy connection could not be found.
- destination-backup-id => integer, optional
Identifier of backup on the destination node. Present in the output only. Ignored if specified in the input. May not be present in the output if the backup version does not exist on the destination node yet. Range: [1..2^31-1]
- destination-node-id => integer, optional
ID of destination node in the data protection policy that corresponds to the destination storage set. May not be present in the output if the destination storage set cannot be mapped to any node in the policy. Range: [1..2^31-1].
- destination-node-name => string, optional
Name of destination node in the data protection policy that corresponds to the destination storage set. May not be present in the output if the destination storage set cannot be mapped to any node in the policy.
- job-id => integer, optional
Identifier for the job that is transferring or has transferred this backup to the destination storage set.
- source-node-id => integer, optional
ID of source node in the data protection policy that corresponds to the source storage set. Present in the output only. Ignored if specified in the input. May not be present in the output if the source storage set cannot be mapped to any node in the policy. Range: [1..2^31-1].
- source-node-name => string, optional
Name of source node in the data protection policy that corresponds to the source storage set. Present in the output only. Ignored if specified in the input. May not be present in the output if the source storage set cannot be mapped to any node in the policy.
Backup-version information, including a list of all instances of this version.
Fields
- backup-description => string, optional
Description for the backup. It may be any arbitrary string meaningful to the agent spawning the backup. The maximum length of this string is 255 characters.
- backup-transfer-infos => dp-backup-transfer-info[]
Indicates the transfer status of this backup version between each pairs of the source and destination nodes which have direct connection in the data protection policy.
- backup-version => dp-timestamp
Timestamp when the backup was taken. Backups of same dataset at different locations have same version if their contents are identical. The management station keeps track of which backups have identical contents and assigns same version to them. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC.
- retention-type => dp-backup-retention-type
Type of retention for the backup.
Information about a file, directory, qtree, drive, etc. on a primary host or in a backup.
Fields
- child-count => integer
Count of children. This is zero when value of file-type element is 'file'. Otherwise, it can be zero or non-zero. Range: [0..2^31-1]
- file-type => string
Type of the file. Possible values: 'file', 'directory', 'qtree', 'volume', 'drive', 'fifo', 'cspec', 'bspec', 'symlink', 'socket', 'registry', 'stream', 'lun', and 'other'.
- is-empty => boolean, optional
This element is present only if the file-type is a directory and the version of Data ONTAP on the storage system is 7.3.2 or higher. For all other cases, this element will be absent. It indicates whether the directory is empty. The value of the element is TRUE if the directory is empty, and FALSE otherwise. A directory is considered empty if it only contains entries for "." and "..".
Describes an unscheduled data protection job. The type of job is specified in the dp-job-type element. Only one of all possible dp-*-job-data elements can be present.
Fields
- dataset-id => obj-id
Id of the dataset being protected by the job.
- dp-job-type => string
Type of the dp job. Possible values: 'snapvault', 'snapmirror' and 'local_snapshot'.
Mount session state. Possible values are: 'mounting', 'mounted', 'unmounting'.
Fields
Describes a property of an object.
Fields
- property-name => string
The name of the property field. Property names are up to 255 characters in length and are case-sensitive.
- property-value => string
Value of the backup property. Property values are up to 255 characters in length and are case-sensitive.
Key to search the backups by. This key is matched against:
- partial description of the backup,
- partial current name of a primary object in the backup,
- partial name of a primary application resource when the backup was taken,
- UUID of a primary application resource in the backup.
Fields
Information about one qtree or OSSV dir that is in a snapshot.
Fields
- primary-id => obj-id, optional
Always returned in the output; either primary-id or primary-name must be specified in the input. Primary qtree or OSSV directory ID.
- primary-name => obj-name, optional
Always returned in the output; either primary-id or primary-name must be specified in the input. The name of a qtree or OSSV directory whose data is in the Snapshot copy.
- primary-snapshot-host-service-id => string, optional
Id assigned to the Snapshot copy on the primary by the Host Service. This is filled in only for the backups containing resources from the Host Service in which case it is set to the ObjectId of the Snapshot copy as reported by the Host Service. Ignored in the input.
- primary-snapshot-name => string, optional
Name of the Snapshot copy on the primary where the data originated.
- primary-snapshot-unique-id => string, optional
Unique id of the Snapshot copy on the primary where the data originated. Currently, this is the Snapshot copy's creation time.
- secondary-qtree-id => obj-id, optional
Corresponding secondary qtree id. This legacy parameter, if supplied along with secondary-qtree-name-or-id, will be ignored unless it references a different qtree, in which case an error will be returned. Not included if the backup version consists of local snapshots.
- secondary-qtree-name => string, optional
Corresponding secondary qtree name. This is ignored on input. Use secondary-qtree-name-or-id to specify by name. On output this will be returned along with secondary-qtree-id. Not included if the backup version consists of local snapshots.
- secondary-qtree-name-or-id => string, optional
Corresponding secondary qtree name or identifier. This is used to lookup the secondary-qtree-id if present. If this is present, secondary-qtree-id will be ignored. Not included if the backup version consists of local snapshots.
Describes one snapshot member of a backup version.
Fields
- host-fqdn => string, optional
The fully qualified domain name of the host to which the volume belongs. Always present in the output. Ignored in the input. Length: [1..255]
- host-id => obj-id, optional
This is the identifier of the host to which the volume belongs. Always present in the output. Ignored in the input.
- is-available => boolean, optional
Whether this snapshot is available for a restore. A snapshot is considered available if the containing filer is responding to ping requests and the containing volume is online. These values will be based on information returned by the most recent DFM pass, not by querying the filer in real-time.
- volume-id => obj-id, optional
Always included in the output; either volume-id or volume-name must be specified in the input. ID of the volume corresponding to the member snapshot.
- volume-name => obj-name, optional
Always included in the output; either volume-id or volume-name must be specified in the input. The name of the volume this Snapshot copy resides on.
Information about an application resource's namespace, type and count.
Fields
- application-resource-namespace => application-resource-namespace
Namespace of the application resource.
- count => integer
Number of application resources.
Range: [0..2^31-1]
Information about a single dataset.
Fields
- dataset-id => obj-id
ID of the dataset.
- dataset-name => obj-name
Name of the dataset.
- lag-status => obj-status
Lag status of the relationship lag versus the DP policy lag threshold settings. Possible values are "unknown", "normal", "warning" and "error".
- worst-lag => integer
Worst lag time of all the relationships in the datasets. The lag time is the number of seconds since the completion of the last successful transfer to the destination for each relationship. Range: [0..2^63-1]
Information about a single SnapVault or SnapMirror relationship.
Fields
- lag => integer
Seconds since the completion of the last successful transfer to the destination. If no transfer has ever succeeded, this value is 0. Range: [0..2^63-1]
- relationship-id => obj-id
Identifier of the relationship.
Number of DR-enabled datasets with a single distinct dr-state/dr-status combination.
Fields
- count => integer
Number of DR-enabled datasets with a state of dr-state and status of dr-status. Only return state and status combinations which have at least one dataset in them. Range: [1..2^31-1]
- dr-state => dr-state
DR state of all datasets.
Information about the user and the permission with which the user can access the CIFS share.
Fields
- access-rights => string
Access right of the user accessing the CIFS share. Possible values are 'no_access', 'read', 'change' and 'full_control'.
- user-name => string
Name of the user accessing the CIFS share. The username may be the name of an actual domain user or one of the built-in names like 'administrator' or 'everyone'. This can be a maximum of 128 characters.
Details specific to the 'destroy_member' request.
Fields
- member-id => obj-id
This is the id of the dataset member on which the action was performed.
- member-type => string
Type of the dataset member on which the action was or is to be performed. Possible values are 'volume', 'qtree' and 'lun'.
A job is used to run one or more Data Protection or provisioning operations.
Fields
- bytes-transferred => integer
Bytes transferred in this operation. Zero when transfer starts. Range: [0..2^63-1]. It will be always zero for a provisioning or configuration job.
- completed-timestamp => dp-timestamp, optional
Timestamp when the job was completed. If unspecified, the job has not yet reached completion.
- dataset-id => integer, optional
Database ID of the dataset on which this job was carried out. This field is optional and will be returned only when the job is carried out on a dataset. This field is deprecated in favor of object-name and object-type. Range: [1..2^31-1].
- dataset-name => string, optional
Name of the dataset on which this job was carried out. This field is optional and will be returned only when the job is carried out on a dataset. This field is deprecated in favor of object-name and object-type.
- destination-node-id => integer, optional
Destination policy node id associated with this job. Not present if this job is only associated with a single node (e.g. a snapshot job or a provisioning job). Range: [1..2^31-1]
- destination-node-name => string, optional
Destination policy node name associated with this job. Not present if this job is only associated with a single node (e.g. a snapshot job or provisioning job).
- job-context-application-resource-namespaces => application-resource-namespace[], optional
List of namespaces from which the application resources have been added to the job context dataset. Not returned if job-context-application-resources-only is false in the dp-job-list-iter-start API for this iteration.
- job-id => integer
Identifier of the job. Range: [1..2^31-1].
- object-id => obj-id
Identifier of the object on which this job was carried out.
- object-name => obj-full-name
Full name of the object on which this job was carried out.
- object-type => string
Type of the object on which this job was carried out. Possible values:
- dataset
- mgmt_station
- vfiler
- volume
- policy-id => integer, optional
Database ID of the data protection policy or the application policy responsible for spawning this job. If unspecified, the job was not invoked in response to a policy. Range: [1..2^31-1].
- policy-name => string, optional
Name of the data protection policy or the application policy responsible for spawning this job. If unspecified, the job was not invoked in response to a policy.
- source-node-id => integer, optional
Source policy node id associated with this job. Range: [1..2^31-1]
- source-node-name => string, optional
Source policy node name associated with this job.
- submitted-by => string, optional
User who submitted the job. If unspecified, the job was submitted by one of the internal DFM services such as the scheduler. The length of this string cannot be more than 255 characters.
The overall status of the data protection/provisioning job. The possible values are "queued", "canceling", "canceled", "running", "running_with_failures", "partially_failed", "succeeded", and "failed". This field is derived from dp-job-state and dp-job-progress.
The current mappings are as follows:

dp-job-state  dp-job-progress  dp-job-overall-status
====================================================
queued        *                queued
running       in_progress      running
running       success          running
running       partial_success  running_with_failures
running       failure          running_with_failures
aborting      *                canceling
aborted       *                canceled
completed     partial_success  partially_failed
completed     failure          failed
completed     success          succeeded
completed     in_progress      canceled

Fields
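Since the state/progress mapping is mechanical, it can be expressed directly in code. The function below is an illustrative reimplementation of the table for client-side use (for example, when summarizing job records); it is not an SDK call.

```perl
use strict;
use warnings;

# Derive dp-job-overall-status from dp-job-state and dp-job-progress,
# following the mapping table in this reference. Rows marked '*' in the
# table ignore the progress value.
sub overall_status {
    my ($state, $progress) = @_;
    return 'queued'    if $state eq 'queued';
    return 'canceling' if $state eq 'aborting';
    return 'canceled'  if $state eq 'aborted';
    if ($state eq 'running') {
        return 'running' if $progress eq 'in_progress' || $progress eq 'success';
        return 'running_with_failures';    # partial_success or failure
    }
    if ($state eq 'completed') {
        return 'partially_failed' if $progress eq 'partial_success';
        return 'failed'           if $progress eq 'failure';
        return 'succeeded'        if $progress eq 'success';
        return 'canceled';    # still in_progress at completion => canceled
    }
    return 'unknown';    # defensive default for unexpected input
}

print overall_status('completed', 'partial_success'), "\n";
```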
The progress of a data protection job. Valid values are "in_progress", "partial_success", "success" and "failure".
"in_progress" - No transfers have completed yet.
"success" - At least one transfer succeeded, none failed so far.
"failure" - At least one transfer failed, none succeeded so far.
"partial_success" - At least one transfer failed and at least one transfer succeeded so far.
The progress of the data protection job follows this state diagram, depending on the success and failure of the transfers handled by the job:

   +-------"in_progress"-------+
   |                           |
failure?                    success?
   |                           |
   v                           v
"failure"                 "success"
   |                           |
success?                    failure?
   |                           |
   +---->"partial_success"<----+

Fields
One historical progress event for a data protection or provisioning job.
Fields
- event-type => string
Type of event. Values are:
- 'job-start'
- 'job-progress'
- 'job-abort' (dp-progress-job-abort-info)
- 'job-end'
- 'job-retry'
- 'rel-create-start' (dp-progress-relationship-info)
- 'rel-create-progress' (dp-progress-relationship-info)
- 'rel-create-end' (dp-progress-relationship-info)
- 'rel-destroy-start' (dp-progress-relationship-info)
- 'rel-destroy-progress' (dp-progress-relationship-info)
- 'rel-destroy-end' (dp-progress-relationship-info)
- 'snapshot-create' (dp-progress-snapshot-info)
- 'snapshot-delete' (dp-progress-snapshot-info)
- 'backup-create' (dp-progress-backup-info)
- 'backup-delete' (dp-progress-backup-info)
- 'snapvault-start' (dp-progress-snapvault-info)
- 'snapvault-progress' (dp-progress-snapvault-info)
- 'snapvault-end' (dp-progress-snapvault-info)
- 'snapmirror-start' (dp-progress-snapmirror-info)
- 'snapmirror-progress' (dp-progress-snapmirror-info)
- 'snapmirror-end' (dp-progress-snapmirror-info)
- 'restore-start' (dp-progress-restore-info)
- 'restore-progress' (dp-progress-restore-info)
- 'restore-end' (dp-progress-restore-info)
- 'migrate-start' (dp-progress-restore-info)
- 'migrate-progress' (dp-progress-restore-info)
- 'migrate-end' (dp-progress-restore-info)
- 'mirror-break-script-start' (dp-progress-failover-info)
- 'mirror-break-script-end' (dp-progress-failover-info)
- 'mirror-break-quiesce-start' (dp-progress-failover-info)
- 'mirror-break-quiesce-end' (dp-progress-failover-info)
- 'mirror-break-start' (dp-progress-failover-info)
- 'mirror-break-end' (dp-progress-failover-info)
- 'volume-create' (progress-volume-info)
- 'volume-option-set' (progress-volume-option-info)
- 'snapshot-reserve-resize' (progress-volume-info)
- 'volume-autosize' (progress-volume-info)
- 'snapshot-autodelete' (progress-volume-info)
- 'lun-create' (progress-lun-info)
- 'lun-destroy' (progress-lun-info)
- 'lun-map' (progress-lun-map-info)
- 'lun-unmap' (progress-lun-map-info)
- 'igroup-create' (progress-igroup-info)
- 'igroup-destroy' (progress-igroup-info)
- 'igroup-add' (progress-igroup-config-info)
- 'igroup-remove' (progress-igroup-config-info)
- 'qtree-create' (progress-qtree-info)
- 'quota-set' (progress-quota-info)
- 'nfsexport-create' (progress-nfsexport-info)
- 'cifs-share-create' (progress-cifs-share-info)
- 'cifs-share-modify' (progress-cifs-share-info)
- 'cifs-share-delete' (progress-cifs-share-info)
- 'volume-offline' (progress-volume-info)
- 'volume-destroy' (progress-volume-info)
- 'qtree-destroy' (progress-qtree-info)
- 'vfiler-storage-add' (progress-vfiler-storage-info)
- 'script-run' (progress-script-run-info)
- 'vfiler-create' (progress-vfiler-info)
- 'vfiler-setup' (progress-vfiler-info)
- 'volume-dedupe' (progress-volume-dedupe-info)
- 'volume-dedupe-enable' (progress-volume-dedupe-info)
- 'volume-dedupe-disable' (progress-volume-dedupe-info)
- 'volume-dedupe-schedule-set' (progress-volume-dedupe-info)
- 'aggregate-space' (progress-aggregate-info)
- 'storage-system-discover' (progress-storage-system-info)
- 'resource-pool-create' (progress-resource-pool-info)
- 'resource-pool-members-add' (progress-resource-pool-info)
- 'storage-service-configure' (progress-storage-service-info)
- 'host-service-operation-start' (progress-host-service-info)
- 'host-service-operation-end' (progress-host-service-info)
- 'host-service-operation-abort' (progress-host-service-info)
- 'host-service-operation-progress' (progress-host-service-info)
Each event type has a corresponding element in which the event-specific fields are broken out. The name of the event-specific element is listed after the event type. Only the name of the event is returned; the value in parentheses is the type of element returned for that type of event.
- job-id => integer
The ID of the job that generated this event. Range: [1..2^31-1].
- job-type => dp-job-type
Type of data protection or provisioning job.
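As described above, each event carries a type string plus a correspondingly named event-specific element. A consumer of decoded event records might dispatch on a mapping like the following; this is a partial, illustrative sketch (only a few of the documented event types are shown), not SDK code.

```python
# Partial, illustrative mapping from event-type values to the name of the
# event-specific element documented above.
EVENT_ELEMENT = {
    "job-abort": "dp-progress-job-abort-info",
    "snapshot-create": "dp-progress-snapshot-info",
    "snapmirror-start": "dp-progress-snapmirror-info",
    "volume-create": "progress-volume-info",
}

def event_detail(event):
    """Return the event-specific element of a decoded event dict,
    or None for event types (such as 'job-start') that have none."""
    element_name = EVENT_ELEMENT.get(event["event-type"])
    return event.get(element_name) if element_name else None
```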
The state of the data protection/provisioning job. The possible values are "queued", "running", "completed", "aborting", "aborted". This state is derived from completed-timestamp, abort-requested-timestamp and started-timestamp.
Fields
The type of data protection, provisioning, migration, space management, configuration and Host Service related jobs. Valid values for data protection jobs are "local_backup", "local_then_remote_backup", "remote_backup", "mirror", "on_demand_backup", "local_backup_confirmation", "restore", "create_relationship", "destroy_relationship", "failover", "backup_mount" and "backup_unmount".
Valid values for provisioning job types are "provision_member", "resize_member", "destroy_member", "delete_snapshots", "dedupe_member", "undedupe_member", "dedupe_volume" and "pm_re_export".
Valid space management jobs are: "dedupe_volume", "migrate_volume", "resize_volume", "delete_snapshot", "delete_backup", "resize_lun" and "destroy_lun".
Valid migration job types are: "migrate_start", "migrate_complete", "migrate_cancel", "migrate_cleanup", "migrate_update".
Valid configuration job type is: "configuration".
Valid Host Service related jobs are: "hs_configure", "hs_discover", "hs_import_storage_config".
Fields
Data for a local-snapshot job. If this element is present, the job is responsible for taking a local snapshot of data on the storageset associated with the specified node.
Fields
- is-confirmation-data => boolean, optional
If the element "is-confirmation-data" is set to 'true', this job only confirms that backups have been created on the primary node. An external agent, such as SnapManager or SnapDrive, is responsible for creating these backups. This element can only be true for application datasets. If set to true, the job type will be "local_backup_confirmation" and the job will update the protection status without creating a local backup.
- node-id => integer
Id of the node that needs local snapshot. Range: [1..2^31-1]
- retention-type => dp-backup-retention-type
Retention to be used for the local snapshot.
Information about the creation or deletion of a backup version.
Fields
- backup-id => integer
ID of the new backup instance. Each copy of a backup version gets a unique backup ID. Range: [1..2^31-1]
- backup-version => dp-timestamp
Time at which backup image was created. All backups with the same backup-version are copies of the same point-in-time image of the original data.
Information about a failover job.
Fields
- destination-path => string, optional
Destination path for the SnapMirror break operation. This element is not present for event types 'mirror-break-script-start' and 'mirror-break-script-end'.
- relationship-id => obj-id, optional
The relationship ID for performing the mirror break. This element is not present for event types 'mirror-break-script-start' and 'mirror-break-script-end'.
- script-path => string, optional
Path of the failover script. This element is present only for event types 'mirror-break-script-start' and 'mirror-break-script-end'.
- script-run-as => string, optional
Username used to run the failover script. This element is present only for event types 'mirror-break-script-start' and 'mirror-break-script-end'.
Information about the abort of a job.
Fields
Information about the creation or destruction of a SnapVault or SnapMirror relationship.
Fields
- bytes-transferred => integer
Bytes transferred as a result of creating a relationship. Zero for relationship destroy messages. Range: [0..2^63-1].
Information about the restore of a path.
Fields
- backup-id => integer
ID of the backup instance used for restore. Range: [1..2^31-1]
- backup-version => dp-timestamp
The backup version used for restore.
- destination-path => string, optional
The full path to restore the dataset member to. The format is :/
Information about the start and end of a SnapMirror transfer.
Fields
- bytes-transferred => integer
Bytes transferred in this operation. Zero when transfer starts. Range: [0..2^63-1].
- destination-volume-or-qtree-id => integer
Numeric ID of destination volume or qtree of the mirror. Range: [1..2^31-1]
- destination-volume-or-qtree-name => string
Full name of destination volume or qtree of the mirror. The format is :/.
- source-volume-or-qtree-id => integer
Numeric ID of volume or qtree being mirrored. Range: [1..2^31-1]
- source-volume-or-qtree-name => string
Name of volume or qtree being mirrored. The format is :/.
Information about the creation or deletion of a snapshot.
Fields
- snapshot-name => string
Name of snapshot
Information about the start and end of a SnapVault transfer.
Fields
- bytes-transferred => integer
Bytes transferred in this operation. Zero when transfer starts. Range: [0..2^63-1].
Describes a scheduled data protection job. Each job has a scheduled start time and protects a single dataset. The scheduler service is responsible for starting the job at its scheduled time. The type of the job is inferred from which dp-*-job-data element is present; only one of the possible dp-*-job-data elements can be present. A scheduled job does not have a job ID. When the scheduler service is about to start the job, it creates a job record in the persistent database, and at that time the job gets its ID.
Fields
- scheduled-time => integer
Time when the job is scheduled to start. Value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. Sometimes the start of the job can be delayed, and in that case the start time will reflect the delay; scheduled-time always indicates the time at which the job was scheduled to start.
- start-time => integer
Time when the job needs to start. Value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. The scheduler service is responsible for starting the job at this time.
Data for a snapmirror-transfer job. If this element is present, the job is responsible for transferring data between the two storagesets associated with the specified connection using the SnapMirror protocol.
Fields
- retention-type => dp-backup-retention-type
Retention to be used for the pre/post backup script in the snapmirror-transfer job only.
Data for a snapvault-transfer job. If this element is present, the job is responsible for transferring data between the two storagesets associated with the specified connection using the SnapVault protocol.
Fields
- connection-id => integer
Id of the policy connection that needs SnapVault transfer. Range: [1..2^31-1]
- retention-type => dp-backup-retention-type
Retention to be used for the backup created by snapvault-transfer job.
Seconds since Jan 1, 1970, in UTC. Range: [0..2^31-1]. This range runs out in 2038, so the API should be updated some time before then.
Fields
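A quick check of where the documented range limit lands, assuming the value is a count of seconds since the Unix epoch as stated:

```python
import datetime

# The documented range [0..2^31-1] in seconds since 1970-01-01 00:00:00 UTC.
max_seconds = 2**31 - 1
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
rollover = epoch + datetime.timedelta(seconds=max_seconds)
# rollover is 2038-01-19 03:14:07 UTC, the familiar "year 2038" limit
# for signed 32-bit Unix timestamps.
```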
Information about an environment variable.
Fields
- name => string
Environment variable name.
- value => string
Value of an environment variable.
Job type.
Fields
Information about the aggregate.
Fields
- aggregate-id => obj-id
Database identifier of the aggregate
Information about the CIFS share created as part of a provisioning operation.
Fields
Information about the host service.
Fields
- host-service-id => obj-id
Database identifier of the host service.
Information about initiators being added to or removed from an initiator group in a job.
Fields
- igroup-id => obj-id
Database identifier of the initiator group to which initiators are added, or from which they are removed, in a provisioning job.
- initiators => string
Comma separated list of initiators that are added or removed in this provisioning job.
Information about the initiator group of a job.
Fields
- igroup-id => obj-id
Database identifier of the initiator group which is created or destroyed in the provisioning job.
- igroup-name => obj-full-name
Full name of the initiator group which is created or destroyed in the provisioning job.
Information about the LUN being provisioned or destroyed in a provisioning job.
Fields
Information about the mapping of a LUN.
Fields
- igroup-id => obj-id
Database identifier of the initiator group to which the LUN is mapped.
- igroup-name => obj-full-name
Full name of the initiator group to which the LUN is mapped.
- lun-name => obj-full-name
Full name of the LUN that is mapped.
- lun-path-id => obj-id
Database identifier of the LUN that is mapped.
Information about the NFS export created as part of a provisioning operation.
Fields
- path => string
Name of the NFS export by which it can be accessed.
Information about a qtree which is created or destroyed in the provisioning job.
Fields
- qtree-id => obj-id
Database identifier of the qtree provisioned or destroyed.
- qtree-name => obj-full-name
Full name of the qtree provisioned or destroyed.
Information about the quota of a qtree.
Fields
- qtree-id => obj-id
Database identifier of the qtree on which the quotas are set.
- qtree-name => obj-full-name
Full name of the qtree on which the quotas are set.
Information about the resource pool and its members.
Fields
Information about the post-provisioning script which is run.
Fields
Information about the storage service.
Fields
- storage-service-id => obj-id
Database identifier of the storage service.
Information about the discovered storage system.
Fields
Information about creating a vFiler.
Fields
- filer-id => obj-id
Database identifier of the storage system.
- filer-name => obj-full-name
Full name of the storage system on which the vfiler is being created.
- vfiler-id => obj-id, optional
Database identifier of the vFiler unit
Information about adding storage to a vFiler.
Fields
- vfiler-name => obj-full-name
Full name of the vFiler to which storage is added.
- volume-id => obj-id
Database identifier of the volume which is added to the vFiler.
- volume-name => obj-full-name
Full name of the volume which is added to the vFiler.
Information about the deduplication operation on the volume.
Fields
- dedupe-progress => string, optional
The progress of the current deduplication operation on the volume, with information as to which stage of deduplication is currently in progress and how much data has been processed for that stage. For example: "25 MB Scanned, 20MB Searched, 40MB (20%) Done, 30MB Verified".
- volume-id => obj-id
Database identifier of the volume on which the deduplication is run.
- volume-name => obj-full-name
Full name of the volume on which the deduplication operation is run.
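The dedupe-progress string above is documented only by example, so any parsing is best-effort. A hedged sketch that pulls out the "(NN%)" completion figure when present:

```python
import re

def done_percent(dedupe_progress):
    """Extract the '(NN%)' completion figure from a dedupe-progress
    string, if present. The string format is only documented by
    example, so this is a best-effort parse; returns None when no
    percentage appears."""
    match = re.search(r"\((\d+)%\)", dedupe_progress)
    return int(match.group(1)) if match else None
```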
Information about the volume on which a provisioning operation is executed.
Fields
- volume-id => obj-id
Database identifier of the volume on which the provisioning operation is executed.
- volume-name => obj-full-name
Full name of the volume on which the provisioning operation is executed.
Information about the volume options set.
Fields
- volume-id => obj-id
Database identifier of the volume on which the provisioning operation is executed.
- volume-name => obj-full-name
Full name of the volume on which the provisioning operation is executed.
Information about a single provisioning request.
Fields
- dedupe-member-request-info => dedupe-member-request-info, optional
Information specific to the 'dedupe_member' request. Returned only when job-type is 'dedupe_member'.
- delete-snapshots-request-info => delete-snapshots-request-info, optional
Information specific to the snapshot deletion request. Returned only when the job-type is 'delete_snapshots'.
- provision-member-request-info => provision-member-request-info, optional
Information specific to the 'provision_member' request. Returned only when job-type is 'provision_member'.
- resize-member-request-info => resize-member-request-info, optional
Information specific to the resize related provisioning request. Returned only when job-type is 'resize_member'.
- undedupe-member-request-info => undedupe-member-request-info, optional
Information specific to the 'undedupe_member' request. Returned only when job-type is 'undedupe_member'.
Information about the member of the resource pool.
Fields
Information about an option and its value.
Fields
The name and ID of a dataset.
Fields
- dataset-id => obj-id
Identifier for the dataset.
Range: [1..2^31-1]
- dataset-name => obj-name
Name of the dataset.
Information about one directory.
Fields
- dataset-id => integer, optional
If is-in-dataset is true, then this is the ID number of the dataset to which the directory belongs. This field is deprecated in favor of datasets, which lists all datasets the directory belongs to. It is still populated with one of the dataset ids.
- datasets => dataset-reference[]
A list of the names and identifiers of the datasets this directory belongs to. If is-in-dataset is false, this list will be empty.
- host-name => string
Name of the host that contains this directory.
- is-available => boolean, optional
True if this directory and its parent host are up and online. Since OSSV hosts can only be contacted with valid credentials, if the login credentials are unset or invalid, the is-available status will be false. Only output if the call to iter-start included the "include-is-available" flag.
- is-browsable => boolean
Set to true if the directory path can be used as an input to dp-ossv-directory-browse-iter-start. If set to false, this path is guaranteed not to have browsable child paths.
- is-dp-ignored => boolean
Indicates that the directory is marked as ignored by Freya/DFM for the purposes of data protection.
- is-in-dataset => boolean
Indicates if the directory is a member of any dataset.
- path => string
Path of the directory.
Name of one subdirectory. The name is relative to the parent directory ("directory-path" in the dp-ossv-directory-browse-iter-start API). If the parent directory was empty, then this will be an absolute path. The name will always end with a slash character appropriate to the host operating system of the OSSV agent, unless the path is a special directory which has no children.
Fields
Information about one filesystem root.
Fields
- host-id => integer
ID of the host that contains this root.
Range: [1..2^31-1]
- host-name => string
Name of the host that contains this root.
Application-related information.
Fields
- is-leaf => boolean
If true, this path cannot be browsed further. If false, the path can be specified as input to the dp-ossv-application-list-info-iter-start ZAPI.
- name => string
The display name for the application component.
- path => string
The application path to be used for further browsing components under this path.
OSSV host-related information.
Fields
- host-id => obj-id
The ID of the OSSV host.
- host-name => obj-name
The name of the OSSV host.
Name or ID of a host. IP addresses are also accepted.
Fields
Information about a connection from one node to another in a DP policy. The connection's properties are represented by optional elements. The rules for when a property is present or absent are defined in the property's Description, and depend on the type of connection.
Fields
- backup-schedule-id => obj-id, optional
An object ID for the backup schedule for this connection. The backup schedule specifies when to create backup snapshots and transfer them to backup secondaries. The value of backup-schedule-id may be 0, in which case no backup schedule is set for this connection, and backups will be created only when initiated by the user. If the value is not 0, it must be the ID of a DP schedule object.
If both backup-schedule-id and backup-schedule-name appear in the input to dp-policy-modify, then backup-schedule-id determines the ID of the backup schedule, and the value of backup-schedule-name is ignored. If neither backup-schedule-id nor backup-schedule-name appear in the input to dp-policy-modify, then the default value is an ID of 0 (equivalent to a name of ""), which means no backup schedule is set.
Both backup-schedule-id and backup-schedule-name always appear in the output from dp-policy-list-iter-next for backup connections.
This property is present only for backup connections.
- backup-schedule-name => obj-name, optional
An object name for the backup schedule for this connection. The backup schedule specifies when to create backup snapshots and transfer them to backup secondaries. The value of backup-schedule-name may be the empty string (""), in which case no backup schedule is set for this connection, and backups will be created only when initiated by the user. If the value is not "", it must be the name of a DP schedule object. If both backup-schedule-id and backup-schedule-name appear in the input to dp-policy-modify, then backup-schedule-id determines the ID of the backup schedule, and the value of backup-schedule-name is ignored. If neither backup-schedule-id nor backup-schedule-name appear in the input to dp-policy-modify, then the default value is a name of "" (equivalent to an ID of 0), which means no backup schedule is set.
Both backup-schedule-id and backup-schedule-name always appear in the output from dp-policy-list-iter-next for backup connections.
This property is present only for backup connections.
- from-node-id => integer
ID of node at source of the connection. This is assigned by the system, and therefore may not be modified. The node at the root of the graph is id 1. Range: [1..2^31 - 1].
- from-node-name => string
Name of node at source of the connection. It is returned in the policy list query with a maximum length of 255 characters.
- id => integer
ID of the connection. This is assigned by the system, and therefore may not be modified. Range: [1..2^31 - 1].
- is-dr-capable => boolean
Indicates whether this link is DR capable. This read only flag cannot be changed by dp-policy-modify but will be returned by the dp-policy-list-iter-next ZAPI.
- mirror-schedule-id => obj-id, optional
An object ID for the mirror schedule for this connection. The mirror schedule specifies when to transfer data via an asynchronous mirror. Synchronous and semi-synchronous mirrors are not supported. The value of mirror-schedule-id may be 0, in which case no mirror schedule is set for this connection. If the value is not 0, it must be the ID of a DP schedule object.
If no mirror schedule is set for this connection, then the mirror relationship is idle, which means mirroring occurs only when the user initiates it manually.
If both mirror-schedule-id and mirror-schedule-name appear in the input to dp-policy-modify, then mirror-schedule-id determines the ID of the mirror schedule, and the value of mirror-schedule-name is ignored. If neither mirror-schedule-id nor mirror-schedule-name appear in the input to dp-policy-modify, then the default value is an ID of 0 (equivalent to a name of ""), which means no mirror schedule is set.
Both mirror-schedule-id and mirror-schedule-name always appear in the output from dp-policy-list-iter-next for mirror connections.
This property is present only for mirror connections.
- mirror-schedule-name => obj-name, optional
An object name for the mirror schedule for this connection. The mirror schedule specifies when to transfer data via an asynchronous mirror. Synchronous and semi-synchronous mirrors are not supported. The value of mirror-schedule-name may be the empty string (""), in which case no mirror schedule is set for this connection. If the value is not "", it must be the name of a DP schedule object.
If no mirror schedule is set for this connection, then the mirror relationship is idle, which means mirroring occurs only when the user initiates it manually.
If both mirror-schedule-id and mirror-schedule-name appear in the input to dp-policy-modify, then mirror-schedule-id determines the ID of the mirror schedule, and the value of mirror-schedule-name is ignored. If neither mirror-schedule-id nor mirror-schedule-name appear in the input to dp-policy-modify, then the default value is a name of "" (equivalent to an ID of 0), which means no mirror schedule is set.
Both mirror-schedule-id and mirror-schedule-name always appear in the output from dp-policy-list-iter-next for mirror connections.
This property is present only for mirror connections.
- throttle-schedule-id => obj-id, optional
An object ID that specifies the throttle schedule for this connection. A throttle schedule specifies an upper limit, varying over time, on the network bandwidth used to make backups for a single dataset. This limit applies to the total amount of bandwidth that may be used at one time by all active transfers over protection relationships associated with this connection. The value of throttle-schedule-id may be 0, in which case no throttle is set for this connection, and there is no limit on data transfer bandwidth. If the value is not 0, it must be the ID of a DP throttle object.
If both throttle-schedule-id and throttle-schedule-name appear in the input to dp-policy-modify, then throttle-schedule-id determines the ID of the throttle, and the value of throttle-schedule-name is ignored. If neither throttle-schedule-id nor throttle-schedule-name appear in the input to dp-policy-modify, then the default value is an ID of 0 (equivalent to a name of ""), which means no throttle is set.
Both throttle-schedule-id and throttle-schedule-name always appear in the output from dp-policy-list-iter-next.
This property is present for both backup and mirror connections.
- throttle-schedule-name => obj-name, optional
An object name that specifies the throttle schedule for this connection. A throttle schedule specifies an upper limit, varying over time, on the network bandwidth used to make backups for a single dataset. This limit applies to the total amount of bandwidth that may be used at one time by all active transfers over protection relationships associated with this connection. The value of throttle-schedule-name may be the empty string (""), in which case no throttle is set for this connection, and there is no limit on data transfer bandwidth. If the value is not "", it must be the name of a DP throttle object.
If both throttle-schedule-id and throttle-schedule-name appear in the input to dp-policy-modify, then throttle-schedule-id determines the ID of the throttle, and the value of throttle-schedule-name is ignored. If neither throttle-schedule-id nor throttle-schedule-name appear in the input to dp-policy-modify, then the default value is a name of "" (equivalent to an ID of 0), which means no throttle is set.
Both throttle-schedule-id and throttle-schedule-name always appear in the output from dp-policy-list-iter-next.
This property is present for both backup and mirror connections.
- to-node-id => integer
ID of node at destination of the connection. This is assigned by the system, and therefore may not be modified. The node at the root of the graph is id 1. Range: [1..2^31 - 1].
- to-node-name => string
Name of node at destination of the connection. It is returned in the policy list query with a maximum length of 255 characters.
- type => string
Type of the connection. Allowed values are "backup" or "mirror". This element may not be modified.
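The backup, mirror, and throttle schedules above all follow the same id/name precedence rule in input to dp-policy-modify: the ID wins when both are given, and the default when neither is given is ID 0 / name "" (no schedule set). A hedged sketch of that rule; the function name and tuple return shape are invented for illustration, and a real client would still resolve a name to a DP schedule object ID:

```python
def resolve_schedule(schedule_id=None, schedule_name=None):
    """Sketch of the *-schedule-id / *-schedule-name precedence rules
    described above. Returns ('id', n) or ('name', s) indicating which
    input determines the schedule; ('id', 0) means no schedule is set."""
    if schedule_id is not None:
        # If both appear, the ID determines the schedule; the name is ignored.
        return ("id", schedule_id)
    if schedule_name:
        # A non-empty name must be the name of a DP schedule object.
        return ("name", schedule_name)
    # Neither given: default is ID 0 (equivalent to name ""), no schedule.
    return ("id", 0)
```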
All content of a single policy, including its name, description, topology (nodes and connections), and the properties for each node and connection.
Fields
- description => string, optional
Description of the policy. It may contain from 0 to 255 characters. If the length of a description passed as input to dp-policy-modify is longer than 255 characters, then the ZAPI fails with error code EINVALIDPOLICYPROPERTY. The description always appears in the output from dp-policy-list-iter-next. If the description is omitted from the input to dp-policy-modify, then the default value is the empty string "".
- dp-policy-connections => dp-policy-connection-info[], optional
List of connections between nodes of the policy. In the output from dp-policy-list-iter-next, this list is sorted in order of increasing connection ID. In the input to dp-policy-modify, it need not be sorted. However, in input to dp-policy-modify, there must be a one-one mapping between entries in this array and connections in the policy graph, or else the call fails because it is attempting to change the policy's topology. The default value is an empty list. If a policy has only a single node and therefore no connections, you may omit this element from the input to dp-policy-modify. It is always present in the output from dp-policy-list-iter-next.
- dp-policy-nodes => dp-policy-node-info[]
List of nodes in the policy. In the output from dp-policy-list-iter-next, this list is sorted in order of increasing node ID. In the input to dp-policy-modify, it need not be sorted. However, in input to dp-policy-modify, there must be a one-one mapping between entries in this array and nodes in the policy graph, or else the call fails because it is attempting to change the policy's topology.
- name => obj-name
Name of the policy. Each DP policy has a name that is unique among DP policies, but may be the same as an object of some other type.
Contains all information about a single DP policy, including its content and metadata such as its ID.
Fields
- id => obj-id
Object ID for the policy.
- is-modifiable => boolean
If false, this policy is one of the sample policies that is created at installation time, therefore it may not be deleted using dp-policy-destroy and may not be modified using dp-policy-modify. If true, it is not one of the sample policies, therefore it may be deleted or modified. This element cannot be modified.
- is-non-disruptive-restore-compatible => boolean
If false, this policy cannot be assigned to a dataset with the requires-non-disruptive-restore attribute set. If true and the policy is applied to a dataset with the requires-non-disruptive-restore attribute set to true, Protection Manager will configure the dataset such that backup connections support non-disruptive restores. This does not mean that all backup versions and restore requests will support non-disruptive restore; the caller must check the supports-non-disruptive-restore output element from the dp-backup-list iterator.
Information about a node in a DP policy. The node's properties are represented by optional elements. The rules for when a property is present or absent are defined in the property's Description, and depend on the type of node. There is a set of properties associated with nodes that determine how long the system retains backups. Each backup falls into one of these four retention classes: hourly, daily, weekly, or monthly. The following eight properties determine how long backups are retained for each of the retention classes:
- hourly-retention-count
- hourly-retention-duration
- daily-retention-count
- daily-retention-duration
- weekly-retention-count
- weekly-retention-duration
- monthly-retention-count
- monthly-retention-duration
A backup expires when its age, in seconds, exceeds the retention duration set for its retention class on the node on which it is stored. After a backup expires, whenever the number of newer backups in its retention class at least equals the retention count for its class, then the expired backup is deleted. All eight of the retention properties are present only on the root node and on backup secondary nodes. On the root node, the retention properties apply to local snapshots created on the root node. On backup secondary nodes, the retention properties apply to backups of primary node data that are stored on the backup secondary node. The retention properties are absent from mirror destination nodes because a mirror destination retains the same set of backups as its mirror source.
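The retention rule above (a backup is deleted only when it is past the retention duration for its class and at least retention-count newer backups of that class exist) can be sketched as follows. This is an illustrative model of the documented rule, not SDK code; ages are in seconds, with smaller ages meaning newer backups.

```python
def expired_backups(backup_ages, retention_count, retention_duration):
    """Return the ages of backups that may be deleted under the rule
    above. backup_ages lists backups of one retention class on one
    node; retention_count is the minimum number to keep, and
    retention_duration is the expiry age in seconds."""
    ages = sorted(backup_ages)  # newest (smallest age) first
    deletable = []
    for index, age in enumerate(ages):
        newer_count = index  # number of backups newer than this one
        # Deleted only when expired AND enough newer backups exist.
        if age > retention_duration and newer_count >= retention_count:
            deletable.append(age)
    return deletable
```

For example, with a one-hour retention duration and a retention count of 2, a backup older than an hour survives until two newer backups in its class exist.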
Fields
- backup-script-path => string, optional
Absolute path on the management station to the script to invoke both before and after backing up a backup primary node. The script is invoked when either a primary node is backed up to a secondary node or a local backup is created on the root node. The path consists of 0 to 255 characters. If the length of a backup-script-path passed as input to dp-policy-modify is longer than 255 characters, then the ZAPI fails with error code EINVALIDPOLICYPROPERTY. An empty string value "" indicates no script is invoked. The system does not check whether a non-empty path string actually refers to an executable script prior to attempting to run the script. The default value of this property is the empty string "".
This property is present only for the root node.
- backup-script-run-as => string, optional
The backup script is run on the management station by the user with this username. The value of this property is used only if the management system is running on a Unix host. On Windows the script is always invoked using the LocalSystem account. The username consists of 0 to 64 characters. If the length of a backup-script-run-as passed as input to dp-policy-modify is longer than 64 characters, then the ZAPI fails with error code EINVALIDPOLICYPROPERTY. The system does not check whether a user with this username actually exists prior to attempting to run the script.
The default value of this property is the empty string "". If the property value is "", then on Unix the script is run by the superuser ("root").
This property is present only for the root node.
- daily-retention-count => integer, optional
Minimum number of backups from the daily retention class that the system retains on this node. Range: [0..252]. The default value of this property is 0. This property is present only for root and backup secondary nodes.
For more details, see the description of dp-policy-node-info.
- daily-retention-duration => integer, optional
The age, in seconds, after which a backup from the daily retention class expires on this node. Range: [0..2^31 - 1]. The default value of this property is 0. This property is present only for root and backup secondary nodes.
For more details, see the description of dp-policy-node-info.
- failover-script-path => string, optional
Absolute path on the management station to the script to invoke both before and after breaking mirrors during a failover operation. The path consists of 0 to 255 characters. If the length of a failover-script-path passed as input to dp-policy-modify is longer than 255 characters, then the ZAPI fails with error code EINVALIDPOLICYPROPERTY. An empty string value "" indicates no script is invoked. The system does not check whether a non-empty path string actually refers to an executable script prior to attempting to run the script. The default value of this property is the empty string "".
This property is present only for the root node.
- failover-script-run-as => string, optional
The failover script is run on the management station by the user with this username. The value of this property is used only if the management system is running on a Unix host. On Windows the script is always invoked using the LocalSystem account. The username consists of 0 to 64 characters. If the length of a failover-script-run-as passed as input to dp-policy-modify is longer than 64 characters, then the ZAPI fails with error code EINVALIDPOLICYPROPERTY. The system does not check whether a user with this username actually exists prior to attempting to run the script.
The default value of this property is the empty string "". If the property value is "", then on Unix the script is run by the superuser ("root").
This property is present only for the root node.
- hourly-retention-count => integer, optional
Minimum number of backups from the hourly retention class that the system retains on this node. Range: [0..252]. The default value of this property is 0. This property is present only for root and backup secondary nodes.
For more details, see the description of dp-policy-node-info.
- hourly-retention-duration => integer, optional
The age, in seconds, after which a backup from the hourly retention class expires on this node. Range: [0..2^31 - 1]. The default value of this property is 0. This property is present only for root and backup secondary nodes.
For more details, see the description of dp-policy-node-info.
- id => integer
Identifier of the node. This is assigned by the system, and therefore may not be modified. The node at the root of the graph is id 1. Range: [1..2^31 - 1].
- monthly-retention-count => integer, optional
Minimum number of backups from the monthly retention class that the system retains on this node. Range: [0..252]. The default value of this property is 0. This property is present only for root and backup secondary nodes.
For more details, see the description of dp-policy-node-info.
- monthly-retention-duration => integer, optional
The age, in seconds, after which a backup from the monthly retention class expires on this node. Range: [0..2^31 - 1]. The default value of this property is 0. This property is present only for root and backup secondary nodes.
For more details, see the description of dp-policy-node-info.
- name => dp-policy-node-name
Name of the node. Each node has a name that is unique within its policy, but nodes in different policies may share the same name. If the length of a name passed as input to dp-policy-modify is longer than 64 characters, then the ZAPI fails with error code EINVALIDPOLICYPROPERTY.
- snapshot-schedule-id => obj-id, optional
An object ID for the snapshot schedule for this node. The snapshot schedule specifies when to create local point-in-time snapshot images on the storage elements that map to this node. The value of snapshot-schedule-id may be 0, in which case no snapshot schedule is set for this node. If no snapshot schedule is set, no local snapshots are created. If the value is not 0, it must be the ID of a DP schedule object.
If both snapshot-schedule-id and snapshot-schedule-name appear in the input to dp-policy-modify, then snapshot-schedule-id determines the ID of the snapshot schedule, and the value of snapshot-schedule-name is ignored. If neither snapshot-schedule-id nor snapshot-schedule-name appear in the input to dp-policy-modify, then the default value is an ID of 0 (equivalent to a name of ""), which means no snapshot schedule is set.
Both snapshot-schedule-id and snapshot-schedule-name always appear in the output from dp-policy-list-iter-next for the root node of a policy.
This property is present only for the root node. This is because all data on a non-root node comes from mirror or backup connections, and mirroring and backups create their own snapshots. Therefore there is no reason for a non-root node to generate additional snapshots on its own schedule, because they would contain no new data.
If a policy has been applied to a dataset whose root storage set contains OSSV directories, then you cannot set a snapshot schedule for the root node of the policy. This restriction exists because OSSV hosts cannot create local snapshots.
- snapshot-schedule-name => obj-name, optional
An object name for the snapshot schedule for this node. The snapshot schedule specifies when to create local point-in-time snapshot images on the storage elements that map to this node. The value of snapshot-schedule-name may be the empty string (""), in which case no snapshot schedule is set for this node. If no snapshot schedule is set, no local snapshots are created. If the value is not "", it must be the name of a DP schedule object.
If both snapshot-schedule-id and snapshot-schedule-name appear in the input to dp-policy-modify, then snapshot-schedule-id determines the ID of the snapshot schedule, and the value of snapshot-schedule-name is ignored. If neither snapshot-schedule-id nor snapshot-schedule-name appear in the input to dp-policy-modify, then the default value is a name of "" (equivalent to an ID of 0), which means no snapshot schedule is set.
Both snapshot-schedule-id and snapshot-schedule-name always appear in the output from dp-policy-list-iter-next for the root node of a policy.
This property is present only for the root node. This is because all data on a non-root node comes from mirror or backup connections, and mirroring and backups create their own snapshots. Therefore there is no reason for a non-root node to generate additional snapshots on its own schedule, because they would contain no new data.
If a policy has been applied to a dataset whose root storage set contains OSSV directories, then you cannot set a snapshot schedule for the root node of the policy. This restriction exists because OSSV hosts cannot create local snapshots.
- weekly-retention-count => integer, optional
Minimum number of backups from the weekly retention class that the system retains on this node. Range: [0..252]. The default value of this property is 0. This property is present only for root and backup secondary nodes.
For more details, see the description of dp-policy-node-info.
- weekly-retention-duration => integer, optional
The age, in seconds, after which a backup from the weekly retention class expires on this node. Range: [0..2^31 - 1]. The default value of this property is 0. This property is present only for root and backup secondary nodes.
For more details, see the description of dp-policy-node-info.
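The precedence rule for the snapshot schedule fields described above can be sketched as follows (an illustrative Python sketch, not SDK code; the function name is an assumption):

```python
# Illustrative sketch (not SDK code) of the dp-policy-modify precedence
# rule for the snapshot schedule fields: if both snapshot-schedule-id and
# snapshot-schedule-name appear in the input, the id determines the
# schedule and the name is ignored; if neither appears, the default is an
# id of 0 (equivalent to a name of ""), meaning no schedule is set.

def resolve_snapshot_schedule(schedule_id=None, schedule_name=None):
    if schedule_id is not None:
        return schedule_id        # id wins; schedule_name is ignored
    if schedule_name:             # a non-empty name selects a schedule
        return schedule_name
    return 0                      # no snapshot schedule is set
```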
Name of a node in a DP policy graph. This typedef is an alias for the builtin ZAPI type string. A node name may contain from 1 to 64 characters. It may start with any character and may contain any combination of characters, except that it may not consist solely of decimal digits ('0' through '9'). The name of each node in a DP policy must be unique, but nodes in different policies may share the same name. Node names are always compared in a case-insensitive manner. This means that, for example, "a" and "A" are considered to be the same name for purposes of: creating new nodes for a new policy, renaming nodes of an existing policy, or looking up a policy's nodes by name. On the other hand, ZAPIs that return node names do not change the capitalization at all. For example, if a node's name has been set to "Backup", then dp-policy-list-iter-next returns its name as "Backup".
Note that a node name has the same format as an obj-name, and has similar properties such as case-insensitivity. However, a node name is not a kind of obj-name because a DP policy node is a part of a DP policy object, but is not itself a first-class DFM object.
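The node-name rules above can be sketched as simple validation and comparison logic (an illustrative Python sketch, not SDK code; the function names are assumptions):

```python
# Illustrative sketch (not SDK code) of the dp-policy-node-name rules
# described above: 1 to 64 characters, may not consist solely of decimal
# digits, and names are compared case-insensitively.

def is_valid_node_name(name):
    if not 1 <= len(name) <= 64:
        return False
    return not name.isdigit()     # must not be all '0'..'9'

def node_names_equal(a, b):
    # "a" and "A" are considered the same name for lookups and renames.
    return a.lower() == b.lower()
```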
Fields
Information about a single SnapVault or SnapMirror relationship. Currently, there are three kinds of relationships: SnapVault, Qtree SnapMirror (QSM) and Volume SnapMirror (VSM).Fields
- dataset-id => obj-id, optional
ID of the dataset this relationship is protecting. Not present if this relationship is not protecting a component of a dataset.
- dataset-name => obj-name, optional
Name of the dataset this relationship is protecting. Not present if this relationship is not protecting a component of a dataset.
- deleted-timestamp => dp-timestamp, optional
The time and date when the relationship was deleted.
- destination-type => relationship-endpoint-type
Type of the destination storage object. SnapVault and Qtree SnapMirror relationships always terminate at a Qtree. Volume SnapMirror relationships always terminate at a Volume.
- is-dp-ignored => boolean
True if the source object for this relationship is being ignored by Protection Manager. See the discussion section for the meaning of "ignoring" objects.
- is-dp-imported => boolean
True if this relationship was at some earlier time imported into a Protection Manager dataset. A relationship may be marked as imported and dp managed and yet not be in a dataset.
- is-dp-managed => boolean
True if this relationship is set to be managed by Protection Manager. See the discussion section for the meaning of "being managed".
- is-dp-orphan => boolean
True if this relationship is considered an orphan by Protection Manager. An orphan relationship is a relationship that is managed by Protection Manager and was once part of a dataset but is no longer contained in a dataset.
- is-dp-redundant => boolean
True if this relationship is considered to be a redundant relationship by Protection Manager. A redundant relationship duplicates the need of a dataset connection by having its source and destination between the same two nodes as another relationship in the same dataset. It is possible to have more than one redundant relationship in the same dataset.
- is-in-dataset => boolean
True if the source object for this relationship matches a dataset. A relationship is considered to match a dataset if:
- the source object is a member of a storage set
- the destination object is a member of a storage set
- both storage sets are assigned to adjacent nodes of a single dataset.
If the relationship does not match the above criteria, it is not considered part of a dataset.
- lag => integer
Seconds since the completion of the last successful transfer to the destination. This value is undefined and is returned as 0 for uninitialized relationships. Range: [0..2^31-1]
- lag-status => obj-status
Lag status of the relationship lag versus the DP policy lag threshold settings. Possible values are "unknown", "normal", "warning" and "error".
- relationship-id => obj-id
Identifier of the relationship.
- relationship-state => relationship-state
State of the relationship
- relationship-type => relationship-type
Type of the relationship.
- source-type => relationship-endpoint-type
Type of the source storage object. SnapVault relationships originate at either a Qtree or OSSV Directory. Qtree SnapMirror relationships originate at a Qtree. Volume SnapMirror relationships originate at a Volume.
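The is-in-dataset matching criteria listed above can be sketched as follows (an illustrative Python sketch, not SDK code; the data model is an assumption for illustration):

```python
# Illustrative sketch (not SDK code) of the is-in-dataset matching
# criteria described above. The data model here is an assumption: a
# dataset is represented as an ordered list of sets, one set of storage
# set members per node, with adjacent entries being connected nodes.

def relationship_matches_dataset(source, destination, dataset_nodes):
    for upstream, downstream in zip(dataset_nodes, dataset_nodes[1:]):
        # source and destination must each be a member of a storage set,
        # and those storage sets must be on adjacent nodes of the dataset
        if source in upstream and destination in downstream:
            return True
    return False
```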
Status of the relationship. Possible values are "idle", "transferring", "pending", "aborting", "migrating", "quiescing", "resyncing", "waiting", "syncing", "in_sync" or "paused".
- idle: No data is being transferred.
- transferring: Transfer has been initiated, but has not yet finished, or is just finishing.
- pending: The secondary storage system cannot be updated because of a resource issue; the transfer is retried automatically.
- aborting: A transfer is being aborted and cleaned up.
- migrating: Valid only in case of 'qtree_snapmirror' or 'volume_snapmirror' relationships.
- quiescing: The specified volume or qtree is waiting for all existing transfers to complete. The destination is being brought into a stable state.
- resyncing: The specified volume or qtree is being matched with data in the common Snapshot copy.
- waiting: Valid only in case of 'qtree_snapmirror' or 'volume_snapmirror' relationships. SnapMirror is waiting for a new tape to be put in the tape device.
- syncing: Valid only in case of 'qtree_snapmirror' or 'volume_snapmirror' relationships.
- in_sync: Valid only in case of 'qtree_snapmirror' or 'volume_snapmirror' relationships.
- paused: Valid only in case of 'qtree_snapmirror' or 'volume_snapmirror' relationships.
Fields
Type of an object at either the source or destination of a data protection relationship. Values for source objects are "ossv_directory", "volume" or "qtree". Values for destination objects are "volume" or "qtree".Fields
State of the relationship. Possible values are "uninitialized", "snapvaulted", "snapmirrored", "broken_off", "quiesced", "source", "unknown" or "restoring".
- uninitialized: The destination storage volume or qtree is not yet initialized or is being initialized.
- snapvaulted: Valid only in case of 'snapvault' relationships. The relationship is created and the qtree is a SnapVault secondary destination.
- snapmirrored: Valid only in case of 'qtree_snapmirror' or 'volume_snapmirror' relationships. The destination volume or qtree is in a SnapMirror relationship.
- broken_off: Valid only in case of 'qtree_snapmirror' or 'volume_snapmirror' relationships. The destination was in a SnapMirror relationship, but a snapmirror break command made the volume or qtree writable. This state is reported as long as the base Snapshot copy is still present in the volume. If the Snapshot copy is deleted, the state is listed as "uninitialized" if the destination is in the /etc/snapmirror.conf file; otherwise, the relationship is no longer listed. A successful snapmirror resync command restores the snapmirrored status.
- quiesced: Valid only in case of 'qtree_snapmirror' or 'volume_snapmirror' relationships. SnapMirror is in a consistent internal state and no SnapMirror activity is occurring. In this state, you can create Snapshot copies with confidence that all destinations are consistent. The snapmirror quiesce command brings the destination into this state. The snapmirror resume command restarts all SnapMirror activities.
- source: This state is reported when the snapvault status or snapmirror status command is run on the primary storage system. When the destination is on another system, its status is unknown, so the source status is reported. In case of 'snapvault' relationships, it also appears if snapvault status is run on secondary storage systems after the snapvault restore command was run on an associated primary storage system.
- unknown: The destination volume or the volume that contains the destination qtree is in an unknown state. It might be offline or restricted.
- restoring: Valid only in case of 'snapvault' relationships.
Fields
Type of a data protection relationship. Legal values are "snapvault", "qtree_snapmirror" and "volume_snapmirror".Fields
Information about a path inside a dataset.Fields
- member-name-or-id => obj-name-or-id
Name or ID of the dataset component that is at the root of the file tree. This may be the qtree, OSSV directory, or volume that is the source of the physical data protection relationship.
- path => string
Name of the path. The maximum path length is 32767 characters. This path is relative to "member-name-or-id" path in the storage system or OSSV host. Path cannot be empty.
- primary-snapshot-name => string, optional
Name of the Snapshot copy on the primary where the data being restored originated. This is ignored if primary-snapshot-unique-id is specified.
- primary-snapshot-unique-id => string, optional
Unique id of the Snapshot copy on the primary where the data being restored originated. Currently, this is the Snapshot copy's creation time. This is used to differentiate between multiple copies of the root object (specified by member-name-or-id input) in the same backup. If not specified and different versions of the root object exist in two Snapshot copies in the backup, then the restore might be ambiguous.
- restore-configuration => boolean, optional
Valid only when the OSSV directory being restored is a Virtual Machine. If false, the configuration of the Virtual Machine is not changed and only disks are restored. If true, the configuration and data are restored. Default value is true.
Information about a member inside a dataset.Fields
- destination-host => obj-name-or-id
Host to restore this dataset member to. The destination host must be compatible with the member type of the member being restored. For example, if the member is a volume, then this destination host must be a storage system.
- destination-path => string, optional
Path to restore this dataset member to. The path will be interpreted relative to destination-host.
If the destination host is a storage system, the path must not already exist. If the destination host is an OSSV host, the path will be overwritten. In case of a Virtual Machine restore to the original location:
- The path must either be an empty string or not be specified.
- The restore will happen through the destination host.
The format of this value depends on the host type of destination-host. If destination-host is a storage system, then it should be of the form "/vol1/[qtree1][/dir1..]". If a full volume restore is being specified, then the destination path must be a volume "/vol". If a full qtree restore is specified, and if the destination is a storage system, the destination path can be either "/vol" or "/vol/qtree"; if it is "/vol", the server automatically appends the qtree name to the destination path to turn it into "/vol/qtree" format. If destination-host is an OSSV host, then it should be a pathname appropriate for the OSSV host. A Windows host will accept paths such as "C:\My Documents" and a UNIX host will accept paths such as "/home/user". If the specified dataset-name-or-id is of an application dataset, all the above rules apply. In addition, the files will be restored under a uniquely named directory under the destination path. The directory name is created by adding together three separate pieces of information (listed below) separated by a "_" character:
- Name of the primary Snapshot where data originated. All characters except [A-Z], [a-z], [0-9], '.', '-', '_', ')' and '(' are stripped from the Snapshot name and the name is trimmed to the first 42 characters before it is used. The 42 character limit ensures that the total length of the uniquely named directory does not exceed 64 characters.
- The DFM identifier of the primary qtree where the data originated.
- The DFM identifier of the Snapshot. This is currently the timestamp when the Snapshot was created.
The maximum destination length is 32767 characters. In some cases, shorter destination lengths will fail due to limitations of underlying storage systems. Default value is an empty string.
- destination-path-skip-extra-directory => boolean, optional
If this input is specified and is true, no extra directory will be added to the destination path. See destination-path for more information on how an extra directory is added to the destination path for an application dataset. If this input is true, file or directory names may collide. In case of a name collision, an error will be thrown indicating which paths are colliding. This input can then be set to false, or the API invocation can be repeated with the colliding source path(s) restored to an alternative destination path. Default is false.
- member-name-or-id => obj-name-or-id
Name or identifier of the dataset component that is at the root of the file tree. This may be the qtree, OSSV directory, or volume that is the source of the physical data protection relationship.
- path => string
Name of the path. The maximum path length is 32767 characters. This path is relative to "member-name-or-id" path in the storage system or OSSV host. Path cannot be empty. "/" is considered as full member restore when the member is a volume, qtree or OSSV directory.
- primary-snapshot-name => string, optional
Name of the Snapshot copy on the primary where the data being restored originated. This is ignored if primary-snapshot-unique-id is specified.
- primary-snapshot-unique-id => string, optional
Unique id of the Snapshot copy on the primary where the data being restored originated. Currently, this is the Snapshot copy's creation time. This is used to differentiate between multiple copies of the root object (specified by member-name-or-id input) in the same backup. If not specified and different versions of the root object exist in two Snapshot copies in the backup, then the restore might be ambiguous.
- restore-configuration => boolean, optional
Valid only when the OSSV directory being restored is a Virtual Machine. If false, the configuration of the Virtual Machine is not changed and only disks are restored. If true, the configuration and data are restored. Default value is true.
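The unique directory-name construction described under destination-path for application datasets can be sketched as pure string logic (an illustrative Python sketch, not SDK code; the function name is an assumption):

```python
import re

# Illustrative sketch (not SDK code) of the unique restore directory name
# described under destination-path for application datasets: the
# sanitized, 42-character-trimmed primary Snapshot name, the DFM id of
# the primary qtree, and the DFM id of the Snapshot, joined by "_".

def restore_directory_name(snapshot_name, qtree_id, snapshot_id):
    # Keep only [A-Z], [a-z], [0-9], '.', '-', '_', ')' and '('
    sanitized = re.sub(r"[^A-Za-z0-9.\-_()]", "", snapshot_name)
    sanitized = sanitized[:42]    # trim to the first 42 characters
    return "_".join([sanitized, str(qtree_id), str(snapshot_id)])
```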
Hyper-V specific restore settings.Fields
- start-vm-after-restore => boolean, optional
Bring the VM online after restore. This option is applicable only when restoring a VM and will be ignored otherwise. If this input is not specified, it will default to false.
Various settings for a virtual object to be restored.Fields
A list of virtual objects to be restored, and per object settings for this restore operation.Fields
- restore-object-name-or-id => obj-name-or-id
Name or ID of the virtual object to restore. The object must be present in the specified backup and must be restorable. If the object is not restorable, error EOBJECTNOTRESTORABLE will be returned.
Various settings for the operation to restore the virtualization objects.Fields
- restore-script => string, optional
Name of the script to be invoked on the Host Services station both before and after the restore. The restore-script consists of 0 to 255 characters. The default value of this property is the empty string "".
An empty string value "" indicates no script is invoked. The system does not check whether a non-empty path string actually refers to an executable script prior to attempting to run the script. For example, %env%\scripts\restore.ps1 OR c:\program..\HS\scripts\restore.ps1 OR k:\program..\HS\scripts\restore.ps1 [k is a network share] OR UNC path \\SCRIPTSSVR\share\scripts\restore.ps
Various settings for a VMWare object to be restored.Fields
VMware specific restore settings.Fields
- esx-server-name-or-id => obj-name-or-id, optional
Name or ID of the ESX server to mount the backup to during restore. If this input is not specified, it will not be sent to the VMWare plugin, and it is up to the discretion of the plugin to choose its behavior.
- start-vm-after-restore => boolean, optional
Bring the VM online after restore. This option is applicable only when restoring a VM and will be ignored otherwise. If this input is not specified, it will default to false.
Information about the member and the destination.Fields
- file-path => string
Destination path to which the file was restored. The path includes the name of the file being restored.
- overwrite-status => string
For Data ONTAP versions below 7.3.0, the system cannot check for overwrite. In that case, this element returns a predefined string indicating that the overwrite status could not be checked. In all other cases, the overwrite status is returned. Possible values: "overwrite", "overwrite_check_unsupported"
Information about the destination volume which does not have enough space for the restoreFields
- dest-host => string
Name of the destination host where the restore would take place.
The attributes of a day listFields
Description of a DFM object using the DP throttle.Fields
- assignee-full-name => obj-full-name
Full name of a DFM object.
- assignee-id => obj-id
Identification number of a DFM object.
Attributes of a throttle scheduleFields
- is-modifiable => boolean, optional
If false, this throttle is one of the sample throttles created at installation time and therefore may not be modified, renamed, or destroyed. If true, it is not one of the sample throttles and therefore may be modified, renamed, or destroyed. is-modifiable always appears in the output. It is not possible to use it as input.
- throttle-description => string, optional
Description of the throttle. It may contain from 0 to 255 characters. The description always appears in the output. If the description is omitted, then the default value is the empty string "".
The attributes of a throttle itemFields
- end-hour => integer
End hour of the throttle item. Range: [0..23]
- end-minute => integer
End minute of the throttle item. Range: [0..59]
- item-id => integer, optional
Identifier for the throttle item. Ignored in input by all zapi calls. Will be present in output. Range: [1..(2^31)-1]
- start-hour => integer
Start hour of the throttle item. Range: [0..23]
- start-minute => integer
Start minute of the throttle item. Range: [0..59]
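The start/end hour and minute fields above define a daily time window. A window test can be sketched as follows (an illustrative Python sketch, not SDK code; how DFM treats windows that cross midnight is an assumption here):

```python
# Illustrative sketch (not SDK code): testing whether a time of day falls
# inside a throttle item's window, using the hour/minute fields above.
# How DFM treats windows that cross midnight is an assumption; this
# sketch treats end < start as a wrap-around window.

def in_throttle_window(hour, minute, item):
    t = hour * 60 + minute
    start = item["start-hour"] * 60 + item["start-minute"]
    end = item["end-hour"] * 60 + item["end-minute"]
    if start <= end:
        return start <= t <= end
    return t >= start or t <= end   # window wraps past midnight
```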
Result of action taken on event. Timestamp returned on success, and error code on failure.Fields
- error-message => string, optional
Error message returned from event acknowledge/delete. Absent on success.
- event-id => integer
The input event identifier. Range: [0..2^32-1]
- timestamp => integer, optional
Timestamp when the event was acknowledged/deleted. The timestamp is absent for IDs that cannot be found or have already been acknowledged/deleted. Range: [0..2^32-1]
Denotes the kind of application the event is for. Possible values: 'monitoring', 'data_protection', 'performance', 'performance_diagnosis'
Fields
Event identifier. Range: [1..2^32-1]Fields
Event information structureFields
- event-arguments => key-value-pair[], optional
Argument list for this particular event. Present only if include-event-arguments was set to true in the event-list-iter-start call. If the event has no arguments, this element will be empty. For example, some possible arguments are jobId, backupJobId, protectionJobId, or datasetId (the values for these are all integer IDs). The arguments returned depend on the event type and status. The list of possible arguments varies with each version and is large, which is why it is not included here.
- event-id => integer
Id of the event. Range: [1..2^31 - 1]
- event-name => string
Name of the event. The list of all event names can be obtained using eventclass-list APIs. The element eventclass-info -> event-names[] -> event-name-pretty gives the name of an event.
- event-originating-id => integer, optional
This is only returned if the event-type is object-deleted. It is the ID of the deleted object; in this case, the event-source-id is the management station ID. Range: [1..2^31 - 1]
- event-source-type => string
Type of object that generated the event. Possible values:
unknown
resource_group
host
aggregate
volume
qtree
interface
administrator
network
mgmt_station
configuration
quotauser
initiator_group
lun_path
fc_switch_port
fcp_target
directory
hba
fcp_initiator
san_host_cluster
srm_path
mirror
script
script_schedule
script_job
role
data_set
storage_set
resource_pool
dp_policy
dp_schedule
dp_throttle
ossv_directory
prov_policy
vfiler_template
disk
port
- event-type => string
Type or class to which the event belongs. A list of event types can be obtained by using the eventclass-list-iter APIs. The element event-class-name in eventclass-info gives the name of an event type.
range of event timestampsFields
- start-time => integer
Start timestamp, in seconds elapsed since midnight on January 1, 1970 (UTC).
Array of event filters.Fields
Information about an event class.Fields
- about-message => string
Description of the event class.
- event-class-id => integer
Database identifier of the custom event class or 0 in case of a canned event class.
- event-class-name => string
Name of the event class.
- is-allow-duplicates => boolean
The event service will not drop duplicate events of this event class if is-allow-duplicates is true. An event is a duplicate if it has the same event-name as a previous event with the same event-class and the same event-source. It is false by default.
- is-multi-current => boolean
The event service keeps multiple current events of this event class for each event source. Valid only with allow-duplicates. It is false by default.
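The duplicate rule described for is-allow-duplicates can be sketched as a simple comparison (an illustrative Python sketch, not SDK code; events are modeled here as dictionaries for illustration):

```python
# Illustrative sketch (not SDK code) of the duplicate rule described for
# is-allow-duplicates: an event is a duplicate of a previous event when
# its event-name, event-class, and event-source all match.

def is_duplicate_event(event, previous):
    keys = ("event-name", "event-class", "event-source")
    return all(event[k] == previous[k] for k in keys)
```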
Custom event class.Fields
- event-class-name => string
Custom event class name or its database identifier.
Information about an event name.Fields
- severity => string
Severity of the event name. Possible values: Emergency, Critical, Error, Warning, Information, Normal.
Information about one target.Fields
- host-id => obj-id
Identifier of the Storage System on which the target is present.
- host-name => string
DNS name of the Storage System on which the target is present.
- target-status => string
Operational status of the target. Possible values: "startup", "uninitialized", "initializing_fw", "link_not_connected", "waiting_for_link_up", "online", "link_disconnected", "resetting", "offline", "offlined_by_user_system", "unknown".
The sample values of a line in a graph.Fields
- sample-values => string
A comma separated list of timestamp:value pairs. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. The values may have optional decimal extensions, for example 1064439599:127, 1064439600:98.6, 1064439601:12. If the value of the sample is not available for a particular time, then the value returned will be empty e.g. (1188779400:).
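The timestamp:value format described above can be parsed as follows (an illustrative Python sketch, not SDK code; the function name is an assumption):

```python
# Illustrative sketch (not SDK code): parsing the comma-separated
# timestamp:value string described above. A sample with no value, such
# as "1188779400:", is represented here as None.

def parse_sample_values(s):
    samples = []
    for pair in s.split(","):
        ts, _, val = pair.partition(":")
        samples.append((int(ts), float(val) if val else None))
    return samples
```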
Information about a single group.Fields
- group-status => obj-status
Status of the group based on all events
- has-privilege => boolean
Indicates if user has the privilege checked by group-list-iter-start API. If TRUE, user has the requested privilege on this group. If FALSE, user does not have the requested privilege, but group is included because user has the requested privilege on one or more subgroups of this group. Group privileges are set using the RBAC APIs.
- id => integer
Numeric identifier of the group.
Range: [0..2^31-1]
- name => string
Name of the group.
- short-name => string
Short name of the group. This is a short name that does not include the name of the parent.
- type => string
The type of the members of the group. Multiple values are separated by commas. Possible values: Empty, Hosts, Volumes, Qtrees, Configurations, Lun Paths, SRM Paths, Aggregates, Datasets, and Resource Pools.
Name or ID of a group. If a group name is specified, it must be fully qualified.Fields
nullFields
Name or ID of an object.Fields
group name or identifierFields
Additional named attributes of the DFM object. Current attributes are: 'OS Version', 'OS Revision', and 'Primary Address'.Fields
- name => string
Name of the attribute.
nullFields
- name => string
DFM name of the DFM object.
- sub-type => string, optional
For members of type Host, this is the kind of host; possible values: filer, vfiler, agent, ossv, cluster, vserver. For all other members, this field is absent.
- type => string
Type of the DFM object which can be one of Host, Aggregate, Volume, Qtree, Configuration, Initiator Group, Lun Path, FCP Target, FCP Initiator, SRM Path, Resource Pool, Dataset, App Object or Host Service.
Description of an access object. An access object can be a role, usergroup, local user, or a domain user. The description of an access object can include any alphanumeric character, a space, or a punctuation character other than ':' (colon). Maximum Length: 128 charactersFields
Identification number (ID) for an access object. An access object can be a role, usergroup, local user or a domain user. The ID for an access object is always assigned by the DFM system. This typedef is an alias for the built-in type integer. Access object IDs are unsigned integers in the range [1..2^31-1].Fields
Name of an access object. An access object can be a role, usergroup, local user, or a domain user. The rules defined here are not applicable for domain users and usergroups. An access object can contain between 1 and 32 characters and include any alphanumeric character, a space, or a punctuation character that is not one of:
" * + , / : ; < = > ? [ ] |Fields
Name or id of an access object. An access object can be a role, usergroup, local user, or a domain user. This must conform to the format of either access-object-name or access-object-id.Fields
Name of a capability on the host. This must conform to one of the following formats: - "*"
- "login-*"
- "security-*"
- "cli-*"
- "api-*"
Here, instead of *, commands and sub-commands can be specified directly. Maximum Length: 64 charactersFields
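Because capability names follow the fixed prefixes above, they can be sanity-checked client-side before being sent to the server. A minimal illustrative helper, not part of the SDK; the regex is an assumption approximating the stated formats:

```perl
# Illustrative client-side check for the capability-name formats above:
# "*", or one of the prefixes login-/security-/cli-/api- followed by
# "*" or a concrete command. Not part of the SDK.
sub is_valid_capability_name {
    my ($name) = @_;
    return 0 unless defined $name && length($name) >= 1 && length($name) <= 64;
    return $name =~ /^(?:\*|(?:login|security|cli|api)-\S+)$/ ? 1 : 0;
}
```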
Name of the user in domain\username format. (Ex:NETAPP\rohan) Maximum Length: 288 characters (Domain name can contain up to 255 characters, and username can contain up to 32 characters)Fields
Name, ID, or SID of a domain user. This must conform to one of the following: - domainuser-name
- sid
- access-control-id
Fields
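The domain\username length rules above lend themselves to a simple client-side check; a hypothetical helper, not part of the SDK:

```perl
# Illustrative check for the domain\username format described above:
# domain up to 255 characters, username up to 32, 288 characters total
# including the backslash. Not part of the SDK.
sub is_valid_domainuser_name {
    my ($name) = @_;
    return 0 unless defined $name && length($name) <= 288;
    my ($domain, $user) = $name =~ /^([^\\]+)\\([^\\]+)$/;
    return 0 unless defined $domain;
    return (length($domain) <= 255 && length($user) <= 32) ? 1 : 0;
}
```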
A full set of NDMP credentials with no empty fields. This is a specific requirement for this ZAPI, since ndmp-username and ndmp-password are both required in order to add an OSSV host.Fields
- ndmp-username => ndmp-username
User name for logging into the host.
The default values for the attributes defined by the host.Fields
- perf-advisor-transport-host => string
The default value for the transport setting for communicating to the host for collecting performance data. Valid only for storage systems. Possible values: "http_only", "https_ok"
- snapvault-max-backup-threads => integer
Default maximum number of threads the DFM can use to coordinate in parallel the backup transfers for different backup relationships to the same secondary volume. This field is applicable only to storage systems that are SnapVault Secondary hosts. Range: [1..144]
- timeout => integer
Default value of the timeout used in ZAPIs that accept timeout arguments. Time is in seconds. Range: [1..100]
- use-hosts-equiv => boolean
The setting for communicating to the host using hosts.equiv authentication. Valid only for storage systems and vFiler units.
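Several fields above fall back to these global defaults, which host-get-defaults returns. A sketch of reading them through the Perl bindings, following the hyphen-to-underscore convention shown for dfm-about; the `host_get_defaults` binding name and the output keys (mirroring the host-default-info fields) are assumptions:

```perl
# Sketch only: host-get-defaults is the API named in this document; the
# host_get_defaults binding name and the output keys are assumptions.
sub show_host_defaults {
    my ($server) = @_;
    require NaServer;   # SDK runtime from <installation_folder>/lib/perl/NetApp
    my $s = NaServer->new($server, 1, 0);
    $s->set_admin_user('admin', 'password');
    $s->set_server_type('DFM');
    my $out = $s->host_get_defaults();
    print format_defaults($out), "\n";
}

# Pure helper, separated so it can be exercised without a live server.
sub format_defaults {
    my ($d) = @_;
    return join ' ', map { "$_=$d->{$_}" } sort keys %$d;
}
```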
Describes the contents of a domain user on the host. The host can be a Storage System or a vFiler unit. Output will always contain all the elements present in the type definition.Fields
- host-domainuser-id => access-object-id
Internal id of domain user on the host.
- host-id => obj-id
Id of the host on which the domain user is present.
- host-name => obj-name
Name of the host on which the domain user is present.
Data Protection specific information for this iterator. Default is false.Fields
- is-dp-ignored => boolean, optional
If true, only list aggregates that have been set to be ignored for purposes of data protection. If false, only list aggregates that have not been set to be ignored for purposes of data protection. If not specified, list all aggregates without taking into account whether they have been ignored or not.
- is-in-dataset => boolean, optional
If true, only hosts in a dataset are listed. Default is false.
- is-unprotected => boolean, optional
If true, only list hosts which are not protected, meaning hosts that are: - 1. not in any resource pool.
- 2. not a member of a node in a dataset with a protection policy assigned.
If false or not set, list all hosts.
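These flags are inputs to the aggregate-list-info-iter-* cycle described at the top of this document. A sketch of the full start/next/end sequence in the binding style shown for dfm-about; the key => value argument convention and the 'tag'/'records' output element names are assumptions based on the usual DFM iterator pattern:

```perl
# Sketch of the aggregate-list-info-iter-* start/next/end cycle.
# Assumptions (not confirmed by this document): key => value argument
# passing, and 'tag'/'records' as the -start output element names.
sub list_unignored_aggregates {
    my ($server) = @_;
    require NaServer;   # SDK runtime from <installation_folder>/lib/perl/NetApp
    my $s = NaServer->new($server, 1, 0);
    $s->set_admin_user('admin', 'password');
    $s->set_server_type('DFM');

    my $start = $s->aggregate_list_info_iter_start('is-dp-ignored' => 'false');
    my ($tag, $records) = iter_handle($start);

    # Fetch everything in one -next call, then tell the DFM station the
    # temporary store for this tag is no longer needed.
    my $batch = $s->aggregate_list_info_iter_next(tag => $tag, maximum => $records);
    $s->aggregate_list_info_iter_end(tag => $tag);
    return $batch;
}

# Pure helper pulling the bookkeeping out of the -start reply.
sub iter_handle {
    my ($out) = @_;
    return ($out->{tag}, $out->{records});
}
```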
Data Protection specific information for this iterator.Fields
- datasets => dataset-reference[]
List of datasets the host is a member of. If is-in-dataset is false, this list will be empty.
- host-dataset => obj-name, optional
Name of the dataset the host is a member of. This element will not be present if the host is not a member of a dataset. This field is deprecated in favor of datasets, which lists all datasets the host belongs to. It is still populated with one of the dataset ids.
- is-dp-ignored => boolean
Indicates if an administrator has chosen to ignore this host for purposes of data protection.
- is-in-dataset => boolean
Indicates if this host is a member of any dataset.
- ndmp-access-specifier => string, optional
This is the access expression ndmp uses to determine who has access. Valid only for storage systems. Examples of valid values include "all", or "host=abc,xyz AND if=e0". See na_protocolaccess(8) for access specifier syntax and usage. Length: [0..255]
- snapmirror-access-specifier => string
This is the access expression snapmirror uses to determine who has access. Valid only for storage systems. Examples of valid values include "all", or "host=abc,xyz AND if=e0". See na_protocolaccess(8) for access specifier syntax and usage. Length: [0..255]
- snapvault-access-specifier => string
This is the access expression snapvault uses to determine who has access. Valid only for storage systems. Examples of valid values include "all", or "host=abc,xyz AND if=e0". See na_protocolaccess(8) for access specifier syntax and usage. Length: [0..255]
- snapvault-max-backup-threads => integer, optional
Maximum number of threads the DFM can use to coordinate in parallel the backup transfers for different backup relationships to the same secondary volume. If this element is present and empty, the global default value is used. This field is applicable only to SnapVault Secondary hosts. Use host-get-defaults to determine the global default value. Range: [1..144]
Data protection specific host information.Fields
- is-dp-ignored => boolean, optional
True if an administrator has chosen to ignore this host for purposes of data protection. The default value is false. If not present, the current setting is not modified.
- snapvault-max-backup-threads => integer, optional
Maximum number of threads DFM can use to coordinate in parallel the backup transfers for different backup relationships to the same secondary volume. This field is applicable only to SnapVault Secondary hosts. If the element is present and empty or the value is 0, the default value set in the global options will be used. If not present, the current setting is not modified. Range: [0..144]
DFM host identifier. Range: [1..2^31-1]
Fields
Host's information.Fields
- admin-port => ip-port-number
The host's administrative port for executing Manage OnTap APIs. Valid only for Storage Systems, NetCache appliances, and Host Agents. If this element is present and empty, the global default value is used. Use host-get-defaults to determine the global default value.
- admin-transport => string
The transport used for communicating to the host. Possible values: "http", "https". If this element is present and empty, the global default value is used. Use host-get-defaults to determine the global default value.
- host-credentials-status => string, optional
The current status of the Storage System or Host Agent credentials. Possible values: "not_applicable", "unknown", "bad", "good", "read_only" (Storage System will never have a status of "read_only"). This value reflects whether the credentials were validated or invalidated by the Storage System or Host Agent. "unknown" means that the DFM has not been able to test the credentials (username/password) against the Storage System or Host Agent.
The value of this element will not change if we are unable to communicate with the host. The DFM can access the Host Agent if host-credentials-status is "good" or "read_only" and host-communication-status is "up".
If the Host Agent credentials are guest credentials, and have been validated by the Host Agent, the value will be "read_only". The value "not_applicable" applies to hosts (vFiler units only for now) where login credential status is not meaningful to the DFM.
- host-fqdn => string, optional
The fully qualified domain name of the host. Length: [1..255]
- host-name => string
This is the DFM name of the host. Length: [1..255]
- host-os => string
This is the OS version for the host. On OSSV Agents, this is the Windows OS version (Windows 2000, etc.). On all other hosts, this is the software release. Length: [1..16]
- host-perf-info => host-perf-info, optional
Current status of the host based on all events. ndmp-credentials-status will not be updated if ndmp-communication-status is bad, because the DFM is not able to connect to the NDMP agent on the storage system or OSSV agent to validate the credentials.
- host-status => string, optional
The current status of the host. This element displays the status of the host os, which is ONTAP in the case of storage systems, and is Windows, Linux etc. in the case of Host Agents or OSSV agents. Possible values: "unknown", "up", or "down".
- is-available => boolean, optional
Tells whether or not this host is up. For OSSV clients, if valid login credentials are not set, the is-available state will be false.
- licenses => license[]
List of licenses installed on the storage system. This will be returned only if the host-type is "filer".
- ndmp-agent-status => string, optional
The current status of NDMP. Possible values: "unknown", "up", or "down". Valid only for storage systems and OSSV agents. This status is more than the combined result of ndmp-communication-status and ndmp-credentials-status, since a value of "up" indicates the DFM monitor was successful in fully establishing the connection to the point that useful messages can be sent (this requires steps beyond exchanging credentials). If the value is "down" you need to look at ndmp-communication-status or ndmp-credentials-status. If those status values do not indicate problems, the problem may be occurring after credentials have been exchanged.
- ndmp-communication-timestamp => integer, optional
Date of the last attempt to establish a connection to validate the NDMP credentials. Valid only for storage systems and OSSV agents. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC.
- ndmp-credentials-status => string, optional
The current status of the NDMP credentials. Valid only for storage systems and OSSV agents. Possible values: "not_applicable", "unknown", "good", or "bad". The value "not_applicable" applies to hosts (vFiler units only for now) where ndmp credential status is not meaningful.
- ndmp-timestamp => integer, optional
Date of the last successful connection (i.e. the last time ndmp-agent-status was "up"). Valid only for storage systems and OSSV agents. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC.
- other-host-id => host-id, optional
A Host Agent and an OSSV Agent on the same machine will have different IDs. If this host is an OSSV Agent and DFM knows about a Host Agent on the same machine, or if this host is a Host Agent and DFM knows about an OSSV Agent on the same machine, this value is the ID of the other host.
- perf-advisor-transport => perf-advisor-transport, optional
The transport setting for communicating to the host for collecting performance data. Valid only for storage systems. If this element is present and empty, the global default value is used. Use host-get-defaults to determine the global default value.
- use-hosts-equiv => boolean, optional
The setting for communicating to the host using hosts.equiv authentication. Valid only for storage systems and vFiler units. If this element is present and empty, the global default value is used. Use host-get-defaults to determine the global default value.
Host information.Fields
- admin-port => ip-port-number, optional
The host's administrative port for executing APIs. Valid only for Host Agents or Storage Systems. If the port is being modified for a Host Agent operating under UNIX, the port number has to be greater than or equal to 1025.
- admin-transport => string, optional
The transport used for API communication with the host. Possible values are: 'http', 'https' Valid only for storage systems, ignored for others.
- host-address => string, optional
Primary IP address for the host. If not present, the current setting is not modified. Length: [0..39]
- ndmp-credentials => ndmp-credentials, optional
NDMP credentials for the host. Valid only for storage systems and OSSV agents. If not present, the current settings are not modified.
- perf-advisor-transport => perf-advisor-transport, optional
The transport setting for communicating to the host for collecting performance data. Valid only for storage systems.
- use-hosts-equiv => boolean, optional
The setting for communicating to the host using hosts.equiv authentication. Valid only for storage systems and vFiler units. If the element is present and empty, the default value set in the global options will be used. If not present, the current setting is not modified.
DFM host. A host can be a storage system, a vFiler unit, a switch, or an agent. Value can be a DFM object name (maximum 255 characters), a fully qualified domain name (FQDN)(maximum 255 characters), the ip address, or the DFM id [1..2^31-1].Fields
Password for logging in. Encrypted using 2-way encryption. Length: [0..64]
Fields
Performance specific information for this iteratorFields
- data-unavailable-reason => perf-status-error[], optional
This element is included only if is-data-available is set to false and indicates the reason(s) for failure.
- is-data-available => boolean
Indicates whether the performance data can be collected for this host. This will be present only for storage systems and vFiler units.
- last-client-stats-collection-time => timestamp, optional
Present in output only. This specifies the last time per-client statistics were collected for the host. It will not be present if per-client statistics were never collected for the host from Performance Advisor. This element is returned for Storage Systems only.
- percentage-space-consumed => string, optional
Present in output only. Indicates the percentage of total space consumed by this host's performance data. This element will be returned only if include-perf-space-details is set to true. It will be returned for Storage Systems only.
- space-consumed => integer, optional
Present in output only. Indicates the amount of server storage currently consumed by the performance data in bytes. This element will be returned only if include-perf-space-details is set to true. It will be returned for Storage Systems only.
- space-projected => integer, optional
Present in output only. Indicates the maximum amount of server storage required to store performance data in bytes. This element will be returned only if include-perf-space-details is set to true. It will be returned for Storage Systems only.
Id of the role on the host. The ID for a host role is always assigned by the DFM system. Range: [1..2^31-1].
Fields
Describes the contents of a role on the host. The host can be a storage system or a vFiler unit. Output will always contain all the elements present in the type definition.Fields
- capabilities => capability[], optional
List of capabilities the role is allowed.
- host-id => obj-id, optional
Id of the host on which the role is present.
- host-name => obj-name, optional
Name of the host on which the role is present.
Name of the role on the host. A host role name contains between 1 and 32 characters and can include any alphanumeric character, a space, or a punctuation character that is not one of:
" * + , / : ; < = > ? [ ] |
Fields
Type of host. Possible values are: filer, vfiler, cluster, cluster-controller, vserver, agent, ossv, and switch.Fields
Describes the contents of a local user on the host. The host type can be a Storage System or a vFiler unit. Minimum password age, Maximum password age, and status are applicable for hosts running ONTAP versions 7.1 and above. Apart from these fields, output will always contain all the elements present in the type definition.Fields
- capabilities => capability[], optional
List of capabilities of the local user, inherited from the usergroup(s) of which the user is a member.
- description => access-object-description, optional
Description of the local user.
- host-id => obj-id, optional
Id of the host on which the local user is present.
- host-name => obj-name, optional
Name of the host on which the local user is present.
- host-role-ids => host-role-id[], optional
List of ids of roles contained by the usergroup(s) of which the local user is a member.
- host-role-names => host-role-name[], optional
List of names of roles contained by the usergroup(s) of which the local user is a member.
- host-usergroup-ids => host-usergroup-id[], optional
List of ids of the usergroups of which the local user is a member.
- host-usergroup-names => host-usergroup-name[], optional
List of names of the usergroups of which the user is a member.
- password => host-user-password, optional
Password of the local user.
- status => string, optional
Status of the local user on the host. This element cannot be used as an input. Possible values: - "enabled"
- "disabled"
- "expired"
Password of the local user on the host. Encrypted using standard 2-way encryption. This must conform to the rules found in options "security.passwd.rules". By default, the password can contain 8 to 256 characters.Fields
Id of the usergroup on the host. The ID for a host usergroup is always assigned by the DFM system. Range: [1..2^31-1].
Fields
Describes the contents of a usergroup on the host. The host can be a storage system or a vFiler. Output will always contain all the elements present in the type definition.Fields
- capabilities => capability[], optional
List of capabilities the usergroup has inherited from the member roles.
- description => string, optional
Description of the usergroup on the host. Maximum Length: 128 characters
- host-id => obj-id, optional
Id of the host on which the usergroup is present.
- host-name => obj-name, optional
Name of the host on which the usergroup is present.
- host-role-ids => host-role-id[], optional
List of ids of roles the usergroup contains.
- host-role-names => host-role-name[], optional
List of role names the usergroup contains.
Name of the usergroup on the host. A host usergroup name contains between 1 and 32 characters and can include any alphanumeric character, a space, or a punctuation character that is not one of:
" * + , / : ; < = > ? [ ] |
Fields
Name or id of usergroup. This must conform to the format of either usergroup-name or access-object-id.Fields
IP port number. Range: [0..2^16-1]
Fields
Name of the licensed Data ONTAP service.
Possible values: "nfs", "cifs", "iscsi", "fcp", "multistore", "a_sis", "snapmirror_sync".Fields
NDMP credentials for a host or a network. Valid only for storage systems and OSSV agents.Fields
- ndmp-port => ip-port-number, optional
NDMP port. Valid only for storage systems and OSSV agents. If not present, the current setting is not modified.
Name of NDMP user. Length: [1..32]
Fields
The transport setting for communicating to the host for collecting performance data. Data collection is disabled when this option is set to Disabled. Possible values: "http_only", "https_ok", "disabled"
Fields
Status indicating whether a service is up or down.Fields
SID (Security Identifier) describing a user. Length: [5..128] characters Format: S-1-5-21-int-int-int-rid RID is a unique random integer generated by storage system/vFiler unit.Fields
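Given the fixed S-1-5-21-int-int-int-rid shape, the RID can be extracted with a short illustrative parser (not part of the SDK):

```perl
# Illustrative parser for the SID format described above; returns the
# trailing RID, or undef if the string does not match the
# S-1-5-21-int-int-int-rid shape. Not part of the SDK.
sub sid_rid {
    my ($sid) = @_;
    my ($rid) = ($sid // '') =~ /^S-1-5-21-\d+-\d+-\d+-(\d+)$/;
    return $rid;
}
```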
Name of a usergroup on storage system or vFiler unit. Usergroup name can contain between 1 and 256 characters and include any alphanumeric character, a space, or a punctuation character that is not one of:
" * + , / : ; < = > ? [ ] |Fields
Name or id of usergroup on storage system or vFiler unit. This must conform to the format of either usergroup-name or access-object-id.Fields
Information about a vFiler unit. Available only if host-type is "vfiler".Fields
Ipspace of vFiler unit.Fields
Migration information for a vFiler unit.Fields
- destination-filer-id => obj-id
Database ID of the destination storage system
- destination-filer-name => obj-full-name
Full name of the destination storage system
- destination-vfiler-name => obj-full-name
Full name of the destination vFiler Unit
- migration-status => migration-status
Migration status of the vFiler unit
- source-filer-id => obj-id
Database ID of the source storage system
- source-filer-name => obj-full-name
Full name of the source storage system
- source-vfiler-id => obj-id
Database ID of the source vFiler unit
- source-vfiler-name => obj-full-name
Full name of the source vFiler Unit
Information about one vFiler unit IP addressFields
- prefix-length => integer
null
Information about a host service.Fields
- host-service-id => obj-id
Object Identifier of the Host Service.
- is-authorized => boolean
Indicates if the host service registration is authorized. A false value indicates that the host service is registered but not yet authorized by the DataFabric Manager administrator. Operations on the host service, such as discovery, configuration, and backup and restore of virtual machine data, are not allowed until the host service is authorized.
- needs-upgrade => string
Valid values are: 'unknown', 'yes', 'no', and 'running'. 'yes': a new version of the Host Service package is registered with the DataFabric Manager server and the current version of the Host Service is older than the newly registered package.
'unknown': the upgrade required status cannot be determined. 'no': no later packages have been added to DFM. 'running': an upgrade is already in progress.
- timezone-info => timezone-info
Timezone of the host where the Host Service is installed. Returns an empty string if the Host Service does not return a valid timezone. Currently valid time zones can be listed by timezone-list-info-iter.
Information about host service package.Fields
nullFields
storage system configuration.Fields
- login-protocol => string, optional
Protocol that the host service needs to use when calling ONTAP APIs on the storage systems. Valid values are 'http', 'https' and 'rpc'. If not specified, the default is 'https'.
Information about a storage system.Fields
- access-protocol => string, optional
Protocol used by host service to communicate with the storage system. Valid values are : http, https, rpc
- storage-system-id => obj-id
Object Identifier of the storage system in DataFabric Manager.
- storage-system-name => obj-name, optional
Name of the storage system.
Information about one interface.Fields
- host-id => obj-id
Identifier of the Filer on which the interface is present.
- host-name => string
DNS name of the Filer on which the interface is present.
- ifc-id => obj-id
Identifier of interface in DFM Server database.
Range: [1..2^31-1]
- ipspace => string
Name of the IPSpace to which the interface belongs.
- is-vlan-capable => boolean, optional
Specifies if the interface is capable of supporting VLAN tagging. An interface will not support VLAN tagging if: - It is not VLAN capable (at present only some INTEL NICs are supported for VLAN tagging.)
- It is a physical interface on which a VIF is configured.
- It is an interface that is already accepting traffic.
- mtu-size => integer, optional
Interface mtu size.
- netmask => string, optional
Netmask configured for the interface. Empty in case the interface is unconfigured or configured with IPv6 address.
- partner-interface-id => obj-id, optional
Partner interface identifier. This element is present only when the storage system on which the interface is present is in an Active/Active configuration and there is a partner interface configured for this interface.
- partner-interface-name => obj-full-name, optional
Partner interface name. This element is present only when the storage system on which the interface is present is in an Active/Active configuration and there is a partner interface configured for this interface.
- prefix-length => integer, optional
Prefix length for the IP address configured for the interface. Empty in case the interface is unconfigured. In the case of an IPv4 address, it is the number of netmask bits. Range: [1..127]
- status => string
Operation status of the interface, valid values are "up", "down", "testing", "unknown".
- type => string
Type of interface.
Possible values are "ethernet", "fddi", "loopback", "atm", "vif", "vlan", "unknown".
Information of one LDAP server: contains IP address and port number. This uniquely identifies the LDAP server in DFM.Fields
Information of one LDAP server.Fields
- ldap-server => ldap-server
IP address and port details of the LDAP server.
Name and Id of an iGroup.Fields
- igroup-id => obj-id
Identifier of the iGroup.
- igroup-name => obj-name
Name of the iGroup.
Information about a lun.Fields
- host-id => obj-id
Identifier of host on which the lun resides. Always present in the output.
- host-name => obj-name
Name of host on which the lun resides. Always present in the output. The name is any simple name such as myhost.
- lun-path => obj-name
Path name of the lun including the volume or qtree where the lun exists. The name will be similar to myvol/mylun or myvol/myqtree/mylun.
- qtree-name => obj-name, optional
Name of qtree on which the lun resides. Present in the output only if the lun resides on a qtree. The name is any simple name such as myqtree.
- volume-id => obj-id
Identifier of volume on which the lun resides. Always present in the output.
- volume-name => obj-name, optional
Name of volume on which the lun resides. The name is any simple name such as myvol. volume-name is not returned if the lun belongs to a qtree on a vfiler and the authenticated admin does not have the required capability. For details of the required capability, see description of rbac-operation input element in lun-list-info-iter-start api.
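Since lun-path takes the two forms shown ("myvol/mylun" or "myvol/myqtree/mylun"), the components can be split out with a small illustrative helper (not part of the SDK):

```perl
# Illustrative parser for the two lun-path forms described above.
# Returns a hash with volume, lun, and (for the three-part form) qtree;
# returns the empty list for anything else. Not part of the SDK.
sub parse_lun_path {
    my ($path) = @_;
    my @parts = split m{/}, $path;
    return if @parts < 2 || @parts > 3;
    my %r = (volume => $parts[0], lun => $parts[-1]);
    $r{qtree} = $parts[1] if @parts == 3;
    return %r;
}
```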
Dry run results of migration of each individual volumeFields
- dry-run-results => dry-run-result[]
Results of a dry run. Each result describes one action the system would take and the predicted effects of that action.
- volume-id => obj-id
Identifier of the volume to migrate
Job information of volume migration job.Fields
- job-id => integer
Identifier of job started for the volume migration.
- volume-id => obj-id
Identifier of the volume to migrate
- volume-name => obj-name
Full name of the volume to migrate
Details of a volume migration request.Fields
- cleanup-stale-storage => cleanup-stale-storage, optional
Indicates when the volumes in the source aggregate should be destroyed after successful migration. Default value is "cleanup_after_migration".
- destination-aggregate-name-or-id => obj-name-or-id, optional
Name or identifier of the destination aggregate to which all the volumes are to be migrated. If the destination aggregate is not provided, the system will select a suitable aggregate from the resource pools associated with the dataset node to which the volumes belong.
- retention-type => dp-backup-retention-type, optional
Retention type to which the backup version created should be archived for the backups created as part of running an on-demand update after successful migration. This element is ignored if run-on-demand-update is false. Default value is "daily".
- run-dedupe-scan => boolean, optional
Indicates whether a full deduplication scan has to be run on the new volume after migration. This option is applicable only for volumes that are enabled for deduplication, where it is useful to regenerate the fingerprint database used in deduplication; it is ignored for other volumes. Default value is false.
- run-on-demand-update => boolean, optional
Indicates whether an on-demand update has to be triggered, after successful migration, on the dataset to which the migrated volumes belong. Default value is false.
- volumes => obj-name-or-id[]
Names or ids of one or more volumes to be migrated. Currently, all the volumes should belong to the same aggregate. If the volumes belong to a dataset, they should all belong to the same node of the dataset.
Type of migrating routes. Possible values are: 'static', 'persistent', 'none'.
- static: Static routes present in the IPSpace of the migrating vFiler unit will be migrated.
- persistent: Persistent routes present in /etc/rc file related to the migrating vFiler unit will be migrated.
- none: None of the routes will be migrated. Choosing this option may lead to disruptive migration if the routes in the migrating vFiler unit are not present on the destination storage system. This option should be chosen only when all the routes are already present on the destination storage system. By default 'static' routes will be migrated.
Fields
A volume and aggregate pair.Fields
Information about one network interface.Fields
- bytes-in => integer
The total number of octets received on the interface, including framing characters.
Range: [0..2^63-1]
- bytes-out => integer
The total number of octets transmitted out of the interface, including framing characters.
Range: [0..2^63-1]
- discards-in => integer
The number of inbound packets which were chosen to be discarded even though no errors had been detected to prevent their being deliverable to a higher-layer protocol. One possible reason for discarding such a packet could be to free up buffer space.
Range: [0..2^63-1]
- discards-out => integer
The number of outbound packets which were chosen to be discarded even though no errors had been detected to prevent their being transmitted. One possible reason for discarding such a packet could be to free up buffer space.
Range: [0..2^63-1]
- errors-in => integer
The number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol.
Range: [0..2^63-1]
- errors-out => integer
The number of outbound packets that could not be transmitted because of errors.
Range: [0..2^63-1]
- name => string
The name of the interface.
- packets-in => integer
The number of packets delivered to a higher-layer protocol, including unicast, multicast, and broadcast packets.
Range: [0..2^63-1]
- packets-out => integer
The total number of packets that higher-level protocols requested be transmitted, including those that were discarded or not sent.
Range: [0..2^63-1]
- speed => integer
The interface's current bandwidth in bits per second.
Range: [0..2^32-1]
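The byte counters above are running totals, so a utilization figure has to be derived from two samples together with the speed field. An illustrative calculation, not an SDK call:

```perl
# Illustrative calculation: estimate outbound utilization (percent)
# from two bytes-out samples taken $interval seconds apart and the
# interface speed in bits per second. Not an SDK call.
sub out_utilization_pct {
    my ($bytes_prev, $bytes_now, $interval, $speed_bps) = @_;
    return undef unless $interval > 0 && $speed_bps > 0;
    my $bits = ($bytes_now - $bytes_prev) * 8;   # octets -> bits
    return 100 * $bits / ($interval * $speed_bps);
}
```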
Unique id representing network in DFM. Range: [0..2^31-1]Fields
Information of one network.Fields
- hop-count => integer
Represents hop count for the network. Zero will be returned if network is not discovered. Range: [0..2^31 -1]
- network-address => network-address
Address of the network.
- network-id => network-id
A unique id representing the network.
- prefix-length => prefix-length
The routing prefix length of the network.
Contains the CIFS operations performed by the client on the storage system.Fields
Describes an error encountered while collecting per-client statistics on the storage system or vfiler.Fields
- object-name => obj-name
The name of the vFiler unit or storage system for which this error was encountered.
Contains the per-protocol operations performed by a client on a storage system.Fields
Specifies all the attributes of a threshold on a single counter.Fields
- threshold-type => string, optional
Specifies whether an event is to be generated when the value of the counter is above ('upper') or below ('lower') the threshold-value. Possible values are 'upper' and 'lower'. Default is 'upper'.
A unit in which a counter is measured. Possible values are per_sec, b_per_sec (bytes/s), kb_per_sec (kb/s), mb_per_sec (mb/s), percent, millisec (milliseconds), microsec (microseconds), sec (seconds) and none.Fields
Day of week. Possible values : 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday' or 'sunday'Fields
nullFields
- error-reason => string, optional
If the copy operation failed, this element will have the reason for the failure. This element will not be present when the copy operation was successful.
- host-id => obj-id
Identifier of the destination storage system.
Contains data for top-N objectsFields
- value => string
Value for the object Max Length is 32
Contains the per-client operations on a host.Fields
- object-id => obj-id
The id, in the DFM database, of the storage system for which these client statistics have been collected.
- object-name => obj-name
The name of the storage system for which the per-client statistics were collected.
- start-time => timestamp
The time at which the collection of per-client statistics was enabled, in seconds from Epoch time.
- stat-id => stat-id
The id, in the DFM database, of this collection of statistics.
- stats => client-stats[]
Contains the operations performed by the top 20 clients operating on the host. If 20 clients are not available, the operations for the maximum available clients will be present.
A specification for returning data for the given instances and counters.Fields
- instance-name => string, optional
For unmanaged objects, this will be the name of the instance. When this input is specified, the object-name-or-id should have the name or id of the storage system. Maximum length is 255
- object-name-or-id => obj-name-or-id, optional
If specified, all related instances of the object-type mentioned in the perf-object-counter under this object are returned. For example, if obj-name-or-id is a host and object-type in the perf-object-counter is qtree, this would mean all qtree under the host. If the obj-name-or-id is aggregate, all qtrees under the aggregate.
A performance counter, an object instance pair.Fields
- err-code => string
The string specifying the reason why this counter-instance pair is invalid. Possible values are: - "invalid_instance": The instance provided is invalid.
- "invalid_object_type": The object type provided is invalid.
- "no_instances_available": No instances found for the provided counter.
- "invalid_counter": The counter specified is not valid.
- "counter_unavailable": The counter specified is unavailable for this storage system.
- "incompatible_counter_instance": The counter and instance are incompatible (for example, data requested for a network interface counter but an aggregate provided as the instance).
- host-id => obj-id, optional
Name of the storage system to which the instance belongs to.
- instance-name => string, optional
Name of the instance.
Contains the computation performed and the valueFields
- id => string
Unique identifier, set by the user, to map the requested metrics to their values when multiple types of metrics are requested for the same counter. This field is not processed and is returned as-is in the output. Length must not exceed 16 characters.
- period => integer, optional
Computation window in seconds. Valid only when slide field is specified as "step", "rolling" or "cumulative". One metric value is calculated for each window of size period. Range: [1..2^31-1]
- slide => string, optional
Defines the method to slide the compute window specified by period field. Possible values: - "simple": No slide. Compute a single metric for the entire time range.
- "rolling": Slide the window by one sample at a time. period field has to be specified.
- "step": Slide the window value equal to period. period field has to be specified.
- "cumulative": The window keeps growing by the value equal to period
Default value: "simple"
- type => string
The computation type. Possible values: "mean": Mean of the data for the specified time period. "min": Minimum value. "max": Maximum value. "value_at_percentile": nth percentile of the samples, where n is specified in the value field.
- value => string, optional
Valid only when the type field is set to "value_at_percentile". The requested percentile value. This field accepts only positive real numbers as inputs. Range: (0..100]
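As a sketch of how the metric fields above fit together, here is a small Python helper (hypothetical, for illustration only; not part of the SDK) that builds a metric specification and enforces the stated rules: period is required for the "step", "rolling" and "cumulative" slides, a percentile must lie in (0..100], and the id is limited to 16 characters:

```python
def make_metric(metric_id, mtype, period=None, slide="simple", value=None):
    """Build a metric spec dict mirroring the 'metric' typedef fields."""
    # period is only meaningful (and required) for sliding windows
    if slide in ("step", "rolling", "cumulative") and period is None:
        raise ValueError("period is required when slide is %r" % slide)
    # percentile requests must carry a value in (0..100]
    if mtype == "value_at_percentile":
        p = float(value)
        if not (0 < p <= 100):
            raise ValueError("percentile must be in (0..100]")
    m = {"id": metric_id[:16], "type": mtype, "slide": slide}  # id capped at 16 chars
    if period is not None:
        m["period"] = period
    if value is not None:
        m["value"] = value
    return m
```

For example, a 5-minute stepped 95th-percentile request would be `make_metric("latency_p95", "value_at_percentile", period=300, slide="step", value="95")`.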
Month of year. Possible values : 'january', 'february', 'march', 'april', 'may', 'june', 'july', 'august', 'september', 'october', 'november' or 'december'Fields
Contains the NFS operations performed by the client on the storage system.Fields
- other-operations => integer
Specifies operations that do not fall into either read or write categories. Range: [ 0 .. 2^31 - 1 ]
- read-operations => integer
The number of read operations performed by the client on the storage system. Range: [ 0 .. 2^31 - 1 ]
- write-operations => integer
The number of write operations performed by the client on the storage system. Range: [ 0 .. 2^31 - 1 ]
Associated object's informationFields
- object-id => obj-id
The object that is associated with the view
- object-type => string, optional
Type of the DFM object. The possible values are - "resource_group"
- "data_set"
- "resource_pool"
- "filer"
- "vfiler"
- "volume"
- "qtree"
- "lun"
- "aggregate"
- "interface"
Name of the object type. The possible values are - "resource_group"
- "data_set"
- "resource_pool"
- "filer"
- "vfiler"
- "volume"
- "qtree"
- "lun"
- "aggregate"
- "disk"
- "interface"
- "processor"
- "target"
Fields
Information about a chart. A chart is a display (graph) with one or more data sources (lines).Fields
- chart-name => string
Name of the chart. This is intended to be a user-visible label like "CPU Graph". It should be unique to the performance view.
- dynamic-data-sources => perf-dynamic-data-sources, optional
Lines representing dynamic members. Data for each data source is drawn as one line in the chart.
- maximum-y => integer, optional
An optional maximum value for the y axis. The units are arbitrary and determined by the client. This may be used to help guide the chart construction in the client.
- type => string, optional
Type of the chart. Arbitrary string meaningful to the client only.
Describes a performance counter (a measurable quantity on a performance object). This might be, for example, the number of writes per second on a specific volume.Fields
- base-counter => string, optional
Name of the counter used as the denominator to calculate values of counters involving percentages. For additional details, please refer to the ONTAP API perf-object-counter-list-info
- is-display => boolean, optional
If TRUE, this counter can be displayed to the user. If FALSE, this counter is a base counter used as the denominator to calculate values of counters involving percentages. The base counters should not be displayed to the user. Default value is TRUE.
- name => string
Name of the counter
- privilege-level => string, optional
The counter privilege level, can be "basic", "advanced" or "diag". Any counter with a privilege level of "diag" is not guaranteed to work, to exist in future releases, or to remain unchanged.
- type => string, optional
Type of the counter. For additional details, please refer to the ONTAP API perf-object-counter-list-info
- unit => string, optional
Unit of the counter. For additional details, please refer to the ONTAP API perf-object-counter-list-info
Value of the performance counter.Fields
- counter-data => string
The retrieved counter values, serialized as a single string. The format is a series of comma-separated timestamp:value pairs. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. The values may have optional decimal extensions.
- label-names => string
A list of comma-separated label names to completely qualify the counter. This element will be empty for counters without labels.
- object-type => string
object-type is the name of the object classification as defined by ONTAP counter manager. Maximum length of 255 characters.
- unit => string, optional
Unit of counter-data. This element will not be present if the unit is not known. Some possible values are per_sec, percent, b_per_sec, kb_per_sec, msecs, usecs. Maximum Length: 32 characters.
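The counter-data string format described above can be unpacked with a few lines of Python (an illustration of the documented format only, not an SDK call):

```python
def parse_counter_data(counter_data):
    """Parse the comma-separated 'timestamp:value' pairs of a counter-data
    string into a list of (epoch_seconds, float_value) tuples."""
    pairs = []
    for item in counter_data.split(","):
        if not item:          # tolerate an empty string / trailing comma
            continue
        ts, _, val = item.partition(":")
        pairs.append((int(ts), float(val)))
    return pairs
```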
A counter group. A counter group is a collection of measurable data sources coupled with an associated sampling rate and sample history.Fields
- appliance-name-or-id => string, optional
The name (or unique ID) of the source storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next. The API will accept a vfiler name or id from DFM 3.3 onwards.
- counter-group-name => string
Name of the counter group.
- data-sources => perf-data-source[], optional
Identifiers for the data sources to be retrieved in later queries. The data-sources cannot contain a 'group' data source, i.e. instance-name must be specified for each perf-data-source. The data-sources should not contain a data source that specifies the label-names field, because a counter group always collects data for the whole counter, i.e. for all labels.
- end-time => integer, optional
Present in output only. Indicates the timestamp of the last available data sample. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC.
- group-name-or-id => string, optional
The name (or unique ID) of the source group. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs. This field is ignored if appliance-name-or-id is specified. This field defaults to global group if appliance-name-or-id is not specified.
- is-stopped => boolean, optional
Present in output only. Indicates whether data collection has been stopped for the counter group using the perf-counter-group-stop API. When this field is false, it indicates the server's intention to collect data for the counter group, but many other factors can affect whether data is actually collected.
- number-records => integer, optional
Present in output only. Indicates the number of records currently collected for the counter group and storage system. This information is sent in the output only if the host information is not consolidated
- perf-file-name => string, optional
Present in output only. This indicates the name of the file that contains the performance information for the counter group and storage system. This information is sent in the output only if the host information is not consolidated
- real-time => boolean, optional
Designates whether the counter group is real-time; that is, if no user is retrieving data from the group, the counter group will no longer collect data. Default is FALSE.
- sample-rate => integer, optional
Length of interval between samples, in seconds. Defaults to one minute.
- space-consumed => integer, optional
Present in output only. Indicates the amount of server storage currently consumed by the counter group data, in bytes. Range: [0..2^64-1].
- space-projected => integer, optional
Present in output only. Indicates the maximum amount of server storage required by the counter group data, in bytes. Range: [0..2^64-1].
- start-time => integer, optional
Present in output only. Indicates the timestamp of the first available data sample. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC.
Defines a perf counter for a specific instance.Fields
- instance-name => string, optional
For unmanaged objects, this will be the name of the instance. In this case, object-name-or-id will be the id of the storage system. Maximum length is 255.
- object-name-or-id => obj-name-or-id
The object for which data is to be retrieved, or the id of the parent storage system if instance-name is specified.
Definition of a set of data sources and a consolidation method.Fields
- data-group-method => string, optional
A function to apply across the data sources. Valid values are average (average of all sources), total (sum of all sources). The default is total (which for one data source is just the value of the single data source).
- data-sources => perf-data-source[]
Identifiers for the data sources to be retrieved in later queries. The data-sources should be either one 'group' data source, or many (one or more) of 'instance' data sources. If data-sources contains a data source that has an array counter, specify label-names field of perf-data-source to select a single element of the array.
A unique identifier for a source of data. When the optional instance-name field is not specified, perf-data-source represents all relevant instances in a storage system or in a group.Fields
- appliance-name-or-id => string, optional
The name (or unique ID) of the source storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next. The API will accept a vfiler name or id from DFM 3.3 onwards.
- counter-name => string
The name of the counter measured. Counters are specific measurable quantities on an instance of a performance object, for example the number of write operations on a volume.
- group-name-or-id => string, optional
The name (or unique ID) of the source group. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs. This field is ignored if appliance-name-or-id is specified. This field defaults to global group if appliance-name-or-id is not specified.
- instance-name => string, optional
The name of the specific object referenced (vol2, disk2, ...). If this field is specified, appliance-name-or-id must also be specified. If this field is not specified, it is considered as a wildcard that implies combining data for all instances of a storage system or a group.
- label-names => string, optional
An optional list of comma-separated label names. These represent an index into the array of values when a counter has more than one value (as implicitly indicated by the number of labels). The values should be the name of the label (for example 'getattr' for the 'nfs_v3_ops' counter). In some cases, the array of labels is multi-dimensional, in which case there should be one label name from each row in the array of labels. Each entry should be comma-separated. If unspecified, the default value is blank. In that case, its interpretation depends on the context.
- object-full-name => obj-full-name, optional
The object id for the specific object referenced
- object-name => string
The name of the referenced object. Object names are broad classes of system components and protocols, like NFS, VOLUME, DISK, ...
- object-name-or-id => obj-name-or-id, optional
The object id for the specific object referenced. For dynamic data sources, this will specify the container from which objects will be selected
- object-type => string, optional
Type of the DFM object. The possible values are - "filer"
- "vfiler"
- "volume"
- "qtree"
- "lun"
- "aggregate"
Describes a performance counter and its dependent counters list, for example system:load_inbound_mbps is dependent on system:net_data_recv, fcp:fcp_write_data, iscsi:iscsi_write_data counters.Fields
- counter-name => string
Name of the counter.
Diagnosis Category.Fields
- category-status => obj-status
Violation status of the category. The possible statuses for a Diagnosis Category are "information" (if none of its health checks are violated), "warning" (if it contains only performance tips) and "error" (if at least one of its health checks is violated). Details of 'Performance tips' are not included in the category output if the category is not violated.
A specification for selecting top-n instances for a single counter. Data for each instance is drawn on the chart as one line. e.g. top 5 busy storage systems in a group.Fields
- color => string, optional
Color of the line. The values and their interpretation are determined by the client. Recommended usage includes common names like "black" or "blue", as well RGB hex specification like "AAFF00".
- data-source => perf-data-source, optional
Describes the properties of instances that should be considered for top-n query. The data-source must be 'group' data source i.e. instance-name field must be empty. If data-source has an array counter, specify label-names field to select a single element of the array.
- select-all-instances => boolean, optional
If true, this will pick all the instances being considered. If maximum-instances element is present and select-all-instances is true, then select-all-instances will be considered over maximum-instances. By default this is false.
- sort-order => string, optional
The sort order when comparing data from different instances. After sorting data in this order, top maximum-instances are picked. Valid values are "ascending" and "descending". Default is "descending".
A check which if violated could affect the performance of the given object.Fields
- recommendation => string
Recommendation to avoid the issue. This will be "N/A" if the health-check status is "information".
Describes an instance. An instance is a manifestation of a performance object. For example, the performance object might be "VOLUME" whereas the instance might be "vol0".Fields
Array of counter values of an instance.Fields
- instance-name => string, optional
Name of the instance to get counter values for.
- object-id => obj-id
Identifier of the object
When a counter has more than one value (an array), the labels describe the individual array entries and define an implicit indexing scheme into the array. The array may be multi-dimensional, in which case each occurrence of a perf-label represents a dimension in the array of labels.Fields
- label-names => string
A comma-separated list of labels, representing a row in the array of labels.
A line of data on the chart. Often a line will have a single data source (e.g. CPU percent busy), but we allow for the capability for many sources to be combined (e.g. average CPU busy over 10 storage systems).Fields
- color => string, optional
Color of the line. The values and their interpretation are determined by the client. Recommended usage includes common names like "black" or "blue", as well RGB hex specification like "AAFF00".
- data-group => perf-data-group
The data source(s) and consolidation method for the line. If data-sources contains a data source that has an array counter, specify label-names field of perf-data-source to select a single element of the array.
- type => string
Type of display for the line. The values and their interpretation are determined by the client.
Counters not available with a particular host.Fields
- host-name-or-id => host-name-or-id
DFM host name.
A performance object. A performance object is a (somewhat abstract) representation of a class of measurable items. These are roughly akin to system subcomponents or protocols, for example "VOLUME", "DISK" or "NFS". These are broad classes and do not represent specific instances.Fields
- assoc-obj-type => perf-assoc-obj-type
Name of the DFM object to which the performance object is associated. Performance objects that have single instances such as "system", "cifs", "nfsv3", etc are associated with DFM object "filer". Other performance objects are associated with respective DFM object types.
- object-name => string
Name of the performance object.
Performance Counters can be a combination of an object and a counter.Fields
- label-names => string, optional
An optional list of comma-separated label names. These represent an index into the array of values when a counter has more than one value (as implicitly indicated by the number of labels). The values should be the name of the label (for example 'getattr' for the 'nfs_v3_ops' counter). In some cases, the array of labels is multi-dimensional, in which case there should be one label name from each row in the array of labels. Each entry should be comma-separated. If unspecified, the default value is blank. In that case, its interpretation depends on the context. Maximum length of 255 characters
- metrics => metric[], optional
Container of the requested metrics. Only perf-get-counter-data uses this field; it is ignored by all other ZAPIs.
- time-consolidation-method => string, optional
This element is used by the perf-get-counter-data ZAPI only. A function to apply across the data for this counter to achieve the desired time resolution, if the requested sample-rate is different from the actual sample-rate of the data. Possible values: 'average', 'min', 'max', 'last'. Default is 'last'.
Performance Counters can be a combination of an object and a counter.Fields
- object-name-or-id => obj-name-or-id
object-name or object-id for which counter information is being collected.
Lists the affected counter details for a particular host.Fields
- host-name-or-id => host-name-or-id
DFM host name.
Lists the affected counters for each object along with retention period details.Fields
The description of a performance report.Fields
Describes the status of perf server.Fields
- server-status => string
Specifies the status of Performance Advisor. Possible values are "enabled" and "disabled". Status will be "enabled" only when both data-collection-status and object-update-status are enabled.
Possible reasons for data unavailability.Fields
- error => string
The following are the possible reasons: 1) "perf-advisor-not-enabled": The Performance Advisor is not enabled on DFM.
2) "host-bad-credentials": The authentication credentials for the host are incorrect.
3) "host-no-credentials": The host login is empty.
4) "host-not-reachable": The host is down and hence not reachable.
5) "host-transport-incorrect": The host transport is incompatible with the Performance Advisor transport.
6) "filer-os-version-less-than-6.5": The storage system is running an OS release earlier than 6.5.
7) "filer-data-unavailable": DFM has not yet discovered instances of performance objects on the storage system.
The description of a performance threshold.Fields
A performance view.Fields
- appliance-name-or-id => string, optional
Name (or unique ID) of the storage system. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in the appliance-info structure from the appliance-list-iter-next API. The API will accept a vfiler name or id from DFM 3.3 onwards.
- assoc-obj-type => perf-assoc-obj-type[], optional
Indicates all the object-types that the view provides information for. Can be more than one for custom views.
- counter-group-name => string, optional
Name of the referenced counter group.
- group-name-or-id => string, optional
The group that this view is associated with. If a unique ID is supplied, it should originate from an API such as the 'id' field returned in group-list-iter-next APIs.
- is-default => boolean, optional
Indicates whether the view is a "default" performance view. Defaults to false.
- is-stopped => boolean, optional
Indicates whether data collection is stopped for the view. Defaults to false.
- sample-buffer => integer, optional
Length of sample buffer to retain, in seconds. If unspecified on input, the value will be chosen by the server.
- sample-rate => integer, optional
Length of interval between samples, in seconds. Defaults to one minute.
- view-name => string
Name of the view.
- view-type => view-type, optional
Indicates the type of view. If unspecified during view creation, it defaults to "custom_view".
Specifies conditions on resource properties. In case of values for the same property, the conditions are assumed to be either-or conditions. For example, if the conditions "Disk RPM=7200" and "Disk RPM=10000" are specified, the threshold applies to all disks with speed 7200 RPM or 10000 RPM. In case of conditions on different properties, the conditions are assumed to be "and" conditions. For example, if the conditions name = "Disk RPM", value = "10000" and name="Filer Model", value="FAS270" are specified, the threshold applies to all disks with speed 10000 RPM on FAS270 storage systems. During template modification, if no resource properties are specified, existing resource properties, if any, will be cleared.Fields
- name => string
The name of the resource property.
- value => string
The value that this resource property should have.
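The either-or/and semantics described above can be sketched in Python (a hypothetical helper for illustration, not an SDK call): values given for the same property name are OR'ed together, while conditions on different names are AND'ed:

```python
from collections import defaultdict

def matches(resource_props, conditions):
    """Evaluate resource-property conditions as described in the docs.

    resource_props: dict mapping property name -> value for one resource.
    conditions: list of (name, value) pairs from the threshold template.
    """
    grouped = defaultdict(set)
    for name, value in conditions:
        grouped[name].add(value)          # same name => OR'ed value set
    # every distinct property name must be satisfied (AND across names)
    return all(resource_props.get(name) in values
               for name, values in grouped.items())
```

So `[("Disk RPM", "7200"), ("Disk RPM", "10000")]` matches any disk at either speed, while adding `("Filer Model", "FAS270")` further restricts the match to FAS270 systems.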
The id, in the DFM database, of a collection of per-client statistics. Range: [ 1 .. 2^31 - 1 ]Fields
A unique numeric identifier used to specify a threshold. This parameter is mandatory when modifying a threshold. This typedef is an alias for the builtin ZAPI type integer. Range: [1..2^31-1]Fields
Defines all the attributes of a threshold.Fields
- is-enabled => boolean, optional
Specifies if the Threshold is enabled. Default value is TRUE when setting or modifying a threshold.
- object-full-name => obj-full-name
Fully qualified name of the object. It is used as an output parameter while listing thresholds. This parameter is unused for all other APIs.
- perf-object-counter => perf-object-counter, optional
Specifies a combination of an object and a counter. When setting a threshold the first time, it is a mandatory parameter. For the modify APIs, it is ignored if threshold-id is also specified.
- threshold-id => threshold-id, optional
ID of the threshold. This element is ignored while creating a threshold but is mandatory when modifying one.
- threshold-interval => integer, optional
The amount of time in seconds for which an event generation is suppressed before deciding that a counter has crossed a specified threshold and an event needs to be generated. The same interval will also be used to generate a normal event. If a value of zero is specified, any temporary spikes in counter values above the threshold will generate an event, if the counter values are sampled at that point of time. This parameter is mandatory when setting a threshold. Range: [0..2^31-1]
- threshold-type => string, optional
The type of threshold; can be 'upper' or 'lower'. For an upper threshold, the counter value must exceed the threshold value for at least the specified amount of time for an event to be generated. For a lower threshold, the counter value must fall below the threshold value for at least the specified amount of time. Note that the threshold types are mutually exclusive, so only one threshold type may be specified. Default value is 'upper' when setting or modifying a threshold. Possible values: 'upper', 'lower'.
- threshold-unit => string, optional
Unit of the counter. This parameter is mandatory when setting or modifying a threshold. Possible values: per_sec, b_per_sec (bytes/s), kb_per_sec (Kbytes/s), mb_per_sec (Mbytes/s), percent, millisec, microsec, sec, or none.
Note that the metric used for a counter may vary depending upon the Data ONTAP version.
- threshold-value => integer, optional
The threshold value of a counter is used to generate an event if the observed counter value breaches this value. This parameter is mandatory when setting a threshold. Range: [1..2^64-1]
Defines a performance counter based threshold.Fields
- is-enabled => boolean, optional
If true, the threshold is enabled and events will be generated if it is breached. Default value is true.
- is-indirect => boolean, optional
Specifies whether the threshold applies to the object directly or through inheritance. For example, if the threshold is applied on a storage system but applies to a qtree in it because of the constituent counters, is-indirect is set to true. This value is relevant only when threshold details are returned; it is ignored in input when creating or modifying a threshold.
- template-name => string, optional
Specifies the name of the template to which this threshold belongs.
- threshold-event-name => string
Specifies the name of the event that is raised when the threshold is breached. The default name is derived from the counter.
- threshold-id => threshold-id, optional
The ID, in the DFM database, of the threshold. This element should not be specified when creating a threshold, but is always returned when querying.
- threshold-interval => integer
The amount of time in seconds for which an event generation is suppressed before deciding that a counter has crossed a specified threshold and an event needs to be generated. The same interval will also be used to generate a normal event. If a value of zero is specified, any temporary spikes in counter values above the threshold will generate an event, if the counter values are sampled at that point of time. This parameter is mandatory when setting a threshold. Range: [0..2^31-1]
The description of a threshold template.Fields
- template-name => string
Name of the template.
Contains info on performance threshold templatesFields
- is-enabled => boolean, optional
This specifies if the template is enabled or not.
- member-thresholds => threshold-info2[], optional
Specifies the thresholds that are to be part of this template.
- template-description => string, optional
Description of the template.
- template-id => integer
Returns the id of the created template. Range: [1..2^32-1]
- template-name => string, optional
Name of the template.
Filter data based on selected time range.Fields
Holds the 'from' and 'to' range as minutes in a day. For example, 9:25 AM to 4:45 PM becomes from=565 and to=1005.Fields
- from => integer
Start time in minutes of the day. For example, 9:25 AM corresponds to (9*60)+25 = 565. Range: [0..1439]
- to => integer
End time in minutes of the day. For example, 4:45 PM corresponds to (16*60)+45 = 1005. Range: [0..1439]
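The minute-of-day encoding above is simple arithmetic; a small Python helper (illustration only, not an SDK call) makes the conversion and its valid range explicit:

```python
def minute_of_day(hour, minute):
    """Convert a 24-hour clock time to the minute-of-day encoding used by
    the time-range 'from'/'to' fields (valid results are 0..1439)."""
    if not (0 <= hour <= 23 and 0 <= minute <= 59):
        raise ValueError("invalid time of day")
    return hour * 60 + minute
```

For example, `minute_of_day(9, 25)` yields 565 and `minute_of_day(16, 45)` yields 1005, matching the worked example above.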
Type of the view. Possible values are: - custom_view - User defined view containing counters of specific instances. Creates a custom counter group to hold the performance data.
- canned_view - Default views.
- summary_view - Default views.
- custom_view_instances - User defined view containing counters of specific instances. Doesn't create custom counter group during view creation but uses data from default counter group.
- custom_view_object_type - User defined view that is attached to a specific associated object type. Exactly one assoc-obj-type element needs to be specified while creating this view. Counters for specific instances cannot be present in the view. Only counters related to the associated object type can be added to it. Doesn't create custom counter group during view creation but uses data from default counter group.
Fields
A set of advanced options to be set on the provisioned storage containers.Fields
- option-name => string
Name of the advanced option
- option-value => string
Value of the advanced option.
Post provisioning script settings, applicable only for "nas" or "san" type of provisioning policies.Fields
- script-path => string, optional
Full path of a script on the management station to perform certain custom steps. The script will be run as the final step in the provisioning process.
Full and nearly full thresholds for generating events on used space of dataset members.Fields
Details of capacity settings like space guarantees, quotas, out of space actions when provisioning storage for NAS access.Fields
- auto-grow-capacity => boolean, optional
If true, auto grow capacity on the provisioned volume to a pre-configured maximum size. Default is false. This option is automatically enabled if space-on-demand is set to true.
- default-group-quota => integer, optional
Default group quota setting on the dataset members. The value is expressed in kilobytes. Range: [1..2^63-1]
- default-user-quota => integer, optional
Default user quota setting on the dataset members. The value is expressed in kilobytes. Range: [1..2^63-1]
- space-on-demand => boolean, optional
If true, the volume is provisioned with autogrow and autodelete settings enabled (volume options: try_first=volume_grow; autodelete options: commitment=disrupt). This is deprecated in favor of the individual elements auto-grow-capacity and auto-delete-snapshots. If set to true, both of those elements are considered true.
- thin-provision => boolean, optional
If True, the requested space is not pre-allocated (or reserved) from aggregates. The space from aggregates is actually consumed when users write data to CIFS shares or NFS exports of dataset members.
Contains all information about a single provisioning policy.Fields
- dedupe-enabled => boolean, optional
Specifies whether dedupe has to be enabled on all the volumes of the dataset with which the provisioning policy is associated. Default value is false.
- dedupe-schedule => string, optional
Specifies the schedule for deduplication that has to be set on the volumes of the dataset with which the provisioning policy is associated. Valid values are "none", "auto" and the format hour_list@day_list.
The 'hour_list' specifies which hours of the day the dedupe operation should run on each scheduled day. Hour ranges such as 8-17 are allowed. Step values can be used in conjunction with ranges. For example, 0-23/2 means "every two hours".
The 'day_list' specifies which days of the week the dedupe operation should run. It is a comma-separated list of the first three letters of the day: sun, mon, tue, wed, thu, fri, sat. The names are not case sensitive. Day ranges such as mon-fri can also be given.
Default value is "none".
- is-policy-readonly => boolean, optional
Specifies whether the policy is a golden copy of the canned provisioning policy.
- provisioning-policy-description => string, optional
Description of the policy. It may contain from 0 to 255 characters. If the length is greater than 255 characters, the ZAPI fails with error code EINVALIDINPUT. The default value is the empty string "".
- provisioning-policy-id => obj-id, optional
Object ID of the policy, ignored while creating policy.
- provisioning-policy-name => obj-name
Name of the policy. Each provisioning policy has a name that is unique among provisioning and data protection policies. Must be provided when creating a new policy.
- provisioning-policy-type => provisioning-policy-type, optional
Type of provisioning policy, valid types are "nas", "san" and "secondary".
- resource-tag => string, optional
A label associated with a resource pool or its elements (filers/aggregates). A label serves as a filter when selecting resources during provisioning, i.e., only those resources that have a matching tag are considered for provisioning.
For example, an administrator can label resource pools as "low-cost" and create a provisioning policy with resource-tag set to "low-cost" to place datasets on cheap storage. It may contain from 0 to 255 characters.
- storage-reliability => storage-reliability, optional
Specifies the desired level of storage reliability required for the dataset. Provisioning Manager tries to find a resource configured to provide the desired reliability and provisions FlexVols on it. If no such resource is available, the provisioning request fails.
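The fields above can be assembled into a provisioning-policy-info element and sent through the NaServer runtime library described earlier. A minimal sketch, assuming a reachable DFM server; the request name provisioning-policy-create is an assumption for illustration and should be checked against the API list on your server:

```perl
use strict;
use warnings;
use lib '<installation_folder>/lib/perl/NetApp';
use NaServer;
use NaElement;

my $s = NaServer->new('dfm.example.com', 1, 0);  # hypothetical server
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

# Build a provisioning-policy-info element from the fields documented above.
my $policy = NaElement->new('provisioning-policy-info');
$policy->child_add_string('provisioning-policy-name', 'nas_gold');  # required on create
$policy->child_add_string('provisioning-policy-type', 'nas');
$policy->child_add_string('dedupe-enabled', 'true');
$policy->child_add_string('dedupe-schedule', '8-17@mon-fri');       # hour_list@day_list
$policy->child_add_string('resource-tag', 'low-cost');

my $reliability = NaElement->new('storage-reliability');
$reliability->child_add_string('disk-failure', 'double');           # RAID-DP aggregates
$policy->child_add($reliability);

# 'provisioning-policy-create' is an assumed request name wrapping this typedef.
my $req = NaElement->new('provisioning-policy-create');
$req->child_add($policy);
my $out = $s->invoke_elem($req);
die $out->results_reason() . "\n" if $out->results_status() eq 'failed';
```

The sketch uses the generic invoke_elem() interface rather than a generated binding so the nesting of the typedef elements stays visible.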
Type of provisioning policy. Possible values are "san", "nas" and "secondary".Fields
Space settings when provisioning storage for SAN access; these specify how space is allocated to the various components in a SAN environment.Fields
- guarantee-writes => boolean, optional
Applicable when over-committing storage space, i.e., when thin-provision is set to "true".
If set to "true", Provisioning Manager creates Volume/LUN configuration such that space for writes to LUNs is always guaranteed but Snapshot copies may or may not be possible depending on space availability, the choice of configuration can be specified in thin-provisioning-configuration input element.
If set to "false", Provisioning Manager creates Volumes/LUNs with no space guarantees at all. As a result the writes to Volumes/LUNs provisioned using such policies can fail at given time.
- storage-container-type => storage-container-type, optional
Container type to provision. Possible values: "volume","lun".
Provisioning Manager can provision LUNs and map them to servers to provide access to the LUNs, or provision FlexVols and delegate the task of provisioning LUNs to server administrators using server-side tools like SnapDrive. Default value is "lun".
- thin-provision => boolean, optional
Enable over-commitment of storage space.
- thin-provisioning-configuration => string, optional
Specifies the configuration to use when thin-provision and guarantee-writes are set to "true". Each configuration has its own advantages and space-saving objectives.
Possible values are snapshots, data_and_snapshots.
snapshots: This configuration allows thin provisioning of Snapshot space, i.e., space is allocated for Snapshot copies as needed, but space for data and overwrites (after the first Snapshot copy) is preallocated. When the aggregate runs out of space, further Snapshot copies may fail, but existing Snapshot copies are preserved.
data_and_snapshots: This configuration allows thin provisioning of space for data overwrites and Snapshot space. The space requested initially is allocated up front, but space for data overwrites (after the first Snapshot copy) and Snapshot copies is allocated as needed. When the aggregate on which the volume is placed runs out of space, Snapshot copies are automatically deleted in oldest-first order to allocate space for user writes.
The value is set to "none" when either thin-provision or guarantee-writes is "false". Changing it to another value results in an error with return code EINVALIDINPUT.
Storage container type to provision as dataset members when provisioning storage for SAN or NAS access. Possible values are "lun", "volume" and "qtree".
If the value is "lun", Provisioning Manager will provision LUNs into the dataset and map to the hosts specified by creating igroups. This is applicable only when provisioning for SAN access.
If the value is "volume", Provisioning Manager will create FlexVols, the LUNs will then be created using external tools like SnapDrive, or in case of NAS, Provisioning Manager will export the volume to be accessed by clients.
If the value is "qtree", Provisioning Manager will provision qtrees for every provisioning in case of NAS access.Fields
NetApp Storage Systems offer a wide range of storage availability features which provide protection against various failures, such as disk drive failures, shelf failures, controller failures or site failures. The required level of reliability of the dataset can be specified in the provisioning policy.Fields
- controller-failure => boolean, optional
Resiliency against a single controller failure, i.e., the dataset can be accessed even after a controller fails. Maps to an active/active pair. Default value is "false".
- disk-failure => string, optional
Resiliency against disk failures. Possible values are "single" (maps to RAID-4 aggregates), "double" (maps to RAID-DP aggregates) or "any" (either RAID-4 or RAID-DP). Default value is "double".
- sub-system-failure => boolean, optional
Resiliency against storage subsystem (or back-end) failures, such as failures of disk shelves, disk shelf adapters, or FC-AL loops. Maps to "SyncMirror" aggregates. Default value is "false".
Information about a Qtree.Fields
- dataset-id => integer, optional
ID of a dataset whose primary storage set contains this qtree. Range: [1..2^31-1] This field is deprecated in favor of datasets, which lists all datasets the qtree belongs to. It is still populated with one of the dataset ids.
- datasets => dataset-reference[]
List of dataset IDs where this qtree is in the primary storage set. If is-in-dataset is false, this list will be empty.
- filer-id => integer
Identifier of storage system on which the qtree resides. Always present in the output. Range: [1..2^31-1]
- filer-name => string
Name of storage system on which the qtree resides. Always present in the output.
- is-available => boolean, optional
True if this object and all of its parents are up or online. Only output if the call to iter-start included the "include-is-available" flag.
- is-direct-vfiler-child => boolean, optional
Indicates whether the qtree is a direct or indirect child of a Vfiler. If not present, then qtree is not under a Vfiler.
- is-dp-ignored => boolean
Indicates whether an admin has decided to ignore this qtree for purposes of data protection.
- is-in-dataset => boolean
Indicates whether the qtree is a member of any dataset.
- qtree-name => string
Simple name of the qtree. Always present in the output. The qtree name is a simple name without the storage system or volume part. For example, myqtree2.
- volume-id => integer
Identifier of volume in which the qtree resides. Always present in the output. Range: [1..2^31-1]
- volume-name => string
Name of volume in which the qtree resides. Always present in the output.
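The qtree fields above are returned through the DFM iter-* pattern (start/next/end) described for aggregate-list-info-iter-* earlier. A sketch using the core invoke() interface; the request names qtree-list-info-iter-* and the "qtrees" output container element are assumptions to verify against your server:

```perl
use strict;
use warnings;
use lib '<installation_folder>/lib/perl/NetApp';
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);  # hypothetical server
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

# 'include-is-available' requests the is-available field documented above.
my $start = $s->invoke('qtree-list-info-iter-start',
                       'include-is-available', 'true');
die $start->results_reason() . "\n" if $start->results_status() eq 'failed';
my $tag     = $start->child_get_string('tag');
my $records = $start->child_get_string('records');

my $next = $s->invoke('qtree-list-info-iter-next',
                      'tag', $tag, 'maximum', $records);
for my $q ($next->child_get('qtrees')->children_get()) {   # container name assumed
    printf "%s:/%s/%s (in dataset: %s)\n",
           $q->child_get_string('filer-name'),
           $q->child_get_string('volume-name'),
           $q->child_get_string('qtree-name'),
           $q->child_get_string('is-in-dataset');
}
# Release the temporary store held for this tag on the DFM server.
$s->invoke('qtree-list-info-iter-end', 'tag', $tag);
```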
Sizes of various parameters of a qtree. File counts are simple counts.Fields
Details of an administrator.Fields
Details of an aggregate name or id. When used as input only one of aggregate-name or aggregate-id is specified. When used as output, both of them will be returned.Fields
Details of an aggregate resource. aggregate-resource-name-or-id must be specified. If aggregate-name is specified, then filer-identifier must also be specified.Fields
Details of a DFM dataset resource. DFM dataset name or object id of a DFM dataset. When used as input only one of dataset-name or dataset-id is specified. When used as output, both of them will be returned.Fields
- dataset-id => integer, optional
Object id of a DFM dataset. Range: [0..2^32-1]
- dataset-name => string, optional
DFM dataset name
Details of a storage system resource. When used as input only one of filer-name or filer-id is specified. When used as output, both of them will be returned.Fields
Details of a DFM group resource. DFM group name or object id of a DFM group. When used as input only one of group-name or group-id is specified. When used as output, both of them will be returned.Fields
- group-id => integer, optional
Object id of a DFM group. Range: [0..2^32-1]
- group-name => string, optional
DFM group name
Identifies a host resource. When used as input only one of host-name or host-id is specified. When used as output, both of them will be returned.Fields
- host-name => string, optional
An FQDN of a host
Details of a LUN name or id. When used as input only one of lun-name or lun-id is specified. When used as as output both will be returned. If a lun-name is specified, then either volume-identifier or host-identifier must also be specified.Fields
- lun-name => string, optional
The serial number or path name of a LUN. The path name of a LUN is written as volume-name/lun-name or volume-name/qtree-name/lun-name. One of either volume-identifier or host-identifier must also be specified. However, if a host-identifier is specified, the lun-name must be only a serial number.
Details of a LUN. lun-identifier-name-or-id must be specified. If lun-name is specified, then either volume-identifier or host-identifier must also be specified. See the description for lun-name for more information.Fields
Identifies a policy resource. When used as input one or more of policy-name or policy-id is specified. When used as output, both of them will be returned. Policy can refer to either a protection policy, a provisioning policy or an application policy.Fields
A qtree name or id. When used as input only one of qtree-name or qtree-id is specified. When used as output, both of them will be returned. If qtree-name is specified, then either host-identifier or volume-identifier must also be specified but not both.Fields
- qtree-id => obj-id, optional
The object id of a qtree. Range: [0..2^32-1]
- qtree-name => obj-name, optional
The name of a qtree. Also need either host-identifier or volume-identifier.
Details of a qtree. qtree-identifier-name-or-id must be specified. If qtree-name is specified, then either volume-identifier or host-identifier must also be specified. See the description for qtree-name for more information.Fields
- host-identifier => host-resource, optional
Host in which the qtree resides.
- volume-identifier => volume-resource, optional
Volume in which the qtree resides.
An admin name or object id.Fields
- admin-name => string
The name of an administrator
An admin or usergroup. When used as an input element, specify only one of admin-or-usergroup-name or admin-or-usergroup-id (not both). When used as an output element, both of them are returned.Fields
- admin-or-usergroup-name => string, optional
An admin or usergroup name. The format of the admin name consists of a sequence of one or more characters up to a maximum of 255 characters. The usergroup refers to an existing usergroup in Microsoft's Active Directory. The format of the usergroup name is DOMAIN\USER. For example, "ABC\eng"
An operationFields
- operation-name => string
Name of an operation. The maximum length allowed is 255 characters. It is of the form: .. For example: "DFM.SRM.Read"
More details of an operation.Fields
- operation-description => string
A longer (multiple line) description suitable for completely explaining the operation and the places where it has an effect. The maximum length allowed is 255 characters.
- operation-synopsis => string
A short description (only a few words) suitable for use in a user interface when showing/selecting this operation. The maximum length allowed is 255 characters.
- resource-type => string
Type of resource that the operation applies to. Possible values: "managementstation", "filer", "aggregate", "volume", "lun", "vfiler", "host", "group", "rbac_role", "dataset", "resource_pool". Note that group refers to a DFM resource group.
An operation assigned to a given resource.Fields
Identifies an RBAC role resource. When used as input only one of rbac-role-name or rbac-role-id is specified. When used as output, both of them will be returned.Fields
Identifies a resource. Exactly one resource field must be set, i.e., one of resource-id, rbac-role, host, group, storage system, vfiler, aggregate, volume, resource-pool, dataset, qtree, protection policy, provisioning policy, lun or vFiler template. When an object id is specified, it refers to the object id field in the objects table of the DFM database.Fields
Details of a DFM resource-pool resource. DFM resource-pool name or object id of a DFM resource-pool. When used as input only one of resource-pool-name or resource-pool-id is specified. When used as output, both of them will be returned.Fields
The attributes of a role: role name and id, inherited roles, capabilities and operations.Fields
Identifies a storage service resource. When used as input, only one of storage-service-name or storage-service-id is specified. When used as output, both of them will be returned.Fields
- storage-service-id => obj-id, optional
A Storage service identifier.
- storage-service-name => obj-name, optional
A Storage service name.
Details of a vfiler resource. When used as input only one of vfiler-name-or-uuid or vfiler-id is specified. When used as output, both of them will be returned.Fields
Identifies a vfiler template resource. When used as input only one of vfiler-template-name or vfiler-template-id is specified. When used as output, both of them will be returned.Fields
A volume name or id. When used as input only one of volume-name or volume-id is specified. When used as output, both of them will be returned. If volume-name is specified, then exactly one of host-identifier, vfiler-identifier or aggregate-identifier must also be specified.Fields
Details of a volume.Fields
- host-identifier => host-resource, optional
Host on which the volume resides.
Contains the details of the categoriesFields
- category-list => category-list-info[], optional
Contains the list of sub-categories under a category.
- category-name => string
The name of the category. Max Length: 64 chars
Describes the meta data of a graphFields
- deprecated-by => string, optional
Name of the graph that replaces this graph. Will be absent for non-deprecated graphs. Only returned via the graph-list-info-iter-next api.
- description => string, optional
Description of the graph. Maximum length of 1024 characters. Only returned via the report-graph-list-info-iter-next api.
- graph-name => string
Unique name of the graph. Maximum length of 64 characters.
- is-default => boolean, optional
Specifies whether the graph is the default graph for the report. Only returned via the report-graph-list-info-iter-next api.
Describes the meta data of a line in a graph.Fields
- sample-format => string
The format in which the sample value is returned. Possible values: 'integer', 'float'. Range of the sample value if integer: [0..(2^63) - 1]. Range of the sample value if float: [0..(2^63) - 1].
- sample-name => string
Name of the sample. Maximum length: 64 characters.
- sample-suffix => string
The unit of the sample. If the unit is not known then 'none' is returned. Possible values: - 'bytes'
- 'percentage'
- 'minutes'
- 'none'
Specifies the application to which a report belongs. This typedef is an alias for the built-in ZAPI type string. Possible values: - 'control_center'
- 'backup'
- 'disaster_recovery'
Fields
Describes the name of the report.Fields
- deprecated-by => string, optional
Name of the report that replaces this report. Will be absent for non-deprecated reports.
- description => string, optional
Description of the report. Max Length: 1024 chars
- display-tab => string
The tab where a report is displayed in DFM UI. Possible values: - 'aggregates'
- 'appliances'
- 'data_sets'
- 'events'
- 'filesrm'
- 'filesystems'
- 'luns'
- 'resource_pools'
- 'sans'
- 'scripts'
- 'vfilers'
- 'others'
- is-deprecated => boolean, optional
true if the report is deprecated.
- rbac-operation => string
The name of the RBAC operation required to view the report. Use rbac-operation-info-list to get the list of currently valid operations. Max Length: 255 chars
- report-name => string
Unique name of the report. Max Length: 64 chars
Contains the details of the reports.Fields
- category-id => integer, optional
Unique identifier of category. The category-id is not returned for reports which do not belong to any category. Range: [1..2^31-1]
- report-design-file-name => string
This contains the name of the rptdesign (design file) of the report. rptdesign is a file format in BIRT. The report design is saved in the rptdesign file. Max Length: 128 chars
- report-name => string
This contains the unique cli name of the report. Max Length: 64 chars
- report-pretty-name => string
This contains the pretty name of the report displayed in UI. Max Length: 64 chars
The type of output format. This typedef is an alias for the built-in ZAPI type string. Possible values: - 'csv'
- 'html'
- 'paragraph'
- 'perl'
- 'text'
- 'xls'
- 'xml'
Fields
Describes the contents of an output.Fields
- graph-name => string, optional
Name of the graph.
- report-application => report-application
Specifies the application to which a report belongs. Default is 'control_center'.
- report-id => integer, optional
Specifies the id of the report in case of custom reports. Range: [1..(2^31)-1]
- report-name => string
Name of the report.
- report-output-id => integer
Specifies the output id. Range: [1..(2^31)-1]
- report-schedule-id => obj-id
Specifies the id of the report schedule. Range: [1..(2^31)-1]
- run-by => string
Specifies the name of the user who generated the report output. Maximum length of 255 characters.
- run-status => string
Specifies the status of the report output. Possible values: - 'pending'
- 'running'
- 'succeeded'
- 'failed'
- 'aborted'
- run-time => integer, optional
Specifies the timestamp when the output was saved. Not present if the report output is in pending, running or aborted state. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC.
Possible values:"custom" returns only custom and modified_canned reports. "canned" returns only canned reports. If not specified all reports are returned.Fields
Describes the contents of a report schedule.Fields
- email-address-list => email-address-list, optional
Email addresses of recipients for a mail to be sent with an attachment when the report is generated.
- graph-name => string, optional
Name of the graph. Maximum length of 64 characters.
- is-enabled => boolean, optional
Specifies whether the state of the report schedule is enabled.
- is-successful => boolean, optional
Specifies whether the last run result was successful. This element is not present if the report schedule has not run even once.
- last-run-time => integer, optional
Specifies the timestamp when the report was last run. This element is not present if the report schedule has not run even once. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC.
- report-application => report-application
Specifies the application to which a report belongs. Default is 'control_center'.
- report-id => integer, optional
Specifies the id of the custom report. Range: [1..(2^31)-1]
- report-name => string
Specifies the name of the report.
- report-schedule-id => obj-id
Specifies the schedule id. Range: [1..(2^31)-1]
- report-schedule-name => obj-name
Specifies the name of the report schedule.
- schedule-id => integer
Specifies the id of the schedule. Range: [1..(2^31)-1]
- target-object-id => integer
Specifies the id of the object on which the report schedule is scheduled to run. Range: [1..(2^31)-1]
- target-object-name => string
Specifies the name of the object on which the report schedule is scheduled to run.
Detailed contents of a report schedule.Fields
- email-address-list => email-address-list, optional
Email addresses of recipients for a mail to be sent with an attachment when the report is generated.
- graph-name => string, optional
Name of the graph. The graph name is validated as follows: - It should belong to the report name specified. - It should be applicable to the target object. - The license should support the graph. Maximum length of 64 characters.
- include-deleted-objects => boolean, optional
Include deleted objects in the report generated. Default value is FALSE.
- report-application => report-application, optional
Specifies the application to which a report belongs. Default is 'control_center'.
- report-name-or-id => string, optional
Name of the report. For custom reports it can either be name or id. Required for report-schedule-add API. Range for id: [1..(2^31)-1]
- schedule-name-or-id => obj-name-or-id, optional
Name or id of a schedule. Required for report-schedule-add API. Range for id: [1..(2^31)-1]
- target-object-name-or-id => obj-name-or-id, optional
Specifies the target object on which the report has to be scheduled. The object type depends on the name of the report specified. For example, the controller-details report can run on a target object of type host. Range for id: [1..(2^31)-1]
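The report-schedule-add API referenced by the required fields above can be called through the NaServer runtime library. A hedged sketch; the report, schedule and target values are illustrative placeholders, and the report-schedule-name input and report-schedule-id output element are assumptions:

```perl
use strict;
use warnings;
use lib '<installation_folder>/lib/perl/NetApp';
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);  # hypothetical server
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

# The three name-or-id inputs come from the field list above; the report,
# schedule and target object must already exist on the server.
my $out = $s->invoke('report-schedule-add',
                     'report-schedule-name',     'weekly-aggr-capacity',  # assumed input
                     'report-name-or-id',        'aggregates-capacity',   # placeholder report
                     'schedule-name-or-id',      'Weekly',                # placeholder schedule
                     'target-object-name-or-id', 'global');               # placeholder target
die $out->results_reason() . "\n" if $out->results_status() eq 'failed';
print 'New schedule id: ', $out->child_get_string('report-schedule-id'), "\n";
```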
Information about a dataset consuming space from the aggregate.Fields
- dataset-id => obj-id
Database identifier of the dataset.
- dataset-name => obj-name
Name of the dataset.
- dp-node-id => integer
Identifier of the node in the dataset that is consuming space from the aggregate. If there is no protection policy associated with the dataset, node-id will be 1. Range: [1..2^32-1]
- dp-node-name => string
Name of the node in the dataset that is consuming space from the aggregate. If there is no protection policy associated with the dataset, this would return the name of the dataset.
- is-capable-of-migration => boolean
Indicates whether the dataset is capable of migration. This is returned true only if the dataset can be migrated in a transparent way using vFiler migration.
- is-dp-node-effective-primary => boolean
Specifies whether the node in the dataset that is consuming space from the aggregate is the effective primary node or not.
- used-space => integer
Space used by the dataset on the aggregate in bytes. Range: [0..2^64-1].
Label for the resource. Can be of maximum 255 characters.Fields
The default values of the attributes defined by this ZAPI.Fields
- resourcepool-full-threshold => integer, optional
The value (as an integer percentage) of the fullness threshold used to generate a "resource pool full" event for this resource pool. Setting the threshold percentage higher than 100% allows an administrator to ensure that these events are never triggered. Range: [0..1000]
- resourcepool-nearly-full-threshold => integer, optional
The value (as an integer percentage) of the fullness threshold used to generate a "resource pool nearly full" event for this resource pool. Setting the threshold percentage higher than 100% allows an administrator to ensure that these events are never triggered. Range: [0..1000]
Information about a resource pool.Fields
- aggregate-nearly-overcommitted-threshold => integer, optional
The value (as an integer percentage) of the fullness threshold used to generate a "resourcepool nearly overcommitted" event for all the resource pools. If the value specified is empty, then the setting is cleared and the value specified in global options for aggrNearlyOvercommittedThreshold will be used. Range: [0..65535]
- aggregate-overcommitted-threshold => integer, optional
The value (as an integer percentage) of the fullness threshold used to generate a "resourcepool overcommitted" event for all the resource pools. If the value specified is empty, then the setting is cleared and the value specified in global options for aggrOvercommittedThreshold will be used. Range: [0..65535]
- resource-tag => resource-tag, optional
A label that can be associated with a resource pool.
- resourcepool-free-vfilers => integer, optional
The number of vFiler units that can be created in this resource pool based on the current limit and number of vFiler units present in the storage systems that are members of the resource pool. Ignored for resourcepool-create and resourcepool-modify. Range: [0..2^64-1].
- resourcepool-full-threshold => integer, optional
The value (as an integer percentage) of the fullness threshold used to generate a "resource pool full" event for this resource pool. If the value specified is empty, then the resource pool setting is cleared and the value specified in resourcepool-get-defaults is used. Range: [0..1000]
- resourcepool-id => integer, optional
Identifier of the resource pool. It is a valid DFM object id, in the range [1..2^31-1]. Ignored for resourcepool-create and resourcepool-modify.
- resourcepool-is-provisionable => boolean, optional
This is returned by resourcepool-list-info-iter-next. If license-filter, resource-tag or reliability information is passed in the input, this is set to true or false depending on whether the resource pool meets all three filters. Default value is TRUE.
- resourcepool-kbytes-total => integer, optional
Total amount of Kbytes of storage in this resource pool. This value is present only if include-free-space was true. Ignored for resourcepool-create and resourcepool-modify. Range: [0..2^63-1].
- resourcepool-kbytes-used => integer, optional
Amount of Kbytes of storage used in this resource pool. This value is present only if include-free-space was true. Ignored for resourcepool-create and resourcepool-modify. Range: [0..2^63-1].
- resourcepool-member-count => integer, optional
The count of direct members added to the resource pool. This includes number of storage systems and aggregates added to the resource pool. Ignored for resourcepool-create and resourcepool-modify. Range: [0..2^31-1]
- resourcepool-name => string, optional
Name of the resource pool. This is modifiable, and must be specified in resourcepool-create. It is a valid DFM object name; a valid DFM object name contains at least one non-numeric character.
- resourcepool-nearly-full-threshold => integer, optional
The value (as an integer percentage) of the fullness threshold used to generate a "resource pool nearly full" event for this resource pool. If the value specified is empty, then the resource pool setting is cleared and the value specified in resourcepool-get-defaults is used. Range: [0..1000]
- resourcepool-space-status => string, optional
Space status of the resource pool. Possible values are: - ok : Sum of used space of all aggregates in the resource pool against the sum of total size of all aggregates has not reached or crossed resourcepool-nearly-full-threshold and none of the member aggregates have reached or crossed their full thresholds.
- member_nearly_full : At least one of the member aggregates of the resource pool has reached or crossed its nearly full threshold.
- nearly_full : Sum of used space of all aggregates in the resource pool against the sum of total size of all aggregates has reached or crossed resourcepool-nearly-full-threshold.
- member_full : At least one of the member aggregates of the resource pool has reached or crossed its full threshold.
- full : Sum of used space of all aggregates in the resource pool against the sum of total size of all aggregates has reached or crossed resourcepool-full-threshold.
resourcepool-space-status is returned only if include-free-space is true in resourcepool-list-info-iter-start.
- resourcepool-timezone => string, optional
Time zone in which this pool's storage is located. This is modifiable. An empty value means to use the system default (usually GMT). Currently valid time zones can be listed by timezone-list-info-iter.
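The resource pool fields above can be retrieved with resourcepool-list-info-iter-start (named in the resourcepool-space-status description) and its companion -next/-end calls. A sketch using the core invoke() interface; the "resourcepools" output container element name is an assumption to verify against your server:

```perl
use strict;
use warnings;
use lib '<installation_folder>/lib/perl/NetApp';
use NaServer;

my $s = NaServer->new('dfm.example.com', 1, 0);  # hypothetical server
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

# 'include-free-space' requests the kbytes and space-status fields above.
my $start = $s->invoke('resourcepool-list-info-iter-start',
                       'include-free-space', 'true');
die $start->results_reason() . "\n" if $start->results_status() eq 'failed';
my $tag = $start->child_get_string('tag');

my $next = $s->invoke('resourcepool-list-info-iter-next',
                      'tag', $tag, 'maximum', $start->child_get_string('records'));
for my $rp ($next->child_get('resourcepools')->children_get()) {  # container name assumed
    printf "%s: %s of %s KB used (%s)\n",
           $rp->child_get_string('resourcepool-name'),
           $rp->child_get_string('resourcepool-kbytes-used'),
           $rp->child_get_string('resourcepool-kbytes-total'),
           $rp->child_get_string('resourcepool-space-status');
}
$s->invoke('resourcepool-list-info-iter-end', 'tag', $tag);  # free the server-side store
```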
Information about one member of a resource pool.Fields
- datasets-space-info => dataset-space-info[], optional
Space used by each dataset in the aggregate. Returned only when member-type is 'aggregate' and if include-dataset-space-info is set to true in resourcepool-member-list-info-iter-start.
- member-id => integer
Identifier of the member. It is a valid DFM object id, in the range [1..2^31-1].
- member-name => string
Display name of the member. It is a valid DFM object name; a valid DFM object name contains at least one non-numeric character.
- member-space-status => object-space-status, optional
Space status of the resource pool member. This indicates the fullness of the aggregate in terms of whether the percentage of used space with respect to total size of the aggregate has reached or crossed the fullness thresholds. This value is returned only when member-type is 'aggregate'.
- member-status => string
Current status of the member. Possible values are 'normal', 'information', 'unknown', 'warning', 'error', 'critical', 'emergency'.
- member-type => string
Type of the member. Possible values are 'filer' or 'aggregate'.
- provisioning-checks-passed => boolean, optional
Set to true if the member passes all the provisioning-related checks for the given provisioning policy and provisioning request. Applicable only when run-provisioning-checks is set to true at the start of the iteration.
- resource-tag => resource-tag, optional
A label for a resource pool member.
Information of a dataset backup.Fields
- backup-id => integer
Identifier of the backup instance. The management station assigns a unique id to each backup instance. Range: [1..2^31-1]
- backup-version => dp-timestamp
Timestamp when the backup was taken. Backups of the same dataset at different locations have the same version if their contents are identical. The management station keeps track of which backups have identical contents and assigns the same version to them. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC.
- dataset-id => obj-id
Id of the dataset with which this backup instance is associated.
- dataset-name => obj-name
Name of the dataset with which this backup instance is associated.
- dp-backup-expiry-timestamp => dp-timestamp, optional
Time when this backup instance will expire, at which point Protection Manager will delete this backup. The timestamp value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. This will not be returned when the retention type for the backup is unlimited.
- dp-node-id => integer
Id of the dataset node which holds the backup instance.
- dp-node-name => string
Name of the dataset node which holds the backup instance.
Information of one Snapshot copy. Either unique-id or name of the Snapshot copy should be specified.Fields
- snapshot-name => string, optional
Name of the Snapshot copy. If unique-id is specified, then this element is ignored.
- unique-id => string, optional
Unique identifier of the Snapshot copy. Currently, this is the Snapshot copy's creation time.
Application dependent on this Snapshot copy. Possible values: "snapmirror", "snapvault", "dump", "volume_clone", "lun_clone", "snaplock", "acs".Fields
Information on one particular Snapshot copy.Fields
- backups => backup-info[], optional
List of dataset backups which contain this snapshot. The list is empty if the snapshot is not part of any dataset backup. The element is present only if snapshot-list-info-iter-start was called with include-backup-info as true.
- creation-timestamp => dp-timestamp
The volume access time when the Snapshot copy was created in seconds since January 1, 1970. This value will not change even if the Snapshot copy is accessed.
- is-busy => boolean
True if the Snapshot copy is being used by an application. If "is-busy" is true, then the Snapshot copy cannot be deleted.
- snapshot-name => string
Name of the Snapshot copy.
- unique-id => string
Unique identifier of the Snapshot copy. Currently, this is the Snapshot copy's creation time.
- volume-id => obj-id
Identifier of the volume to which this Snapshot copy belongs.
- volume-name => obj-full-name
Name of the volume to which this Snapshot copy belongs.
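The snapshot-info fields above are returned by the snapshot-list-info-iter-* APIs, which follow the same start/next/end iterator pattern described for aggregate-list-info-iter-*. A minimal sketch, assuming $s is a connected NaServer in DFM mode (as in the dfm-about example): the input element name 'volume-name-or-id' and the output container name 'snapshots' are assumptions; 'include-backup-info', 'tag', 'snapshot-name', and 'is-busy' come from this reference.

```perl
use strict;
use warnings;

# Iterate Snapshot copies of a volume, including dataset backup info.
sub list_snapshots {
    my ($s, $volume) = @_;
    my $start = $s->invoke('snapshot-list-info-iter-start',
                           'volume-name-or-id',   $volume,   # element name assumed
                           'include-backup-info', 'true');
    die $start->results_reason() if $start->results_status() eq 'failed';
    my $tag     = $start->child_get_string('tag');
    my $records = $start->child_get_int('records');

    my $next = $s->invoke('snapshot-list-info-iter-next',
                          'tag', $tag, 'maximum', $records);
    die $next->results_reason() if $next->results_status() eq 'failed';
    my $list = $next->child_get('snapshots');   # output container name assumed
    if (defined $list) {
        for my $snap ($list->children_get()) {
            printf "%s (busy: %s)\n",
                $snap->child_get_string('snapshot-name'),
                $snap->child_get_string('is-busy');
        }
    }
    # Tell the DFM station the temporary store for this tag is no longer needed.
    $s->invoke('snapshot-list-info-iter-end', 'tag', $tag);
}
```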
Information of one SRM file type.Fields
- srm-file-type-id => obj-id
A unique identifier representing the SRM file type in DFM.
- srm-file-type-name => string
The SRM file type.
Resource Pool. Value can be a DFM object name (maximum 255 characters) or the DFM id [1..2^31-1].Fields
Fields
- dataset-id => obj-id
Identifier of the dataset.
- dataset-name => obj-name
Name of the dataset.
- storage-service-id => obj-id, optional
Identifier of the storage service. This will not be returned if the dataset is not associated with a storage service.
- storage-service-name => obj-name, optional
Name of the storage service. This will not be returned if the dataset is not associated with the storage service.
Fields
- is-dr-capable => boolean, optional
True if the storage service has a data protection policy associated with it that is disaster recovery capable.
- protection-policy-id => obj-id, optional
Identifier of the protection policy
- protection-policy-name => obj-name, optional
Name of the protection policy.
- storage-service-contact => email-address-list
Contact for the storage service, such as the owner's e-mail address.
- storage-service-description => string
Description of the storage service.
- storage-service-id => obj-id
Identifier of storage service.
- storage-service-name => obj-name
Name of the storage service.
Fields
- dp-node-name => dp-policy-node-name, optional
Name of the node in the data protection policy. dp-node-name must match exactly the name of one of the nodes in the data protection policy that is currently assigned to the storage service. This element can be absent if no protection policy was assigned to the storage service, in which case, the details are assumed for the primary node.
- old-dp-node-name => dp-policy-node-name, optional
Name of the old data protection policy node to be mapped to the new data protection policy node. When the data protection policy of the storage service is modified and this element is provided, the node dp-node-name of the new policy acquires the attributes of this node; for all datasets attached to the storage service, storage sets mapped to old-dp-node-name of the old protection policy are re-mapped to node dp-node-name of the new protection policy. If this element is not provided when the data protection policy is being modified, new storage sets are created for node dp-node-name in all datasets attached to the storage service. This element is used only by the storage-service-modify and storage-service-dataset-modify APIs for the 'attach' operation type, and is ignored by the storage-service-create API.
- provisioning-policy-name-or-id => obj-name-or-id, optional
Name or identifier of the provisioning policy to be associated with the node. This element is used only by the storage-service-create and storage-service-modify APIs and is ignored if specified for the storage-service-dataset-modify API. In the storage-service-modify API, if the value specified for this element is empty, the provisioning policy associated with the node is cleared.
- resourcepools => resourcepool-name-or-id[], optional
Names or identifiers of the resource pools to be associated with the node. This element is used only by the storage-service-create and storage-service-modify APIs and is ignored if specified for the storage-service-dataset-modify API.
- vfiler-template-name-or-id => obj-name-or-id, optional
Name or identifier of the vFiler template to be used to create a vFiler unit for the dataset node. This element is used only by the storage-service-create and storage-service-modify APIs and is ignored if specified for the storage-service-dataset-modify API. In the storage-service-modify API, if the value specified for this element is empty, the vFiler template associated with the node is cleared.
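The node-settings fields above are supplied as an NaElement subtree when calling storage-service-create (named in this reference). A minimal sketch: the container element names 'storage-service-node-settings' and 'node-settings', and the per-entry child name 'resourcepool-name-or-id' inside 'resourcepools', are assumptions; the leaf field names are taken from this reference.

```perl
use NaElement;

# Build a storage-service-create request with one node's settings.
my $req = NaElement->new('storage-service-create');
$req->child_add_string('storage-service-name', 'svc_gold');

my $nodes = NaElement->new('storage-service-node-settings');   # container name assumed
my $node  = NaElement->new('node-settings');                   # entry name assumed
$node->child_add_string('dp-node-name', 'Primary data');
$node->child_add_string('provisioning-policy-name-or-id', 'thin_nas');

my $rps = NaElement->new('resourcepools');
$rps->child_add_string('resourcepool-name-or-id', 'rp_tier1'); # child name assumed
$node->child_add($rps);

$nodes->child_add($node);
$req->child_add($nodes);

# $s is a connected NaServer in DFM mode (see the dfm-about example).
my $out = $s->invoke_elem($req);
die $out->results_reason() if $out->results_status() eq 'failed';
```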
Fields
- dp-node-id => obj-id
ID of the node in the data protection policy that maps to the storage service node.
- provisioning-policy-id => obj-id, optional
Identifier of the provisioning policy associated with the node.
- provisioning-policy-name => obj-name, optional
Name of the provisioning policy associated with the node.
- resourcepools => resourcepool-info[], optional
List of the resource pools associated with the node.
- vfiler-template-id => obj-id, optional
Identifier of the vFiler template associated with the node.
- vfiler-template-name => obj-name, optional
Name of the vFiler template associated with the node.
Fields
- dataset-access-details => dataset-access-details, optional
Details of the vFiler unit to be created through which the dataset members provisioned for this node will be exported. server-name-or-id and dataset-access-details cannot both be specified for a node.
- dataset-export-info => dataset-export-info, optional
Specifies the NAS or SAN export settings for members provisioned in this node of the dataset
- dp-node-name => dp-policy-node-name, optional
Name of the node in the data protection policy. dp-node-name must match exactly the name of one of the nodes in the data protection policy that is currently assigned to the storage service. This element can be absent if no protection policy was assigned to the storage service, in which case, the details are assumed for the root storage set.
- server-name-or-id => obj-name-or-id, optional
Name or identifier of the vFiler unit to be attached to the node. If a vFiler unit is attached, all members provisioned into this node will be exported over this vFiler unit. server-name-or-id and dataset-access-details cannot both be specified for a node.
- timezone-name => string, optional
Timezone to assign to the node. If specified, the value must be a timezone-name returned by timezone-list-info-iter-next. If no timezone is assigned, then the default system timezone will be used.
The default time zone settings.Fields
- default-timezone => string
Name of the time zone to use for objects that do not specify a time zone. This may be empty, meaning that objects use the server's time zone by default. Set this value with the "dfm option set timezone={new-value}" command.
- server-timezone-description => string
Description of the time zone in which this DFM server is located. If the "default-timezone" is empty, objects without an explicit time zone setting will use the server's time zone. Example values: - "Pacific Standard Time (GMT -8:00)"
- "US/Eastern (GMT -4:00)"
- "SGT (GMT +6:00)"
- "UTC (GMT +0:00)"
Information about a time zone.Fields
Report count of users.Fields
- count => integer
Number of reports of a user.
- user-name => string
Name of the user.
vFiler IP Address to interface binding information.Fields
- interface => obj-name-or-id
Name or identifier of the interface to bind the vFiler IP Address to. This can be either a physical interface, VIF or a VLAN. Example: "e0a", "myvif", "service_vlan"
- ip-address => ip-address
IP address of the vFiler which is bound to the interface. If this IP Address is not already added to the vFiler, then the IP Address is first added to the vFiler before binding the IP Address with the given interface.
- mtu-size => integer, optional
MTU size of the new VLAN interface. This element should be present only if a new VLAN interface is being created during vFiler setup. The version of Data ONTAP on the storage system on which the vFiler unit is being set up must be 7.3.3 or later. Range: [296..9196]
- netmask => string, optional
Netmask for the IP address in dotted decimal notation. For IPv4 address, either netmask or prefix-length can be supplied. For IPv6 address, netmask is ignored.
- partner-interface => obj-name-or-id, optional
Name or identifier of the interface on the partner node. This element is valid when the storage system on which this vFiler unit is present is in an Active/Active configuration. If a new VLAN is being created on the local node, a corresponding new VLAN will also be created on this interface on the partner node.
- prefix-length => integer, optional
Prefix length for the IP address. This is required if IPv6 address is supplied. For IPv4 address, either netmask or prefix-length can be supplied. Range: [1..127]
- vlan-identifier => integer, optional
Identifier for creating a new VLAN interface. If this element is present, the interface element should refer to a physical VLAN tagged interface. Range: [1..4094]
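The IP-address-to-interface binding above is typically built as an NaElement subtree and passed into a vFiler setup or create call. A sketch only: the container name 'ip-binding-info' and the API this subtree is attached to are assumptions; the child element names are taken from this reference.

```perl
use NaElement;

# Construct one vFiler IP-address-to-interface binding.
my $bind = NaElement->new('ip-binding-info');       # container name assumed
$bind->child_add_string('ip-address', '10.1.2.3');
$bind->child_add_string('interface',  'e0a');
$bind->child_add_string('netmask',    '255.255.255.0');
$bind->child_add_string('vlan-identifier', 100);    # creates a new VLAN on e0a
$bind->child_add_string('mtu-size', 1500);          # only for new VLANs; ONTAP 7.3.3 or later
```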
Name of the protocol. Possible values: "nfs", "cifs", "iscsi".Fields
Information about a vFiler Template.Fields
- cifs-auth-type => string, optional
CIFS authentication mode to be used for the CIFS setup of a vFiler. This determines the method by which clients will be authenticated when connecting to the CIFS service on the vFiler. Possible values: "active_directory", "workgroup". Default value: "workgroup"
- cifs-domain => string, optional
Active Directory domain that the vFiler will join. This can be the NetBIOS or fully qualified domain name, for example: cifsdomain, cifs.domain.com. This field is applicable only when cifs-auth-type is set to "active_directory" and is ignored otherwise.
- cifs-security-style => string, optional
The security style determines whether or not the CIFS service on the vFiler will support multiprotocol access. Possible values: "ntfs", "multiprotocol". Default value: "multiprotocol"
- description => string, optional
Description of vFiler template. By default, this field is empty.
- vfiler-template-id => obj-id, optional
Identifier of the vFiler template. This is ignored in vfiler-template-create.
- vfiler-template-name => obj-name, optional
Name of the vFiler template. This must be specified in vfiler-template-create.
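The fields above can be supplied to vfiler-template-create, which this reference names explicitly. A minimal sketch; the exact request shape (fields as flat children of the request) is an assumption, while the field names and their defaults come from this reference.

```perl
use NaElement;

# Create a vFiler template for workgroup-mode CIFS.
my $req = NaElement->new('vfiler-template-create');
$req->child_add_string('vfiler-template-name', 'cifs_workgroup_tmpl');
$req->child_add_string('description',         'Workgroup CIFS template');
$req->child_add_string('cifs-auth-type',      'workgroup');
$req->child_add_string('cifs-security-style', 'multiprotocol');

# $s is a connected NaServer in DFM mode (see the dfm-about example).
my $out = $s->invoke_elem($req);
die $out->results_reason() if $out->results_status() eq 'failed';
```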
Information about a Data Center.Fields
- datasets => dataset-reference[]
List of the datasets to which the data center belongs.
- deleted-by => string, optional
The user who deleted the Data Center. This element is present only if the Data Center is deleted and include-deleted is passed as true when starting the iteration.
- deleted-timestamp => dp-timestamp, optional
The time and date when the Data Center was marked as deleted in DataFabric Manager. This element is present only if the Data Center is deleted and include-deleted is passed as true when starting the iteration.
- host-service-id => obj-id
Object id of the Host Service that manages the Data Center.
- is-protected => boolean
True if the Data Center is protected.
Name and object identifier of the data center object.Fields
- datacenter-id => obj-id
Object Identifier of the Data Center.
- datacenter-name => obj-name
Name of the Data Center.
Information about a Datastore.Fields
- deleted-by => string, optional
The user who marked the Datastore as deleted. This element is present only if the Datastore is deleted and include-deleted is passed as true when starting the iteration.
- deleted-timestamp => dp-timestamp, optional
The time and date when Datastore was marked as deleted in DataFabric Manager server. This element is present only if the Datastore is deleted and include-deleted is passed as true when starting the iteration.
- host-service-id => obj-id
Id for the Host Service that manages the Datastore.
- is-protected => boolean
True if the Datastore is protected.
Name and object identifier of the datastore object.Fields
Information about a Hypervisor.Fields
- datacenter-reference => datacenter-reference, optional
Information of the Data Center to which the hypervisor belongs. Applicable only when the virtual-infrastructure-type is 'VMwareManagement'.
- deleted-by => string, optional
The user who marked the Hypervisor as deleted. This element is present only if the Hypervisor is deleted and include-deleted is passed as true when starting the iteration.
- deleted-timestamp => dp-timestamp, optional
The time and date when the Hypervisor was marked as deleted in DataFabric Manager Server. This element is present only if the Hypervisor is deleted and include-deleted is passed as true when starting the iteration.
- domain-name => string, optional
Name of the Windows domain or workgroup to which the Hypervisor belongs. Applicable only when the virtual-infrastructure-type is 'HyperVManagement'.
- host-service-id => obj-id
Object identifier of the Host Service that manages the Hypervisor.
- ip-address => ip-address
IP Address of the hypervisor.
- virtual-center-reference => virtual-center-reference, optional
Information of the Virtual Center to which the hypervisor belongs. Applicable only when the virtual-infrastructure-type is 'VMwareManagement'.
- virtual-infrastructure-type => virtual-infrastructure-type
Type of the virtual server infrastructure the Hypervisor belongs to.
Information about a hypervisor.Fields
- hypervisor-id => obj-id
Object Identifier of the Hypervisor.
- hypervisor-name => obj-name
Name of the Hypervisor.
Virtual Center information.Fields
- deleted-by => string, optional
The user who deleted the Virtual Center object. Present only if include-deleted input field is specified when starting the iteration and the Virtual Center is deleted.
- deleted-timestamp => dp-timestamp, optional
The time and date when Virtual Center object was deleted. Present only if include-deleted input field is specified when starting the iteration and the Virtual Center is deleted.
- host-service-id => obj-id
Object identifier of the Host Service managing the Virtual Center Server.
Name and object identifier of the Virtual Center object.Fields
- virtual-center-id => obj-id
Object Identifier of the Virtual Center.
- virtual-center-name => obj-name
Name of the Virtual Center.
Information about Virtual Disk.Fields
- datastore-id => obj-id, optional
Object Id of the Datastore in which the Virtual Disk resides. This is applicable only if the virtual-infrastructure-type is 'VMwareManagement'. This element is deprecated; use datastore-reference instead.
- datastore-name => obj-name, optional
Object name of the Datastore in which the Virtual Disk resides. This is applicable only if the virtual-infrastructure-type is 'VMwareManagement'. This element is deprecated; use datastore-reference instead.
- deleted-by => string, optional
The user who deleted the Virtual Disk. This element is present only if the Virtual Disk is deleted and include-deleted is passed as true when starting the iteration.
- deleted-timestamp => dp-timestamp, optional
The time and date when the Virtual Disk was deleted. This element is present only if the Virtual Disk is deleted and include-deleted is passed as true when starting the iteration.
- host-service-id => obj-id
Identifier of the Host Service managing the Virtual Disk (that is, the Host Service that manages the Virtual Machine to which the Virtual Disk is assigned).
- vhd-type => string, optional
Indicates the type of Virtual Disk. If the virtual-infrastructure-type is 'HyperVManagement', the possible values are:
- passthrough
- cluster_shared_volume
- boot_disk
- regular
If the virtual-infrastructure-type is 'VMwareManagement', the possible values are:
- raw_device_mapping
- regular
- virtual-infrastructure-type => virtual-infrastructure-type
Type of virtual infrastructure the Virtual Disk belongs to.
- virtual-machine-id => obj-id
Object Id of the Virtual Machine to which this Virtual Disk is assigned. This element is deprecated, use virtual-machine-reference instead.
- virtual-machine-name => obj-name
Object Name of the Virtual Machine to which this Virtual Disk is assigned. This element is deprecated, use virtual-machine-reference instead.
Type of virtual infrastructure. The possible values are: - VMwareManagement
- HyperVManagement
Fields
Information about a Virtual Machine.Fields
- datacenter-reference => datacenter-reference, optional
Information of the Data Center to which the Virtual Machine is member of in VMware Virtual Center Server. Applicable only in case the virtual-infrastructure-type is 'VMwareManagement'.
- datasets => dataset-reference[]
List of datasets that have this Virtual Machine as a member.
- deleted-by => string, optional
The user who deleted the Virtual Machine. Returned only if include-deleted input element value is true and the Virtual Machine is deleted.
- deleted-timestamp => dp-timestamp, optional
The time and date when the Virtual Machine was marked as deleted. Returned only if include-deleted input element value is true and the Virtual Machine is deleted.
- host-service-id => obj-id
Identifier of the Host Service managing the Virtual Machine.
- is-protected => boolean
True if the Virtual Machine is protected.
- status => string
Indicates the operational state of the Virtual Machine (that is, whether the Virtual Machine is powered on). Valid values are 'on', 'off', or 'unknown'.
- virtual-center-reference => virtual-center-reference, optional
Information of Virtual Center Server which manages the Virtual Machine. Applicable only in case the virtual-infrastructure-type is 'VMwareManagement'.
- virtual-infrastructure-type => virtual-infrastructure-type
Type of virtual infrastructure the virtual machine belongs to.
- virtual-machine-id => obj-id
Object id for the virtual machine.
- virtual-machine-name => obj-name
Name of the virtual machine. This is the name that the virtual server administrator specifies when creating the Virtual Machine in virtual server management tools such as VMware vCenter Server or Microsoft Hyper-V Virtual Machine Manager.
Name and object identifier of the virtual machine object.Fields
- virtual-machine-id => obj-id
Object id of the Virtual Machine to which this Virtual Disk is assigned.
- virtual-machine-name => obj-name
Object name of the Virtual Machine to which this Virtual Disk is assigned.
A reason the volume is not capable of migration. Possible values are:
- 'vol_has_nfs_exports': The volume or any of its qtrees has NFS exports.
- 'vol_has_cifs_shares': The volume or any of its qtrees has CIFS shares.
- 'vol_has_mapped_luns': The volume or any of its qtrees has LUNs.
- 'vol_has_flex_clones': The volume has child clone volumes.
- 'vol_has_unmanaged_snapmirror_rels': There are incoming or outgoing SnapMirror relationships that are not managed by Protection Manager.
- 'vol_has_unmanaged_snapvault_rels': There are incoming or outgoing SnapVault relationships that are not managed by Protection Manager.
- 'vol_is_storage_root': The volume is the root volume of a storage system or vFiler unit.
- 'vol_has_qtree_storage_root': The volume has at least one qtree which is the root storage of a vFiler unit.
- 'vol_is_traditional': The volume is a 'Traditional' volume, not a flexible volume.
- 'vol_is_cmode': The volume is on a Data ONTAP Cluster-Mode appliance.
- 'vol_is_offline': The volume is offline.
Fields
Space status of the object. This indicates the fullness of the object in terms of whether the percentage of used space with respect to total size of the object has reached the fullness thresholds. Possible values: - ok - when the percentage of used space of the object is within the nearly full and full threshold of the object.
- nearly_full - when the percentage of used space of the object is within the full threshold of the object but has reached or crossed the nearly full threshold.
- full - when the percentage of used space of the object has reached or crossed the full threshold of the object.
Fields
Seconds since 1/1/1970 in UTC. Range: [0..2^31-1].Fields
Volume deduplication information. Optional fields will not be returned if deduplication has never run on the volume.Fields
- dedupe-progress => string, optional
The progress of the current deduplication operation on the volume, indicating which stage of deduplication is in progress and how much data has been processed for that stage. For example: "25 MB Scanned, 20MB Searched, 40MB (20%) Done, 30MB Verified".
- dedupe-status => string, optional
Deduplication operation status of the volume. Possible values: "idle", "active", "pending", or "undoing".
Information about a volume.Fields
- aggregate-id => integer
Identifier of aggregate on which the volume resides.
- aggregate-name => string
Name of aggregate on which the volume resides.
- block-type => file-system-block-type
File system block type of the volume.
- dataset-id => integer, optional
ID of the dataset where this volume is in the primary storage set. Range: [1..2^31-1] This field is deprecated in favor of datasets, which lists all datasets the volume belongs to. It is still populated with one of the dataset ids.
- datasets => dataset-reference[]
List of dataset IDs containing this volume. If is-in-dataset is false, this list will be empty.
- host-id => integer
Identifier of host on which the volume resides.
- host-name => string
Name of host on which the volume resides.
- is-available => boolean, optional
True if this object and all of its parents are up or online. Output only if the call to iter-start included the "include-is-available" flag.
- is-capable-of-migration => boolean, optional
Indicates whether the volume can be migrated. Returned only when include-migration-info is true in the volume-list-info-iter-start call. A volume is considered capable of transparent migration (using the migrate-volume API) when all of the following conditions hold for the volume:
- The volume or any of its qtrees has no NFS exports
- The volume or any of its qtrees has no CIFS shares
- The volume or any of its qtrees has no LUNs mapped to storage clients.
- The volume has no child clone volumes.
- There are no incoming or outgoing protection relationships that are not managed by Protection Manager.
- The volume is not the root storage of a storage system or vFiler unit.
- The volume is not of 'Traditional' type.
- The volume is managed through the "node" management interface.
- is-dp-ignored => boolean, optional
True if this volume is intentionally ignored for data protection.
- is-in-dataset => boolean
Indicates whether this volume is a member of any dataset.
- is-snapmirror-primary-capable => boolean
True if the volume is capable of being a primary for a SnapMirror relationship. This means the storage system is licensed for SnapMirror.
- is-snapmirror-secondary-capable => boolean
True if the volume is capable of being a secondary for a SnapMirror relationship. This means the storage system is licensed for SnapMirror and the volume is not already a SnapMirror or SnapVault secondary.
- is-snapvault-secondary-capable => boolean
True if the volume is capable of being the destination of SnapVault transfers. This means the storage system is licensed as a SnapVault secondary and the volume is not a SnapMirror destination.
- next-scheduled-backup-time => dp-timestamp, optional
Time when next scheduled backup job will be run on this volume. Value is the time in seconds since 00:00:00 Jan 1, 1970, UTC. This is computed and returned only when include-next-scheduled-backup-time is true in volume-list-info-iter-start api. This value is returned only for volumes in datasets that have a protection policy with relevant schedules on the connections and nodes. When the volume is in the primary node of the dataset and if the protection policy assigned to the dataset has a local snapshot schedule associated, this returns the next timestamp when the local snapshot job will run for the dataset. When the volume is in a non-primary node of the dataset and the incoming or outgoing connection of the protection policy has a backup/mirror schedule, this will return the least timestamp when the next backup/mirror job will run for the dataset for those connection(s).
- snapvault-secondary-schedule-name => string, optional
Name of backup schedule associated with secondary volume. Not present if there is no schedule.
- space-guarantee => string, optional
The space reservation style associated with the flexible volume. Possible values: - volume - Indicates that the entire size of the volume is preallocated.
- file - Indicates that the space will be preallocated for all the space-reserved files and LUNs within the volume. Storage is not preallocated for files and LUNs that are not space-reserved. Writes to these can fail if the underlying aggregate has no space available to store the written data.
- none - Indicates that no space will be preallocated.
- file(disabled) - If a volume with file guarantee has been brought online when the aggregate has insufficient free space to preallocate to the volume.
- volume(disabled) - If a volume with volume guarantee has been brought online when the aggregate has insufficient free space to preallocate to the volume.
This field will not be present for traditional volumes. This field does not appear if volume-state of the flexible volume is restricted or offline.
- volume-full-threshold => integer
The value (as an integer percentage) of the fullness threshold used to generate a "volume full" event for this volume and to compute the volume-space-status. The order in which the thresholds are returned is: - If a volume is governed by a provisioning policy, then the thresholds in the provisioning policy are returned.
- If the thresholds are set at volume level, then those thresholds are returned.
- If the volume is neither governed by a provisioning policy nor if thresholds are set at volume level, then the value returned will be empty.
If the value is empty, then the global setting for volume full threshold is considered and this can be obtained from dfm-get-option API with option-name as "volFullThreshold". Range: [0..1000]
- volume-id => integer
Identifier of the volume.
- volume-name => string
Name of the volume.
- volume-nearly-full-threshold => integer
The value (as an integer percentage) of the fullness threshold used to generate a "volume nearly full" event for this volume and to compute the volume-space-status. The order in which the thresholds are returned is: - If a volume is governed by a provisioning policy, then the thresholds in the provisioning policy are returned.
- If the thresholds are set at volume level, then those thresholds are returned.
- If the volume is neither governed by a provisioning policy nor if the thresholds are set at volume level, then the value returned will be empty.
If the value is empty, then the global setting for volume nearly full threshold is considered and this can be obtained from dfm-get-option API with option-name as "volNearlyFullThreshold". Range: [0..1000]
- volume-space-status => object-space-status
Space status of the volume. This indicates the fullness of the volume in terms of whether the percentage of used space with respect to total size of the volume has reached or crossed the fullness thresholds given in volume-nearly-full-threshold and volume-full-threshold.
- volume-state => string
State of volume. Possible values are: - initializing
- failed
- offline
- online
- partial
- restricted
- unknown
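The volume-info fields above are returned by the volume-list-info-iter-* APIs (volume-list-info-iter-start and its flags include-migration-info and include-next-scheduled-backup-time are named in this reference). A minimal sketch of the iteration; the output container name 'volumes' is an assumption, and $s is a connected NaServer in DFM mode as in the dfm-about example.

```perl
use strict;
use warnings;

# List volumes and report whether each can be migrated transparently.
sub list_migratable_volumes {
    my ($s) = @_;
    my $start = $s->invoke('volume-list-info-iter-start',
                           'include-migration-info', 'true');
    die $start->results_reason() if $start->results_status() eq 'failed';
    my $tag     = $start->child_get_string('tag');
    my $records = $start->child_get_int('records');

    my $next = $s->invoke('volume-list-info-iter-next',
                          'tag', $tag, 'maximum', $records);
    die $next->results_reason() if $next->results_status() eq 'failed';
    my $vols = $next->child_get('volumes');   # output container name assumed
    if (defined $vols) {
        for my $vol ($vols->children_get()) {
            my $cap = $vol->child_get_string('is-capable-of-migration');
            printf "%s migratable=%s\n",
                $vol->child_get_string('volume-name'),
                (defined $cap ? $cap : 'n/a');
        }
    }
    # Release the server-side temporary store for this tag.
    $s->invoke('volume-list-info-iter-end', 'tag', $tag);
}
```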
Information on qtrees in the volume.Fields
- protected-qtree-count => integer
Total number of qtrees that are in datasets.
- unprotected-qtree-count => integer
Total number of qtrees that are not in any dataset.
Collected size information about a volume. Optional items will not be returned if DFM does not know the value.Fields
- actual-volume-size => integer, optional
Actual size in bytes of the volume. For volumes that are destinations of a Volume SnapMirror relationship, the actual size of the volume may differ from the logical size (reported by the df command); the logical size for such volumes is equal to the size of the source volume. For all other volumes, actual-volume-size will be the same as the total size.
- afs-avail => integer, optional
Number of bytes available in active file system. This will be (afs-total - afs-used) or the available space in the aggregate, whichever is lower. Range: [0..2^63-1]
- afs-data => integer, optional
Number of bytes used to hold user data in active file system. This should match what you'd get if you added up the file sizes. This includes data and hole reserves, if any. Range: [0..2^63-1]
- afs-used => integer, optional
Number of bytes used to hold active file system data. This is what "df" reports as used for the volume. It includes data, hole reserves, overwrite reserves and snapshot overflow. Range: [0..2^63-1]
- afs-used-per-day => integer
Number of bytes used per day in the active file system of the volume. This can be either positive or negative depending on the growth of used space in the volume. Range: [-2^44-1..2^44-1]
- maximum-size => integer, optional
Maximum size in bytes that this volume will be grown up to automatically by ONTAP. This is returned only if is-autosize-enabled is true. Range: [0..2^63-1]
- overwrite-reserve-total => integer, optional
Total number of bytes reserved for data overwrites. This is the space reserved for overwriting LUNs and other space-reserved files when the volume has snapshots and afs-avail is zero. Range: [0..2^63-1]
- snapshot-reserve-avail => integer, optional
Number of available bytes in snapshot reserve for this volume. If snapshot-reserve-used is greater than snapshot-reserve-total, this value will be zero. Range: [0..2^63-1]
- snapshot-reserve-used => integer, optional
Total number of bytes used to hold snapshot data. This can be greater than the snapshot reserve size but will not include any space used out of the overwrite reserve. Range: [0..2^63-1]
- space-allocated-from-aggr => integer
Bytes allocated to the volume from the aggregate. For volumes with space-guarantee "volume", this is the total size of the volume, whereas for volumes with "file" or "none" guarantee, it is the used space from the aggregate. Range: [0..2^44-1]
Copyright (c) 1994-2013 NetApp, Inc. All rights reserved.
The product described in this manual may be protected by one or more U.S.A. patents, foreign patents, or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).