NAME

OCUMClassicAPI - Definitions and descriptions of the API bindings for OnCommand Unified Manager server 5.2 or earlier


SYNOPSIS

         my $s = NaServer->new($server, 1, 0); # create NaServer (server context)
         $s->set_admin_user('admin', 'password'); # provide username and password
         $s->set_server_type('DFM'); # set the server type to DFM for OnCommand Unified Manager 5.2 or earlier
         eval{ 
                 my $output = $s->dfm_about(); # use binding for dfm-about API
                 print "OCUM server version is: $output->{version}\n"; # extract the required parameter from output
         };
         if($@) { # check for any exception
                 my ($error_reason, $error_code) = $@ =~ /(.+)\s\((\d+)\)/;  # parse out error reason and error code
                 print "Error Reason: $error_reason  Code: $error_code\n";
         }


DESCRIPTION

NetApp Manageability SDK 5.3.1 provides support for Perl API bindings for both Data ONTAP APIs and OnCommand Unified Manager APIs. The Perl API bindings libraries contain interfaces to establish a connection with either the Data ONTAP server or the OnCommand Unified Manager server. By using these libraries, you can create Perl applications to access and manage the Data ONTAP server or OnCommand Unified Manager server.

NetApp Manageability SDK 5.3.1 Perl API bindings provide a runtime library, NaServer.pm, which is available at <installation_folder>/lib/perl/NetApp. This library file enables you to establish a server connection, send requests and receive responses, and interpret error messages. Each binding can be called as a subroutine of the NaServer module, which in turn invokes the corresponding Data ONTAP or OnCommand Unified Manager API.
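As a minimal sketch of this workflow, the following script loads the library and invokes one binding. The host name and credentials are placeholders, and the use lib path repeats the placeholder path given above; substitute your actual installation folder.

```perl
#!/usr/bin/perl
use strict;
use warnings;
# Make the SDK runtime library visible; substitute your installation folder.
use lib '<installation_folder>/lib/perl/NetApp';
use NaServer;

my $s = NaServer->new('ocum.example.com', 1, 0);  # placeholder host
$s->set_admin_user('admin', 'password');          # placeholder credentials
$s->set_server_type('DFM');

# Invoke a binding; the output is a hash reference keyed by output names.
my $output = eval { $s->dfm_about() };
die "dfm-about failed: $@" if $@;
print "OCUM server version: $output->{version}\n";
```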


API BINDINGS

aggregate_list_info_iter_end

The aggregate-list-info-iter-* set of APIs is used to retrieve the list of aggregates. aggregate-list-info-iter-end tells the DFM station that the temporary store used by DFM to support the aggregate-list-info-iter-next API for the given tag is no longer needed.

Inputs

Outputs

aggregate_list_info_iter_next

For more information, see aggregate-list-info-iter-start. The aggregate-list-info-iter-next API iterates over the aggregates held in the temporary store created by the aggregate-list-info-iter-start API.

Inputs

Outputs

aggregate_list_info_iter_start

The aggregate-list-info-iter-* set of APIs is used to retrieve the list of aggregates in DFM. The aggregate-list-info-iter-start API loads the list of aggregates into a temporary store and returns a tag that identifies that store, so that subsequent APIs can iterate over the aggregates in it. If aggregate-list-info-iter-start is invoked twice, two distinct temporary stores are created. If neither aggregate-name-or-id nor aggr-group-name-or-id is provided, all aggregates are listed. If only one of the two is provided, the named aggregate or all aggregates in the named group, respectively, are listed. If both aggregate-name-or-id and aggr-group-name-or-id are provided, the aggregate is listed only if it is under the specified group.
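Assuming binding inputs are passed as name/value pairs using the API's element names, and outputs come back as a hash reference as in the SYNOPSIS, the start/next/end cycle can be sketched as follows. The output field names 'tag' and 'records' and the input name 'maximum' are assumptions to verify against the Inputs/Outputs tables.

```perl
# Sketch of the iter-start/next/end pattern, using $s from the SYNOPSIS.
eval {
    # Load all aggregates into a temporary store; keep its tag.
    my $start   = $s->aggregate_list_info_iter_start();
    my $tag     = $start->{tag};      # assumed output name
    my $records = $start->{records};  # assumed output name

    # Retrieve up to $records entries from the store.
    my $next = $s->aggregate_list_info_iter_next('tag'     => $tag,
                                                 'maximum' => $records);

    # Tell the server the temporary store is no longer needed.
    $s->aggregate_list_info_iter_end('tag' => $tag);
};
print "Iteration failed: $@\n" if $@;
```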

Inputs

Outputs

aggregate_modify

Modify an aggregate's information. If modification of any property fails, nothing is changed.
Error Conditions:
Error Conditions:

Inputs

Outputs

aggregate_space_management_add_operation

Add a space management operation as part of this space management session. The actual operation is not carried out unless aggregate-space-management-commit is called. If the operation cannot be carried out, ESPACEMGMTCONFLICTOP is returned. The following rules apply when adding an operation to the session: if the operation leads to errors in the returned dry-run-results, it is not added to the session.

Inputs

Outputs

aggregate_space_management_begin

Open a space management session to run space management operations on an aggregate.

This allows adding a set of space management operations in a session, getting the difference in space consumption due to this set of operations, and then committing all the operations. A space management session must be started before invoking the following ZAPIs:

Use aggregate-space-management-commit to commit the changes and to start jobs which will carry out the space management operations.

Use aggregate-space-management-rollback to roll back the session. This will not submit any jobs for space management operations.

After 24 hours, a session can be opened on the same aggregate by another client without the force option. This will cause any space management operations that were part of the session to be discarded.

Inputs

Outputs

aggregate_space_management_commit

Commit the space management operations added as part of this space management session.

The session that was opened on the aggregate will be released once all the space management jobs that were part of the session are queued to be executed eventually.

This will not wait for the jobs to be executed.

Use the dry-run option to test the commit. It returns a set of dry-run results for each space management operation that was added as part of this session.

dry-run-results is the set of steps that the server will take to carry out the space management operation.

When dry-run is true, it also returns the projected used and committed space of the aggregate on which the space management session was opened and other dependent aggregates in case of migration operations.

If dry-run is false, then before the call returns, the system submits jobs to the provisioning engine to execute.
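Putting the session calls together, a hedged sketch of the cycle (using $s from the SYNOPSIS; the 'dry-run' flag is named in the text above, but 'aggregate-name-or-id' as a begin input and the name/value calling convention are assumptions to check against the Inputs tables):

```perl
eval {
    # Open a session on the aggregate; operations are queued, not executed.
    $s->aggregate_space_management_begin('aggregate-name-or-id' => 'aggr1');

    # Operations would be queued here with
    # aggregate-space-management-add-operation.

    # Preview: dry-run returns per-operation steps and projected space
    # usage without submitting any jobs.
    my $preview = $s->aggregate_space_management_commit('dry-run' => 'true');

    # Real commit: jobs are submitted and the session is released.
    $s->aggregate_space_management_commit('dry-run' => 'false');
};
if ($@) {
    my $err = $@;   # save before the next eval clobbers it
    # Discard queued operations and release the session on failure.
    eval { $s->aggregate_space_management_rollback() };
    print "Space management failed: $err\n";
}
```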

Inputs

Outputs

aggregate_space_management_remove_operation

Remove a space management operation that was added as part of this space management session.

Inputs

Outputs

aggregate_space_management_rollback

Release the session opened for space management on an aggregate. All the space management operations that were submitted as part of the space management session would be discarded.

Inputs

Outputs

alarm_create

Create a DFM alarm. The alarm-info element specifies all the parameters of the new alarm.

Note that it is possible to specify a combination of event-name, event-severity, and event-class such that the alarm will never be triggered. It is the user's responsibility to verify that these settings are useful.

Error Conditions:

Inputs

Outputs

alarm_destroy

Delete an alarm.

Error Conditions:

Inputs

Outputs

alarm_get_defaults

Get the default values of attributes defined by this ZAPI set.

Inputs

Outputs

alarm_list_info_iter_end

Ends listing of alarms.

Inputs

Outputs

alarm_list_info_iter_next

Returns items from list generated by alarm-list-info-iter-start.

Inputs

Outputs

alarm_list_info_iter_start

List all configured alarms.

Inputs

Outputs

alarm_modify

Modify settings of a DFM alarm.

Error Conditions:

Inputs

Outputs

alarm_test

Test an alarm by performing its trigger actions. The test is performed irrespective of whether the alarm is enabled or disabled.

Error Conditions:

Inputs

Outputs

api_proxy

Proxy an API request to a third party and return the API response.

Inputs

Outputs

application_policy_copy

Create a new application policy by making a copy of an existing policy. The new policy created using this ZAPI has the same set of properties as the existing policy.
Error conditions:

Inputs

Outputs

application_policy_create

This API creates a new application policy. Error conditions:

Inputs

Outputs

application_policy_destroy

Destroy an application policy. This removes it from the database.

If the policy has been applied to any dataset nodes, then the destroy operation fails; it must first be disassociated from all the dataset nodes to which it has been associated and then destroyed. Error conditions:

Inputs

Outputs

application_policy_edit_begin

Create an edit session and obtain an edit lock on an application policy to begin modifying the policy.

An edit lock must be obtained before invoking application-policy-modify.

Use application-policy-edit-commit to end the edit session and commit the changes to the database.

Use application-policy-edit-rollback to end the edit session and discard any changes made to the policy.

24 hours after an edit session on a policy begins, any subsequent call to application-policy-edit-begin for that same policy automatically rolls back the existing edit session and begins a new edit session, just as if the call had used the force option. If there is no such call, the existing edit session simply continues and retains the edit lock.
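The edit-lock cycle described above can be sketched as follows (using $s from the SYNOPSIS; 'policy-name-or-id' is an illustrative input name and the name/value calling convention is an assumption; the modify call is left as a comment since its inputs are documented in its own section):

```perl
eval {
    # Acquire the edit lock before any modification.
    $s->application_policy_edit_begin('policy-name-or-id' => 'backup_policy');

    # ... call application-policy-modify here ...

    # Dry-run first: validates the changes without committing them.
    $s->application_policy_edit_commit('dry-run' => 'true');

    # Commit for real; the edit lock is released whether the commit
    # succeeds or is rolled back.
    $s->application_policy_edit_commit('dry-run' => 'false');
};
if ($@) {
    my $err = $@;   # save before the next eval clobbers it
    # Discard the pending changes and release the lock.
    eval { $s->application_policy_edit_rollback() };
    print "Policy edit failed: $err\n";
}
```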


Error conditions:

Inputs

Outputs

application_policy_edit_commit

Commit changes made to an application policy during an edit session into the database.

If all the changes to the policy are performed successfully, the entire edit is committed and the edit lock on the policy is released.

If any of the changes to the policy are not performed successfully, then the edit is rolled back (none of the changes are committed) and the edit lock on the policy is released.

Use the dry-run option to test the commit. Using this option, the changes to the policy are not committed to the database.


Error conditions:

Inputs

Outputs

application_policy_edit_rollback

Roll back changes made to an application policy. The edit lock on the policy will be released after the rollback.
Error conditions:

Inputs

Outputs

application_policy_list_iter_end

Terminate a list iteration that had been started by a call to application-policy-list-iter-start. This informs the server that it may now release any resources associated with the temporary store for the list iteration.
Error conditions:

Inputs

Outputs

application_policy_list_iter_next

Retrieve the next series of policies that are present in a list iteration created by a call to application-policy-list-iter-start. The server maintains an internal cursor pointing to the last record returned. Subsequent calls to application-policy-list-iter-next return the next maximum records after the cursor, or all the remaining records, whichever is fewer.
Error conditions:

Inputs

Outputs

application_policy_list_iter_start

Begin a list iteration over application policies.

After calling application-policy-list-iter-start, you continue the iteration by calling application-policy-list-iter-next zero or more times, followed by a call to application-policy-list-iter-end to terminate the iteration.


Error conditions:

Inputs

Outputs

application_policy_modify

This ZAPI modifies the application policy settings of an existing policy in the database with the new values specified in the input. Note: the type of an application policy cannot be modified after creation. Before modifying the policy, an edit lock must be obtained on the policy object.
Error conditions:
Error conditions:

Inputs

Outputs

audit_log_add_entry

Log an entry in the audit log file of DataFabric Manager.

Inputs

Outputs

cifs_domain_list_info_iter_end

Terminate a view list iteration and clean up any saved info.

Inputs

Outputs

cifs_domain_list_info_iter_next

Returns items from a previous call to cifs-domain-list-info-iter-start.

Inputs

Outputs

cifs_domain_list_info_iter_start

Initiates a query for a list of CIFS domains on hosts discovered by DFM.

Inputs

Outputs

client_registry_destroy

Remove one or all name/value pairs from the persistent store.

Inputs

Outputs

client_registry_get

Retrieve one or all name/value string pairs.

Inputs

Outputs

client_registry_set

Store one or more name/value string pairs.

Inputs

Outputs

comment_field_create

Create a comment field.
Error conditions:

Inputs

Outputs

comment_field_destroy

Destroy a comment field.
Error conditions:

Inputs

Outputs

comment_field_list_info_iter_end

Terminate a list iteration that had been started by a call to comment-field-list-info-iter-start. This informs the server that it may now release any resources associated with the temporary store for the list iteration.
Error conditions:

Inputs

Outputs

comment_field_list_info_iter_next

Get the next set of comment fields in the iteration started by comment-field-list-info-iter-start.
Error conditions:

Inputs

Outputs

comment_field_list_info_iter_start

Starts iteration to list all comment fields.
Error conditions:

Inputs

Outputs

comment_field_modify

Modify a comment field. Currently only the name of the comment field can be modified.
Error conditions:

Inputs

Outputs

comment_field_values_list_info_iter_end

Terminate a list iteration that had been started by a call to comment-field-values-list-info-iter-start. This informs the server that it may now release any resources associated with the temporary store for the list iteration.
Error conditions:

Inputs

Outputs

comment_field_values_list_info_iter_next

Get the next set of comment field values in the iteration started by comment-field-values-list-info-iter-start.
Error conditions:

Inputs

Outputs

comment_field_values_list_info_iter_start

Start iteration on listing comment field values for objects.
Error conditions:

Inputs

Outputs

comment_set_object_value

Set the value of a comment field for a specified managed object.
Error conditions:

Inputs

Outputs

dataset_add_member

Add members to an existing dataset.

This API is for adding direct member objects to one or more storage sets in the dataset. Each storage set is identified by the data protection policy node associated with it.

The types of storage objects allowed to be added vary:

It is legal to add storage objects to multiple storage sets in a single call.

Inputs

Outputs

dataset_add_member_by_dynamic_reference

Add dynamic reference to an existing dataset.

By adding a dynamic reference, the volumes, qtrees, and OSSV directories referred to by the dynamic reference become implicit (indirect) members of the dataset. A dynamic reference can be added to a particular storage set in the dataset by specifying the data protection policy node associated with it. If necessary, a new storage set is first created automatically before adding storage objects to it.

The types of storage objects allowed to be added vary:

It is legal to add storage objects to multiple storage sets in a single call.

Inputs

Outputs

dataset_add_resourcepool

Add resource pool to a single storage set that is part of a dataset. The storage set is specified implicitly by the name of the policy node that maps to it.

Within the same edit session in which you call dataset-add-resourcepool, you may also add or remove members or dynamic references, or change the dataset's name, description, is-dp-ignored, is-dp-suspended, or is-suspended. You may not change the dataset's protection policy or storage sets within the same edit session.


Error conditions:

Inputs

Outputs

dataset_add_to_exclusion_list

Add datastore objects to a dataset's exclusion list.

This API is for adding datastore objects to a dataset's exclusion list.

Users may want to exclude a datastore when adding a virtual machine to the dataset. A virtual machine may span multiple datastores, and one of the datastores may contain swap files that do not need to be backed up. If a datastore is excluded, relationships and backups for that datastore are not created. If a relationship already exists, it is left alone and only the backups stop. There is no impact on suspend/resume.

Inputs

Outputs

dataset_begin_failover

Begin failover of a dataset.

This API begins the process of failing over a dataset to its disaster recovery storage. The failover process will break any mirror relationships between the primary and DR storage objects and make the secondary storage available for use.

This API may only be invoked if the DR state of the dataset is "ready".

Specifically, when the ZAPI runs, the following actions will take place:

  1. The DR state of the dataset will immediately change to "failing_over".
  2. Any pending conformance tasks will be cancelled.
  3. Any jobs running against this dataset will be aborted.
  4. A failover job will be started to break any mirrors between the primary storage set and the DR storage set.
  5. Finally, the DR state of the dataset will be updated to either "failed_over" or "failover_error" based on whether the failover job succeeded or not.

Inputs

Outputs

dataset_begin_failover_script_test

Begin a test failover of a dataset that only runs the failover scripts and does not dequeue tasks, abort jobs, or change the dr-state.

Inputs

Outputs

dataset_change_dr_state

Change the disaster recovery state of a dataset.

This API sets the DR state of a dataset to a new value. Users may perform the following DR state transitions using this ZAPI:

If the "allow-internal-transitions" element is present and true, the caller may make additional state transitions:

These state transitions are intended to be used by Protection Manager server processes as part of a failover or failback sequence. Any attempt to perform any other state transition fails with an error code of EDATASETWRONGDRSTATE.

The DR state of a dataset will also change in one special case:

When the edit session with these modifications is committed, the DR state will be set back to "ready".

Inputs

Outputs

dataset_compute_usage_metric

This API computes space and/or I/O usage metrics for a dataset. The dataset-space-metric-list-info-iter-* set of APIs can be used to retrieve the computed space metrics, and the dataset-io-metric-list-info-iter-* set of APIs can be used to retrieve the computed I/O metrics.

Inputs

Outputs

dataset_conform_begin

Begin a conformance run on a dataset, to attempt to bring it into conformance with its data protection policy and provisioning policy. A conformance run consists of two main steps: In addition, the dataset's conformance status is updated at various points during the conformance run. Whenever the conformance status is updated, an event of type "dataset.conformance" is generated.

Successful completion of this ZAPI indicates that: the conformance check has completed successfully, the dataset's conformance status has been updated based on the results of the conformance check, and the system has begun to take any needed conformance actions.

After the ZAPI returns, the system continues to perform conformance actions in the background, until all actions complete. Once all actions have completed, the dataset's conformance status is again updated. Note that at present, there is no ZAPI interface for determining when all actions have completed.

If no policy has been assigned to the dataset, the conformance run completes immediately and performs no conformance actions.


Error conditions:

Inputs

Outputs

dataset_create

Create a new, empty dataset.

Inputs

Outputs

dataset_destroy

Destroy a dataset. The dataset must be empty unless the "force" option is used.

Inputs

Outputs

dataset_dynamic_reference_list_info_iter_end

Ends iteration of the dynamic references in the dataset.

Inputs

Outputs

dataset_dynamic_reference_list_info_iter_next

Get next records in the iteration started by dataset-dynamic-reference-list-info-iter-start

Inputs

Outputs

dataset_dynamic_reference_list_info_iter_start

Starts iteration to list the dynamic references in the dataset. Volumes are not considered dynamic references because they are members (even though they can contain qtrees).

Inputs

Outputs

dataset_edit_begin

Obtain an edit lock to start modifying a dataset.

Besides locking the dataset itself, all storage sets in the dataset are locked, as well as the data protection policy if one is assigned.

An edit lock must be obtained before invoking the following ZAPIs:

Use dataset-edit-commit to commit the changes to the database.

Use dataset-edit-rollback to undo the changes made to the dataset.

After 24 hours, the lock can be taken by another client without the force option. This will cause any edits pending on the aborted session to be lost.

Inputs

Outputs

dataset_edit_commit

Commit changes made to a dataset into the database.

The edit lock on the dataset will be released after the changes have been successfully committed.

Use the dry-run option to test the commit. It invokes the conformance checker to return a list of actions that would be taken should the changes be actually committed. The dry-run option also returns a list of high level alerts to notify the user of rebaseline operations or system level issues related to successful conformance.

If dry-run is false, then before the call returns, the system begins a conformance run on the dataset. (See dataset-conform-begin for a description of conformance runs.) If the system is to perform a conformance run on the dataset, it uses the current dataset edit session value for assume-confirmation. The default value for assume-confirmation is initially true when the edit session begins, but may be altered by certain changes to the dataset made through the use of dataset-modify. The optional assume-confirmation option may be used to specify whether user confirmation is to be assumed for this dataset-edit-commit. One key, and sometimes undesirable, resolvable action that requires user confirmation is the possible re-baseline of a relationship.
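The dataset edit cycle can be sketched in the same hedged style (using $s from the SYNOPSIS; 'dataset-name-or-id' is an illustrative input name and the name/value calling convention is an assumption to verify against the Inputs tables):

```perl
eval {
    # Lock the dataset, its storage sets, and any assigned protection policy.
    $s->dataset_edit_begin('dataset-name-or-id' => 'payroll_ds');

    # ... make changes here, e.g. with dataset-add-member or dataset-modify ...

    # Preview: dry-run returns the conformance actions and rebaseline
    # alerts a real commit would trigger, without committing anything.
    my $preview = $s->dataset_edit_commit('dry-run' => 'true');

    # Commit; a conformance run then continues in the background.
    $s->dataset_edit_commit('dry-run'             => 'false',
                            'assume-confirmation' => 'true');
};
if ($@) {
    my $err = $@;   # save before the next eval clobbers it
    eval { $s->dataset_edit_rollback() };  # undo changes, release the lock
    print "Dataset edit failed: $err\n";
}
```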

Inputs

Outputs

dataset_edit_rollback

Roll back changes made to a dataset. The edit lock on the dataset will be released after the rollback.

Inputs

Outputs

dataset_exclusion_list_info_iter_end

Ends iteration of dataset exclusion list.

Inputs

Outputs

dataset_exclusion_list_info_iter_next

Get next records in the iteration started by dataset-exclusion-list-info-iter-start

Inputs

Outputs

dataset_exclusion_list_info_iter_start

Starts iteration to retrieve the backup exclusion list of the dataset.

Inputs

Outputs

dataset_io_metric_list_info_iter_end

Ends iteration to list dataset I/O usage metric.

Inputs

Outputs

dataset_io_metric_list_info_iter_next

Get next few records in the iteration started by dataset-io-metric-list-info-iter-start.

Inputs

Outputs

dataset_io_metric_list_info_iter_start

Starts iteration to list dataset's I/O usage measurements.

Inputs

Outputs

dataset_list_info_iter_end

Ends iteration to list datasets.

Inputs

Outputs

dataset_list_info_iter_next

Get next few records in the iteration started by dataset-list-info-iter-start.

If a dataset has a data protection policy assigned to it, the has-protection-policy field will be true. If the client has suspended the dataset, has-protection-policy is still true, but is-dp-suspended and is-suspended fields are also set to true to reflect this. When the client sets is-dp-ignored to true, nothing changes, except that, when the client requests the list of datasets which are not ignored, the ignored datasets will not be returned.

Inputs

Outputs

dataset_list_info_iter_start

Starts iteration to list datasets.

Inputs

Outputs

dataset_member_dedupe_abort

Abort in-progress dedupe operation on the given volume member of the dataset.

Inputs

Outputs

dataset_member_dedupe_start

Start deduplication operation on specified volume member of the given dataset.

Inputs

Outputs

dataset_member_delete_snapshots

Delete Snapshot copies of a volume which is a member of the effective primary node of the dataset.

The effective primary node for a non disaster recovery capable dataset or a disaster recovery capable dataset not in the DR state of "failed_over" is the root node of the dataset. The effective primary of a disaster recovery dataset in the DR state of "failed_over" is the disaster recovery capable node of the dataset.

The deletion of Snapshot copies happens in the background.

A provisioning job ID is returned by the dataset-edit-commit ZAPI that represents the job which will delete the specified Snapshot copies. The status of the job can be checked using the dp-job-list-iter ZAPIs with the given job-id. Error conditions:

Inputs

Outputs

dataset_member_list_info_iter_end

Ends iteration of dataset members.

Inputs

Outputs

dataset_member_list_info_iter_next

Get next records in the iteration started by dataset-member-list-info-iter-start

Inputs

Outputs

dataset_member_list_info_iter_start

Starts iteration to list members of the dataset. Dynamic references are not returned by this API, nor are objects only associated with the dataset by their inclusion in a dynamic reference.

Inputs

Outputs

dataset_member_undedupe_start

Start undeduplication operation on specified volume member of the given dataset.

Inputs

Outputs

dataset_missing_member_list_info_iter_end

Ends iteration of missing dataset members.

Inputs

Outputs

dataset_missing_member_list_info_iter_next

Get next records in the iteration started by dataset-missing-member-list-info-iter-start

Inputs

Outputs

dataset_missing_member_list_info_iter_start

Starts iteration to list members of the dataset that have gone missing. The way the server determines if an object was not intentionally removed from the dataset is:
  1. The object is still a dataset member.
  2. The object's objDeleted flag is set.

Inputs

Outputs

dataset_modify

Modify attributes for a dataset.

Inputs

Outputs

dataset_modify_node

Modify attributes of a single storage set that is part of a dataset. The storage set is specified implicitly by the name of the data protection policy node that maps to it. You may change the storage set's resource pool and timezone using this call, but not its name or description.

Within the same edit session in which you call dataset-modify-node, you may also add or remove members or dynamic references, or change the dataset's name, description, is-dp-ignored, is-dp-suspended, or is-suspended. You may not change the dataset's policy or storage sets within the same edit session.


Error conditions:

Inputs

Outputs

dataset_provision_member

Provision a new member into the effective primary node of a dataset.

The effective primary node for a non disaster recovery capable dataset or a disaster recovery capable dataset not in the DR state of "failed_over" is the dataset root node. The effective primary node of a disaster recovery dataset in the DR state of "failed_over" is the disaster recovery capable node of the dataset.

Error conditions:

Inputs

Outputs

dataset_remove_from_exclusion_list

Remove datastore objects from a dataset's exclusion list.

This API is for removing datastore objects from a dataset's exclusion list.

Inputs

Outputs

dataset_remove_member

Remove member from a dataset. Only members explicitly added to the dataset (direct members) can be removed.

If "destroy" is true, even indirect members can be destroyed on the storage system and removed from the dataset; this applies only to members of the effective primary node of the dataset.

The effective primary node for a non disaster recovery capable dataset or a disaster recovery capable dataset not in the DR state of "failed_over" is the dataset root node. The effective primary node of a disaster recovery dataset in the DR state of "failed_over" is the disaster recovery capable node of the dataset.

The destroy operation happens in the background.

A provisioning job ID is returned by the dataset-edit-commit API that represents the job which will destroy the specified members. The status of the job can be checked using the dp-job-list-iter ZAPIs with the given job-id.

Inputs

Outputs

dataset_remove_member_by_dynamic_reference

Remove dynamic references from a dataset. Only dynamic_references explicitly added to the dataset can be removed.

Inputs

Outputs

dataset_remove_resourcepool

Remove resource pool from a single storage set that is part of a dataset. The storage set is specified implicitly by the name of the policy node that maps to it.

Within the same edit session in which you call dataset-remove-resourcepool, you may also add or remove members or dynamic references, or change the dataset's name, description, is-dp-ignored, is-dp-suspended, or is-suspended. You may not change the dataset's protection policy or storage sets within the same edit session.


Error conditions:

Inputs

Outputs

dataset_replace_primary_members

Replace primary members and relationships after a failover.

If a secondary volume or qtree is specified, it replaces the primary member for just that secondary volume or qtree. If neither is specified, it replaces all primary members that need to be replaced.

If dry-run is specified, it returns the results of the operation that would be taken should the operation be committed.

Inputs

Outputs

dataset_resize_member

Resize a dataset member, change its maximum capacity, or change its snap reserve on the effective primary node of the dataset.

The effective primary node for a non disaster recovery capable dataset or a disaster recovery capable dataset not in the DR state of "failed_over" is the dataset root node. The effective primary of a disaster recovery dataset in the DR state of "failed_over" is the disaster recovery capable node of the dataset.

The resize operation happens in the background.

A provisioning job ID is returned by the dataset-edit-commit ZAPI that represents the job which will resize the member as specified. The status of the job can be checked using the dp-job-list-iter ZAPIs with the given job-id. Error conditions:

Inputs

Outputs

dataset_set

Set dataset options.

Inputs

Outputs

dataset_set_storageset

Change the storage sets associated with policy nodes. It is legal to change many storage set/node mappings in an edit session.

Inputs

Outputs

dataset_space_metric_list_info_iter_end

Ends iteration to list dataset space metric.

Inputs

Outputs

dataset_space_metric_list_info_iter_next

Get next few records in the iteration started by dataset-space-metric-list-info-iter-start.

Inputs

Outputs

dataset_space_metric_list_info_iter_start

Starts iteration to list dataset's space usage measurements.

Inputs

Outputs

dataset_update_dr_status

Update disaster recovery status for a dataset.

Inputs

Outputs

dataset_update_protection_status

Update protection status for a dataset.

Inputs

Outputs

dfm_about

Retrieve information currently provided by the 'dfm about' command.

Inputs

Outputs

dfm_backup_schedule_disable

Disable an existing backup schedule.

Inputs

Outputs

dfm_backup_schedule_enable

Enable an existing backup schedule.

Inputs

Outputs

dfm_backup_schedule_get

Get the information about a DFM database backup schedule.

Inputs

Outputs

dfm_backup_schedule_set

API to configure DFM database backup schedules. A schedule can be of the 'snapshot' or 'archive' backup type. Archive backups can be scheduled with a 'daily' or 'weekly' period. Snapshot backups can be scheduled with an 'hourly', 'daily', or 'weekly' period, or every day at regular intervals starting from a specified time. Hourly backups require the 'minute' at which the backup schedule is to run every hour. A daily backup schedule requires the 'hour' and 'minute' at which it is to run. A weekly backup schedule requires the day of the week and the time (hh:mm) at which it is to run. A backup schedule can also be set to run every day at regular hourly repeat intervals starting from a specified hour:minute. Only one schedule can be set for creating DFM backups.
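As a hypothetical sketch, setting a daily snapshot backup at 01:30 might look as follows. The 'hour' and 'minute' names come from the text above, but 'backup-type' and 'frequency' are illustrative guesses; confirm every element name against the Inputs table for dfm-backup-schedule-set.

```perl
# Hedged sketch only -- element names other than 'hour'/'minute' are
# assumptions, not confirmed API inputs.
eval {
    $s->dfm_backup_schedule_set('backup-type' => 'snapshot',
                                'frequency'   => 'daily',
                                'hour'        => 1,
                                'minute'      => 30);
};
print "Failed to set schedule: $@\n" if $@;
```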

Inputs

Outputs

dfm_backup_start

API to start a database backup immediately.

Inputs

Outputs

dfm_backup_status

Get information about a DFM database backup.

Inputs

Outputs

dfm_get_api_statistics

Retrieve information about the API call frequencies on the DFM server.

Inputs

Outputs

dfm_get_resource_property_values

Gets the list of resource properties and the values that can be set as filters for thresholds. The list of resource properties is pre-defined, but the values are obtained from the current set of values in the database.

Inputs

Outputs

dfm_monitor_timestamp_list

Returns the monitoring timestamps of a host.

Inputs

Outputs

dfm_object_refresh

Request that monitors be scheduled to run to refresh the information of the specified object. The monitors to run can be specified implicitly using child-type or explicitly by providing monitor-names. Specifying both child-type and monitor-names is treated as an error.
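A sketch of a refresh request (using $s from the SYNOPSIS; the object name and monitor name are hypothetical, and the name/value calling convention is an assumption):

```perl
# Refresh one object, naming a monitor explicitly; passing both
# child-type and monitor-names would be an error, so only one is given.
eval {
    $s->dfm_object_refresh('object-name-or-id' => 'filer1',
                           'monitor-names'     => 'cpu');
};
print "Refresh request failed: $@\n" if $@;
```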

Inputs

Outputs

dfm_objects_get_status

Get the status of DFM objects. This API always returns true and returns the status for all the objects that are passed in. An object-name-or-id of "0" indicates the "global group". The privilege required is Read.

Inputs

Outputs

dfm_related_objects_list_info

Retrieve information about objects related to a DFM object. This API takes an object as input and returns information about the parent objects of that object; the resource groups, datasets, and resource pools the object belongs to; and the objects that belong to the specified object. The privilege required is DFM.Database.Read on the specified object. Parent output objects are returned only if the authenticated user has DFM.Database.Read privilege on that parent object. For example, the group to which an object belongs is returned only if the authenticated user has DFM.Database.Read privilege on that group.

Inputs

Outputs

dfm_server_list_diagnostic_info

Retrieve server diagnostic information.

Inputs

Outputs

dfm_snmp_setting_add

Add SNMP credential settings for a host or network in DFM. This credential is used when discovering networks in DFM.

Inputs

Outputs

dfm_snmp_setting_delete

Delete networks from DFM so that discovery does not happen for them. Either snmp-id or both network-address and prefix-length must be provided; providing both, or providing neither, is considered invalid input.

Inputs

Outputs

dfm_snmp_setting_modify

Modify an SNMP credential in DFM so that network discovery uses the new SNMP credential.

Inputs

Outputs

dfm_snmp_settings_list_info

Returns the list of SNMP credential settings available in the DFM database. Either snmp-id or both network-address and prefix-length may be provided. Providing both is considered invalid input; providing neither returns all the SNMP credentials.

Inputs

Outputs

dfm_user_priv_get

Retrieve the current user's global privilege. This API is no longer the preferred way to get user privileges; use rbac-access-check instead.

Inputs

Outputs

dfm_schedule_content_get

Get the content of a given schedule.

Inputs

Outputs

dfm_schedule_create

Create a new schedule with the given name. The schedule type may be daily, weekly, or monthly.

Inputs

Outputs

dfm_schedule_daily_add

Create a single schedule within a daily schedule.
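
A daily schedule with one entry might be built with dfm-schedule-create followed by dfm-schedule-daily-add, as sketched below. The argument names and the hh:mm time format are assumptions for illustration only:

        # Hypothetical sketch: create a daily schedule, then add one run time.
        eval {
            $s->dfm_schedule_create(
                'schedule_name', 'nightly',   # illustrative name
                'schedule_type', 'daily',
            );
            $s->dfm_schedule_daily_add(
                'schedule_name_or_id', 'nightly',
                'time', '01:15',              # assumed hh:mm format
            );
        };
        print "Error: $@\n" if $@;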

Inputs

Outputs

dfm_schedule_daily_delete

Delete a single schedule within a daily schedule.

Inputs

Outputs

dfm_schedule_daily_modify

Modify a single schedule within a daily schedule. Sample schedules cannot be modified.

Inputs

Outputs

dfm_schedule_dependency

Return a list of the other DP policies, report schedules, and schedules that use the specified schedule.

Inputs

Outputs

dfm_schedule_destroy

Delete a schedule with the given name or ID. A schedule that is used by other schedules cannot be deleted; an error is returned. Sample schedules cannot be destroyed.

Inputs

Outputs

dfm_schedule_hourly_add

Create an hourly schedule within a daily schedule. An hourly schedule specifies the frequency of schedules to be run within the start time and end time.

Inputs

Outputs

dfm_schedule_hourly_delete

Delete an hourly schedule within a daily schedule.

Inputs

Outputs

dfm_schedule_hourly_modify

Modify an hourly schedule within a daily schedule. Sample schedules cannot be modified.

Inputs

Outputs

dfm_schedule_list

List all existing schedule IDs and types.

Inputs

Outputs

dfm_schedule_list_info_iter_end

Tell the DFM station that the temporary store associated with the specified tag is no longer necessary.

Inputs

Outputs

dfm_schedule_list_info_iter_next

Iterate over the list of schedules stored in the temporary store. The DFM station internally maintains a cursor pointing to the last record returned. Subsequent calls to this API return the records after the cursor, up to the specified "maximum" or the number of records remaining, whichever is fewer.

Inputs

Outputs

dfm_schedule_list_info_iter_start

The dfm-schedule-list-info-iter-* set of APIs are used to retrieve a list of schedule contents.

Inputs

Outputs

dfm_schedule_modify

Modify a schedule's details in the database. When the ZAPI is called, all details within the schedule are removed and replaced by the new details specified in schedule-content. Sample schedules cannot be modified. schedule-id and schedule-type cannot be modified.

Inputs

Outputs

dfm_schedule_monthly_add

Specify a single schedule within a monthly schedule. Either day-of-month, or both week-of-month and day-of-week must be specified.

Inputs

Outputs

dfm_schedule_monthly_delete

Delete a single schedule within a monthly schedule.

Inputs

Outputs

dfm_schedule_monthly_modify

Modify a single schedule within a monthly schedule. Sample schedules cannot be modified.

Inputs

Outputs

dfm_schedule_monthly_subschedule_set

Specify a sub-schedule to be used by a monthly schedule. On top of the individual monthly events, a monthly schedule may have only one daily subschedule or one weekly subschedule. If this monthly schedule already has a daily or weekly subschedule, this command replaces the old one.

Inputs

Outputs

dfm_schedule_monthly_subschedule_unset

Unset a sub-schedule used by a monthly schedule.

Inputs

Outputs

dfm_schedule_rename

Rename a schedule. Sample schedules cannot be renamed.

Inputs

Outputs

dfm_schedule_weekly_add

Specify a single schedule within a weekly schedule.

Inputs

Outputs

dfm_schedule_weekly_delete

Delete a single schedule within a weekly schedule.

Inputs

Outputs

dfm_schedule_weekly_modify

Modify a single schedule within a weekly schedule. Sample schedules cannot be modified.

Inputs

Outputs

dfm_schedule_weekly_subschedule_add

Specify which daily schedule will be used on a certain range of days within a weekly schedule.

Inputs

Outputs

dfm_schedule_weekly_subschedule_delete

Specify which daily schedule is to be deleted within a weekly schedule.

Inputs

Outputs

dfm_schedule_weekly_subschedule_modify

Specify which daily schedule will be used on a certain range of days within a weekly schedule. Permanent or sample schedules cannot be modified.

Inputs

Outputs

disk_list_info_iter_end

The disk-list-info-iter-* set of APIs are used to retrieve the list of disks. disk-list-info-iter-end is used to tell the DFM station that the temporary store used by DFM to support the disk-list-info-iter-next API for the particular tag is no longer necessary.

Inputs

Outputs

disk_list_info_iter_next

For more documentation please check disk-list-info-iter-start. The disk-list-info-iter-next API is used to iterate over the members of the disks stored in the temporary store created by the disk-list-info-iter-start API.

Inputs

Outputs

disk_list_info_iter_start

The disk-list-info-iter-* set of APIs are used to retrieve the list of disks in DFM. disk-list-info-iter-start returns the disks belonging to the specified objects. It loads the list of disks into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the disks in the temporary store. If disk-list-info-iter-start is invoked twice, two distinct temporary stores are created.
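
The start/next/end pattern described above can be sketched as follows. The 'tag', 'records', and 'maximum' element names follow the common NMSDK iterator convention but should be verified against the API reference; the record structure itself is API-specific, so it is just dumped:

        # Sketch of the iterator pattern (field names assumed).
        use Data::Dumper;
        eval {
            my $start = $s->disk_list_info_iter_start();
            my $tag   = $start->{tag};                 # handle to the temporary store
            while (1) {
                my $next = $s->disk_list_info_iter_next('tag', $tag, 'maximum', 50);
                last unless $next->{records};          # stop when nothing is left
                print Dumper($next);                   # record structure is API-specific
            }
            $s->disk_list_info_iter_end('tag', $tag);  # release the temporary store
        };
        print "Error: $@\n" if $@;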

Inputs

Outputs

dp_backup_content_list_iter_end

Ends iteration to list file contents of a backup.

Inputs

Outputs

dp_backup_content_list_iter_next

Get next few records in the iteration started by dp-backup-content-list-iter-start.

Inputs

Outputs

dp_backup_content_list_iter_start

Starts iteration to list contents of a backup. Files and directories directly under specified path are listed. In order to list all files recursively, multiple invocations of this API are necessary.

Inputs

Outputs

dp_backup_get_location

Given a backup-id or the backup-version and a list of member-ids or member-names, returns the snapshot name, the volume it exists on, and the secondary qtree associated with the member ID. If more than one snapshot matches the backup-id or backup-version, only one, the first item in the resulting list, is returned. Multiple matches occur when the same backup version exists on multiple nodes. In the case of multiple matches, which snapshot gets picked is unspecified.

Inputs

Outputs

dp_backup_list_iter_end

Ends iteration to list backups.

Inputs

Outputs

dp_backup_list_iter_next

Get next few records in the iteration started by dp-backup-list-iter-start.

Inputs

Outputs

dp_backup_list_iter_start

Starts iteration to list backups of a dataset or a path within dataset.

Inputs

Outputs

dp_backup_mount_start

Mount a backup containing virtual infrastructure objects such as VMs or datastores. A background job will be started for this operation.

Inputs

Outputs

dp_backup_start

Start an unscheduled backup of a dataset. All members or a subset of the dataset will be backed up. A background job will be spawned to back up the dataset.
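
An on-demand backup might be started as in the sketch below. The input element name is an assumption, and the response fields (which describe the spawned job) are simply dumped rather than guessed:

        # Hypothetical sketch: start an on-demand backup of dataset 'ds1'.
        use Data::Dumper;
        eval {
            my $out = $s->dp_backup_start('dataset_name_or_id', 'ds1');
            print Dumper($out);   # response carries details of the spawned job
        };
        print "Error: $@\n" if $@;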

Inputs

Outputs

dp_backup_unmount_start

Unmount a mounted backup. A background job is started for this operation.

Inputs

Outputs

dp_backup_version_create

Creates a backup version. A backup version is a collection of volume snapshots and denotes a single backed-up image of a dataset. The management station keeps track of the actual volumes that hold the dataset backup using backup versions.

Inputs

Outputs

dp_backup_version_delete

Delete an existing backup version from the database and delete the snapshots from the storage systems for this backup version. The backup version must currently exist and must not be mounted. This API does not provide transaction semantics. When API returns an error, the backup version may have been partially deleted.
Error conditions:

Inputs

Outputs

dp_backup_version_list_iter_end

Ends iteration to list backup versions.

Inputs

Outputs

dp_backup_version_list_iter_next

Get next few records in the iteration started by dp-backup-version-list-iter-start.

Inputs

Outputs

dp_backup_version_list_iter_start

Starts iteration to list all backup versions for a given dataset. Information returned includes the IDs of each instance, the propagation status of each version, and the ID of the job responsible for the backup. Clients should use this API if they want a list of backup versions rather than backup instances.

Inputs

Outputs

dp_backup_version_modify

Modifies a backup version. Either the backup-id or some combination of the dataset-name-or-id, node-name-or-id and backup-version are used to specify an individual backup instance or a set of backup instances which represent the same backup version. Certain inputs, such as is-adding-members, dp-backup-transfer-info and version-members, can only be applied to a single backup instance. When a backup instance is transferred from one node to another, the attributes of the backup instance, backup-description and backup-metadata, are copied at the beginning of the transfer. Any changes to these fields made after a backup instance has been copied will not be propagated automatically. Specify only the dataset-name-or-id and backup-version to update the fields of all the backup instances with the same backup-version.

Inputs

Outputs

dp_get_dataset_backup_jobs_data

Returns the set of jobs to be spawned to back up the dataset. This ZAPI is used to obtain data that is later used to start an on-demand backup of a dataset.

Inputs

Outputs

dp_dashboard_get_dr_dataset_counts

Return the number of disaster-recovery-enabled datasets in all dr-state and dr-status combinations. The number of datasets in each distinct dr-state/dr-status combination is returned.
Error conditions:

Inputs

Outputs

dp_dashboard_get_lagged_datasets

Get a list of the most lagged datasets. This API returns a list of some or all datasets, sorted by lag time. Only the dataset's name, ID, and worst relationship lag time are returned.
Error conditions:

Inputs

Outputs

dp_dashboard_get_lagged_relationships

Get a list of the most lagged relationships. This API returns a list of some or all relationships, sorted by lag time. Only the relationship name, ID, and lag time are returned.
Error conditions:

Inputs

Outputs

dp_dashboard_get_protected_data_counts

This ZAPI call was deprecated in the Juno (4.0) release because the definitions of "protected" and "unprotected" objects changed. Use dp-dashboard-get-protected-data-counts-2 instead. Get counts for certain types of objects for displaying on the Protection Manager Dashboard. The types of objects are: datasets, volumes, qtrees, and OSSV directories. For each object type, the number of protected objects, unprotected objects, and ignored objects is returned. An object is considered to be protected if it is a member of a dataset, and a dataset is considered to be protected if it has a protection policy. Objects are unprotected if they are neither protected nor ignored.
Error conditions:

Inputs

Outputs

dp_dashboard_get_protected_data_counts_2

Get counts for certain types of objects for displaying on the Protection Manager Dashboard. The types of objects are: datasets, volumes, qtrees, OSSV directories and application resources. For each object type, the number of protected, unprotected and ignored objects are returned.
An object is considered to be protected if it is:
Error conditions:

Inputs

Outputs

dp_job_abort

Abort a job. A request is sent to abort the job. The job goes into an aborting state and is aborted after some time.

Inputs

Outputs

dp_job_cleanup

Purge a specified job from the database. A running job cannot be purged. When a job is purged from the database, all information about it is lost.

Inputs

Outputs

dp_job_list_iter_end

Ends iteration to list jobs.

Inputs

Outputs

dp_job_list_iter_next

Get next few records in the iteration started by dp-job-list-iter-start.

Inputs

Outputs

dp_job_list_iter_start

Starts iteration to list jobs. The jobs that match all the specified filters will be returned.

Inputs

Outputs

dp_job_progress_event_list_iter_end

Ends iteration to list progress of job.

Inputs

Outputs

dp_job_progress_event_list_iter_next

Get next few records in the iteration started by dp-job-progress-event-list-iter-start.

Inputs

Outputs

dp_job_progress_event_list_iter_start

Starts iteration to list job progress events. The event can be one of the following types.

Inputs

Outputs

dp_job_purge

Purge all completed jobs from the database. Purged jobs are removed from the database; all information about them is lost.

Inputs

Outputs

dp_job_schedule_get_last_changed

Gets the time when the job schedule was last changed. This is used by the scheduler service to reload the list of jobs that need to run in the future. Jobs that are already running are not affected.

Inputs

Outputs

dp_job_schedule_list_iter_end

Ends iteration to list scheduled jobs.

Inputs

Outputs

dp_job_schedule_list_iter_next

Get next few records in the iteration started by dp-job-schedule-list-iter-start.

Inputs

Outputs

dp_job_schedule_list_iter_start

Starts iteration to list the jobs that the scheduler needs to start in a specified time range. The scheduler service periodically requests a list of scheduled data protection jobs that will need to start within the next few hours. Each item in this list is a description of what needs to be done and the time when it needs to be done.
For example, a scheduled job might be: take a local snapshot on node 1 of dataset ds1 at 05/10/2006 9:00 AM UTC using "hourly" retention.
The scheduler service is responsible for storing the job in the database and starting the job at its start time.

Inputs

Outputs

dp_ossv_application_list_info_iter_end

Terminate an iteration started by dp-ossv-application-list-info-iter-start and clean up any saved info.
Error conditions:

Inputs

Outputs

dp_ossv_application_list_info_iter_next

Returns items from a previous call to dp-ossv-application-list-info-iter-start.

Inputs

Outputs

dp_ossv_application_list_info_iter_start

Browse the application components supported by an OSSV host.

Inputs

Outputs

dp_ossv_application_restore_destination_list_info_iter_end

Terminate an iteration started by dp-ossv-application-restore-destination-list-info-iter-start and clean up any saved info.
Error conditions:

Inputs

Outputs

dp_ossv_application_restore_destination_list_info_iter_next

Returns items from a previous call to dp-ossv-application-restore-destination-list-info-iter-start.

Inputs

Outputs

dp_ossv_application_restore_destination_list_info_iter_start

List all the OSSV hosts through which a restore to the original location is possible for a given application.

Inputs

Outputs

dp_ossv_directory_add

Add a new OSSV directory to the list of discovered directories. If the directory already exists, return its object ID.
Error conditions:

Inputs

Outputs

dp_ossv_directory_browse_iter_end

Terminate an iteration started by dp-ossv-directory-browse-iter-start and clean up any saved info.
Error conditions:

Inputs

Outputs

dp_ossv_directory_browse_iter_next

Returns items from a previous call to dp-ossv-directory-browse-iter-start.
Error conditions:

Inputs

Outputs

dp_ossv_directory_browse_iter_start

Get the list of subdirectories beneath a given directory on an OSSV host.
Error conditions:

Inputs

Outputs

dp_ossv_directory_discovered_iter_end

Terminate an iteration started by dp-ossv-directory-discovered-iter-start and clean up any saved info.
Error conditions:

Inputs

Outputs

dp_ossv_directory_discovered_iter_next

Returns items from a previous call to dp-ossv-directory-discovered-iter-start.
Error conditions:

Inputs

Outputs

dp_ossv_directory_discovered_iter_start

List the OSSV directories which have been discovered by the monitor. This list includes OSSV roots, and siblings of directories in backup relationships. The list can be filtered to exclude directories which are marked as "ignored" and to exclude directories which are protected.
Error conditions:

Inputs

Outputs

dp_ossv_directory_modify

Modify a directory's information. If modification of any property fails, nothing is changed.
Error conditions:

Inputs

Outputs

dp_ossv_directory_roots_iter_end

Terminate an iteration started by dp-ossv-directory-roots-iter-start and clean up any saved info.
Error conditions:

Inputs

Outputs

dp_ossv_directory_roots_iter_next

Returns items from a previous call to dp-ossv-directory-roots-iter-start.
Error conditions:

Inputs

Outputs

dp_ossv_directory_roots_iter_start

Get the list of filesystem roots from one or more OSSV hosts.
Error conditions:

Inputs

Outputs

dp_policy_copy

Create a new DP policy by copying from an existing "template" policy.

The new policy created using this ZAPI has the same set of nodes and connections as the template policy, and the same property values for each node and connection.


Error conditions:

Inputs

Outputs

dp_policy_destroy

Destroy a DP policy. This removes it from the database.

If the policy has been applied to any datasets, then the destroy operation fails; you must first un-map the policy from any datasets to which it had been applied before you may destroy the policy.

If this ZAPI fails for any reason, the DP policy edit of which it was a part is not rolled back. Instead, the edit is restored to the state it was in prior to invoking this ZAPI.

After this ZAPI successfully completes, any subsequent calls in the same edit session to dp-policy-destroy or dp-policy-modify fail with error code EOBJECTNOTFOUND.


Error conditions:

Inputs

Outputs

dp_policy_edit_begin

Create an edit session and obtain an edit lock on a DP policy to begin modifying the policy.

An edit lock must be obtained before invoking the following APIs:

Use dp-policy-edit-commit to end the edit session and commit the changes to the database.

Use dp-policy-edit-rollback to end the edit session and discard any changes made to the policy.

24 hours after an edit session on a policy begins, any subsequent call to dp-policy-edit-begin for that same policy automatically rolls back the existing edit session and begins a new edit session, just as if the call had used the force option. If there is no such call, the existing edit session simply continues and retains the edit lock.


Error conditions:

Inputs

Outputs

dp_policy_edit_commit

Commit changes made to a DP policy during an edit session into the database.

If all the changes to the policy are performed successfully, the entire edit is committed and the edit lock on the policy is released.

If any of the changes to the policy are not performed successfully, then the edit is rolled back (none of the changes are committed) and the edit lock on the policy is released.

Use the dry-run option to test the commit. Using this option, the changes to the policy are not committed to the database. Instead, a conformance check is performed to return a list of actions that the system would take if the changes were committed by calling this ZAPI without the dry-run option.

If dry-run is false, and all changes are successfully committed, then before the call returns, the system begins a conformance run on all datasets to which the policy has been applied. (See dataset-conform-begin for a description of conformance runs.) If any needed conformance actions require user confirmation, it is assumed that such confirmation has been given, and the actions are performed.
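
The begin/modify/commit flow, with a rollback on failure, might look like the sketch below; the argument names are illustrative assumptions, and the dp-policy-modify call is elided:

        # Hypothetical sketch of a DP policy edit session.
        eval {
            $s->dp_policy_edit_begin('dp_policy_name_or_id', 'my_policy');
            # ... dp_policy_modify call with the new dp-policy-content here ...
            $s->dp_policy_edit_commit('dp_policy_name_or_id', 'my_policy');
        };
        if ($@) {
            my $err = $@;
            # discard any partial changes and release the edit lock
            eval { $s->dp_policy_edit_rollback('dp_policy_name_or_id', 'my_policy') };
            print "Edit failed: $err\n";
        }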


Error conditions:

Inputs

Outputs

dp_policy_edit_rollback

Roll back changes made to a DP policy. The edit lock on the policy will be released after the rollback.
Error conditions:

Inputs

Outputs

dp_policy_get_default_property_values

Returns default values for node and connection properties. These default values are used in calls to dp-policy-modify, in cases where the property is present for a node or connection, but the corresponding optional XML element does not appear in the dp-policy-node-info or dp-policy-connection-info structure.

Note that default values may change from release to release. This ZAPI provides a convenient way to determine the default values for the current release.


Error conditions: None.

Inputs

Outputs

dp_policy_list_iter_end

Terminate a list iteration that had been started by a call to dp-policy-list-iter-start. This informs the server that it may now release any resources associated with the temporary store for the list iteration.
Error conditions:

Inputs

Outputs

dp_policy_list_iter_next

Retrieve the next series of policies that are present in a list iteration created by a call to dp-policy-list-iter-start. The server maintains an internal cursor pointing to the last record returned. Subsequent calls to dp-policy-list-iter-next return the next maximum records after the cursor, or all the remaining records, whichever is fewer.

If a property is present for a particular node or connection (the presence or absence of each property is defined in its description), then it always appears in the output element for that node or connection from a call to dp-policy-list-iter-next. For example, the output dp-policy-connection-info element for a backup connection always contains a backup-schedule-name.

If a property is absent for a particular node or connection, then it never appears in the output element for that node or connection. For example, the output dp-policy-connection-info element for a mirror connection never contains a backup-schedule-name.


Error conditions:

Inputs

Outputs

dp_policy_list_iter_start

Begin a list iteration over all content in all DP policies in the system. Optionally, you may iterate over the content of just a single policy.

After calling dp-policy-list-iter-start, you continue the iteration by calling dp-policy-list-iter-next zero or more times, followed by a call to dp-policy-list-iter-end to terminate the iteration.


Error conditions:

Inputs

Outputs

dp_policy_modify

This ZAPI modifies a DP policy by completely replacing the policy's old content with the new content specified by the input element dp-policy-content. This ZAPI can change only a policy's name, its description, the properties of its nodes, and most of the properties of its connections. The connection property is-dr-capable cannot be changed once a policy is created. This ZAPI also cannot change the topology (the set of nodes and connections between nodes) of a policy; instead, the topology specified in dp-policy-content must match the policy's current topology. At present, there is no way to change the topology of a policy.

If a property is absent for a particular node or connection (the presence or absence of each property is defined in its description), then it is illegal for that property to appear in the input element for that node or connection in a call to dp-policy-modify. For example, it is illegal to specify a backup-schedule-name in a dp-policy-connection element for a mirror connection.


Error conditions:

Inputs

Outputs

dp_relationship_list_info_iter_end

Ends iteration to list data protection relationships.

Inputs

Outputs

dp_relationship_list_info_iter_next

Get next few records in the iteration started by dp-relationship-list-info-iter-start.

Inputs

Outputs

dp_relationship_list_info_iter_start

Starts iteration to list data protection relationships. These are SnapVault or SnapMirror relationships formed in order to implement data protection policy for a dataset. You can list relationships for a single policy connection or for a particular source or destination storage server.

Inputs

Outputs

dp_relationship_modify

Modify settings of a SnapVault, Qtree SnapMirror or Volume SnapMirror relationship.

Inputs

Outputs

dp_restore_to_new_path

Start a restore operation on part of a dataset. This operation copies whole members of the dataset, or sub-paths within them, from a specific backup version to a new location. The operation is performed in the background by a job.
Error conditions:

Inputs

Outputs

dp_restore_to_primary

Start a restore operation on part of a dataset. This operation copies files and/or directories from a specific backup version back to the primary location. The operation is performed in the background by a job.
Error conditions:

Inputs

Outputs

dp_restore_vi_start

Start an operation to restore one or more objects in the virtual infrastructure. This API cannot be used to restore an entire dataset.
Error conditions:

Inputs

Outputs

dp_schedule_content_get

Get the content of a given schedule.

Inputs

Outputs

dp_schedule_create

Create a new schedule with the given name. The schedule type may be daily, weekly, or monthly.

Inputs

Outputs

dp_schedule_daily_add

Create a single snapshot within a daily schedule.

Inputs

Outputs

dp_schedule_daily_delete

Delete a single snapshot within a daily schedule.

Inputs

Outputs

dp_schedule_daily_modify

Modify a single snapshot within a daily schedule. Sample schedules cannot be modified.

Inputs

Outputs

dp_schedule_dependency

Return a list of other DP policies and DP schedules using the specified DP schedule.

Inputs

Outputs

dp_schedule_destroy

Delete a schedule with the given name or ID. A schedule that is used by other schedules cannot be deleted; an error is returned. Sample schedules cannot be destroyed.

Inputs

Outputs

dp_schedule_hourly_add

Create an hourly schedule within a daily schedule. An hourly schedule specifies the frequency of snapshots to be run within the start time and end time.

Inputs

Outputs

dp_schedule_hourly_delete

Delete an hourly schedule within a daily schedule.

Inputs

Outputs

dp_schedule_hourly_modify

Modify an hourly schedule within a daily schedule. Sample schedules cannot be modified.

Inputs

Outputs

dp_schedule_list

List all existing schedule IDs and types.

Inputs

Outputs

dp_schedule_list_info_iter_end

Tell the DFM station that the temporary store associated with the specified tag is no longer necessary.

Inputs

Outputs

dp_schedule_list_info_iter_next

Iterate over the list of schedules stored in the temporary store. The DFM station internally maintains a cursor pointing to the last record returned. Subsequent calls to this API return the records after the cursor, up to the specified "maximum" or the number of records remaining, whichever is fewer.

Inputs

Outputs

dp_schedule_list_info_iter_start

The dp-schedule-list-info-iter-* set of APIs are used to retrieve a list of schedule contents.

Inputs

Outputs

dp_schedule_modify

Modify a schedule's details in the database. When the ZAPI is called, all details within the schedule are removed and replaced by the new details specified in schedule-content. Sample schedules cannot be modified. schedule-id and schedule-type cannot be modified.

Inputs

Outputs

dp_schedule_monthly_add

Specify a single snapshot within a monthly schedule. Either day-of-month, or both week-of-month and day-of-week must be specified.

Inputs

Outputs

dp_schedule_monthly_delete

Delete a single snapshot within a monthly schedule.

Inputs

Outputs

dp_schedule_monthly_modify

Modify a single snapshot within a monthly schedule. Sample schedules cannot be modified.

Inputs

Outputs

dp_schedule_monthly_subschedule_set

Specify a sub-schedule to be used by a monthly schedule. On top of the individual monthly events, a monthly schedule may have only one daily subschedule or one weekly subschedule. If this monthly schedule already has a daily or weekly subschedule, this command replaces the old one.

Inputs

Outputs

dp_schedule_monthly_subschedule_unset

Unset a sub-schedule used by a monthly schedule.

Inputs

Outputs

dp_schedule_rename

Rename a schedule. Sample schedules cannot be renamed.

Inputs

Outputs

dp_schedule_weekly_add

Specify a single snapshot within a weekly schedule.

Inputs

Outputs

dp_schedule_weekly_delete

Delete a single snapshot within a weekly schedule.

Inputs

Outputs

dp_schedule_weekly_modify

Modify a single snapshot within a weekly schedule. Sample schedules cannot be modified.

Inputs

Outputs

dp_schedule_weekly_subschedule_add

Specify which daily schedule will be used on a certain range of days within a weekly schedule.

Inputs

Outputs

dp_schedule_weekly_subschedule_delete

Specify which daily schedule is to be deleted within a weekly schedule.

Inputs

Outputs

dp_schedule_weekly_subschedule_modify

Specify which daily schedule will be used on a certain range of days within a weekly schedule. Permanent or sample schedules cannot be modified.

Inputs

Outputs

dp_throttle_create

Create a throttle schedule.

Inputs

Outputs

dp_throttle_dependency

Return a list of DP policies using the specified DP throttle.

Inputs

Outputs

dp_throttle_destroy

Delete a throttle item. Sample throttle schedules cannot be destroyed.

Inputs

Outputs

dp_throttle_item_add

Add a new throttle item to the throttle schedule.

Inputs

Outputs

dp_throttle_item_delete

Delete a throttle item from a throttle schedule.

Inputs

Outputs

dp_throttle_list_info_iter_end

Tell the DFM station that the temporary store associated with the specified tag is no longer necessary.

Inputs

Outputs

dp_throttle_list_info_iter_next

Iterate over the list of throttle items stored in the temporary store. The DFM station internally maintains a cursor pointing to the last record returned. Subsequent calls to this API return the records after the cursor, up to the specified "maximum" or the number of records remaining, whichever is fewer.

Inputs

Outputs

dp_throttle_list_info_iter_start

The dp-throttle-list-info-iter-* set of APIs are used to retrieve a list of throttle items.

Inputs

Outputs

dp_throttle_modify

Update a throttle item. When the ZAPI is called, all details within the throttle schedule are removed and replaced by the new details specified in throttle-content. Sample throttle schedules cannot be modified.

Inputs

Outputs

dp_throttle_rename

Rename a throttle schedule. Sample throttle schedules cannot be renamed.

Inputs

Outputs

event_acknowledge

Acknowledge events. This terminates repeated notifications due to that event.

Inputs

Outputs

event_delete

Delete events. Terminates repeated notifications due to the event.

Inputs

Outputs

event_generate

The event-generate API helps clients generate events in the DFM system.

Inputs

Outputs

event_list_iter_end

event-list-iter-end is used to tell the DFM station that the temporary store used by DFM to support the event-list-iter-next API for the particular tag is no longer necessary.

Inputs

Outputs

event_list_iter_next

The event-list-iter-next API is used to iterate over the list of events stored in the temporary store created by the event-list-iter-start API. The DFM station, internally, maintains a cursor pointing to the last record returned. Subsequent calls to this API return the records after the cursor, up to the specified "maximum" or the number of records remaining.

Inputs

Outputs

event_list_iter_start

List events. The event-list-iter-* set of APIs are used to retrieve the list of events.

The event-list-iter-start API is used to load the list of events into a temporary store. The API returns a tag that identifies the temporary store so that subsequent APIs can be used to iterate over the list in the temporary store.

Note that, depending on the input parameters, this API may take up to "timeout" seconds to return. Subsequent calls to event-list-iter-next() will return immediately.

Inputs

Outputs
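
The start/next/end flow described above can be sketched as follows; `$s` is an NaServer handle configured as in the SYNOPSIS. The 'timeout', 'tag', and 'maximum' names come from the description above, but the output field names ('tag', 'records') and per-event structure are assumptions based on common DFM conventions.

```perl
# Sketch: list events in batches of 100.
eval {
    my $start = $s->event_list_iter_start('timeout', 60);  # may block up to 60s
    my $tag   = $start->{tag};
    while (1) {
        my $chunk = $s->event_list_iter_next('tag', $tag, 'maximum', 100);
        last unless $chunk->{records};   # nothing left after the cursor
        # ... inspect the returned event records here ...
    }
    $s->event_list_iter_end('tag', $tag);  # discard the temporary store
};
print "Error: $@\n" if $@;
```

Only the initial event-list-iter-start call may take up to "timeout" seconds; each event-list-iter-next call returns immediately.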

event_status_change_list_iter_end

event-status-change-list-iter-end is used to tell the DFM station that the temporary store used by DFM to support the event-status-change-list-iter-next API for the particular tag is no longer necessary.

Inputs

Outputs

event_status_change_list_iter_next

The event-status-change-list-iter-next API is used to iterate over the list of events stored in the temporary store created by the event-status-change-list-iter-start API. The DFM station, internally, maintains a cursor pointing to the last record returned. Subsequent calls to this API return the records after the cursor, up to the specified "maximum" or the number of records remaining.

Inputs

Outputs

event_status_change_list_iter_start

List events that had status changes (acknowledged or deleted) within the specified time range.

The event-status-change-list-iter-* set of APIs are used to retrieve the list of events that had status changes.

The event-status-change-list-iter-start API is used to load the list of events into a temporary store. The API returns a tag that identifies the temporary store so that subsequent APIs can be used to iterate over the list in the temporary store.

The returned list of events is sorted by when their status changed (the eventAcked timestamp or the eventDeleted timestamp). An event that is both acknowledged and deleted within the requested time frame appears twice in the returned list, because those count as two status changes; it is listed once by its acknowledged timestamp and once by its deleted timestamp.

Note that, depending on the input parameters, this API may take up to "timeout" seconds to return. Subsequent calls to event-status-change-list-iter-next() will return immediately.

Inputs

Outputs

eventclass_add_custom

Supports adding custom event classes.

Inputs

Outputs

eventclass_delete_custom

Supports deletion of custom event classes.

Inputs

Outputs

eventclass_list

Lists all or a sub-set of the custom event classes.

Inputs

Outputs

eventclass_list_iter_end

The eventclass-list-iter-* set of APIs are used to retrieve the list of event classes. eventclass-list-iter-end is used to tell the DFM station that the temporary store used by DFM to support the eventclass-list-iter-next API for the particular tag is no longer necessary.

Inputs

Outputs

eventclass_list_iter_next

For more documentation please check eventclass-list-iter-start. The eventclass-list-iter-next API is used to iterate over the members of the event classes stored in the temporary store created by the eventclass-list-iter-start API.

Inputs

Outputs

eventclass_list_iter_start

The eventclass-list-iter-* set of APIs are used to retrieve the list of event classes in DFM. The eventclass-list-iter-start API is used to load the list of event classes into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the event classes in the temporary store. If eventclass-list-iter-start is invoked twice, then two distinct temporary stores are created.

Inputs

Outputs

fcp_target_list_info_iter_end

Ends iteration of targets.

Inputs

Outputs

fcp_target_list_info_iter_next

Get the next set of records in the iteration started by a call to fcp-target-list-info-iter-start. This ZAPI fetches the FCP target info records. The input parameter 'maximum' specifies the number of records returned at a time.

Inputs

Outputs

fcp_target_list_info_iter_start

Start iteration of targets. Depending on the input, it returns a tag and the number of records to be retrieved.

Inputs

Outputs

graph_data_list_info

Retrieve data of all the graph lines of a graph.

Inputs

Outputs

graph_list_info_iter_end

Terminate a graph list iteration and clean up any saved info.

Inputs

Outputs

graph_list_info_iter_next

Returns items from a previous call to graph-list-info-iter-start.

Inputs

Outputs

graph_list_info_iter_start

Initiates a query for a list of graphs and their metadata, such as graph lines and sample information.

Inputs

Outputs

group_add_member

Add a member to a group.
Error conditions:

Inputs

Outputs

group_copy

Copy the group and all of its subgroups under a new parent group.
Error conditions:

Inputs

Outputs

group_create

Create a new group.
Error conditions:

Inputs

Outputs

group_delete_member

Remove one member from a group.
Error conditions:

Inputs

Outputs

group_destroy

Destroy an existing group. Child groups are destroyed recursively. If there is any error, then no groups are destroyed.
Error conditions:

Inputs

Outputs

group_get_options

Get the options for a group. Option values that are not present indicate that the option is using the global default.
Error conditions:

Inputs

Outputs

group_get_status

Get the status of the group.
Error conditions:

Inputs

Outputs

group_is_member_of

Checks if the object associated with the object-id input is a member of the group. This includes both direct and indirect membership. If a group id of 0 is passed, this will always return true as long as the object is a valid object.
Error conditions:

Inputs

Outputs

group_list_iter_end

The group-list-iter-* set of APIs are used to retrieve the members of the DFM global group. group-list-iter-end is used to tell the DFM station that the temporary store used by DFM to support the group-list-iter-next API for the particular tag is no longer necessary.
Error conditions:

Inputs

Outputs

group_list_iter_next

For more documentation please check group-list-iter-start. The group-list-iter-next API is used to iterate over the members of the group stored in the temporary store created by the group-list-iter-start API.
Error conditions:

Inputs

Outputs

group_list_iter_start

The group-list-iter-* set of APIs are used to retrieve the list of DFM groups. By default, a group is listed if the user has DFM.Database.Read capability on the group or if the user has DFM.Database.Read capability on any subgroup of the group.
If rbac-operation is present in the input, then a group is listed if the authenticated user has the requested capability on that group or on any of its sub-groups. In that case, the has-privilege output will be false for the parent group and true for the sub-group.
The group-list-iter-start API is used to load the list of groups into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the groups in the temporary store.
If group-list-iter-start is invoked twice, then two distinct temporary stores are created.
Error conditions:

Inputs

Outputs
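
A minimal sketch of the group listing flow described above; `$s` is an NaServer handle as in the SYNOPSIS. The 'rbac-operation' input and 'has-privilege' output are named in the description, but the 'tag'/'records' output names and the per-group record layout are assumptions.

```perl
# Sketch: list groups on which the user holds DFM.Database.Read.
eval {
    my $start = $s->group_list_iter_start('rbac-operation',
                                          'DFM.Database.Read');
    my $tag   = $start->{tag};
    # Fetch everything in one batch; a loop with smaller 'maximum'
    # values works the same way.
    my $chunk = $s->group_list_iter_next('tag', $tag,
                                         'maximum', $start->{records});
    # ... each returned record is expected to carry the group name and
    #     a has-privilege flag; inspect them here ...
    $s->group_list_iter_end('tag', $tag);  # release the temporary store
};
print "Error: $@\n" if $@;
```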

group_move

Move the group under a new parent group. The id of the group does not change after the move.
Error conditions:

Inputs

Outputs

group_rename

Change the name of a group.
Error conditions:

Inputs

Outputs

group_set_options

Change one or more options for a group. Only options that are specified will be updated. The remaining options will retain their current values. If there is any error, then no options are changed.
Error conditions:

Inputs

Outputs

group_member_list_iter_end

See group-member-list-iter-start for more information. Frees up the resources used by a previous call to group-member-list-iter-start.

Inputs

Outputs

group_member_list_iter_next

See group-member-list-iter-start for more information. The group-member-list-iter-next API is used to iterate over the members of the group stored in the temporary store created by the group-member-list-iter-start API. The DFM station, internally, maintains a cursor pointing to the last record returned. Subsequent calls to this API return the records after the cursor, up to the specified "maximum" or the number of records remaining.

Inputs

Outputs

group_member_list_iter_start

DFM has an object known as a "group" that contains other DFM objects; groups may also have subgroups. The group-member-list-* APIs are used to retrieve the members of particular groups, either all members or members of a particular type. These APIs do not return the subgroups; use group-list-iter-start to get a list of subgroups.

The group-member-list-iter-start API is used to load the group members into a temporary store. Subsequent group-member-list-iter-next invocations iterate over the records in the temporary store, and the group-member-list-iter-end API releases the temporary store. If group-member-list-iter-start is invoked twice, the DFM station creates two different temporary stores that can be accessed using the different tags.

When this API is invoked without specifying any groups, the type parameter must be specified; in that case, the API lists all the objects of the specified type that are known to DFM, whether or not they are members of any group. If groups are specified, the type parameter is optional, and the API lists all the objects (optionally, of the specified type) that have been directly added to the specified groups.

Inputs

Outputs
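
The description above can be sketched as a member listing for one group; `$s` is an NaServer handle as in the SYNOPSIS. The 'type' input is named above, but the 'volume' value, the group selector name, and the 'tag'/'records' output names are illustrative assumptions.

```perl
# Sketch: list all volumes directly added to a given group.
eval {
    my $start = $s->group_member_list_iter_start(
        'group-name-or-id', 'MyGroup',   # hypothetical group selector
        'type',             'volume');   # member type to return
    my $tag   = $start->{tag};
    my $chunk = $s->group_member_list_iter_next('tag', $tag,
                                                'maximum', $start->{records});
    # ... walk the returned member records here ...
    $s->group_member_list_iter_end('tag', $tag);  # release the store
};
print "Error: $@\n" if $@;
```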

host_add

Add a new managed host to the DataFabric Manager. The host being added must be a storage system or a host agent; DFM determines the type of host being added. If it is a storage system, NetCache appliance, or FC switch, it is added to the database and appliance-id is set. If it is a host agent, the agent is added to the database and agent-id is set. On return, only one of appliance-id or agent-id will be set.

Inputs

Outputs
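
Because exactly one of appliance-id or agent-id is set on return, a caller can branch on which is defined. A sketch, assuming `$s` is an NaServer handle as in the SYNOPSIS; the 'host-name-or-ip' input name and the example hostname are hypothetical.

```perl
# Sketch: add a host and report what DFM decided it was.
eval {
    my $out = $s->host_add('host-name-or-ip', 'filer1.example.com');
    if (defined $out->{'appliance-id'}) {
        print "Added storage system, appliance-id: $out->{'appliance-id'}\n";
    } else {
        print "Added host agent, agent-id: $out->{'agent-id'}\n";
    }
};
print "Error: $@\n" if $@;
```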

host_add_license

Add a license to a host. The host must be a storage system, must already be present in DFM's database, and the root login and password for the host must be set in DFM. DFM will check the list of licenses on the host and update the database when the following types of licenses are changed. If the license is already in use by another host and is not a site license, the ZAPI will apply the license to the host and then return with error code ELICENSEINUSE. The ELICENSEINUSE error is not returned if the optional parameter suppress-inuse-error is true, and it does not prevent the license from being applied to the host, since it is not the role of DFM to prevent the user from installing duplicate licenses. The storage system must be running a minimum ONTAP version of 6.5.6.

Inputs

Outputs
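
The suppress-inuse-error behavior described above can be sketched as follows; `$s` is an NaServer handle as in the SYNOPSIS. The 'host' and 'license-code' input names and the example values are assumptions; only suppress-inuse-error is named in the description.

```perl
# Sketch: apply a license and tolerate duplicate-license warnings.
eval {
    $s->host_add_license('host',                'filer1.example.com',
                         'license-code',        'ABCDEFG',   # placeholder code
                         'suppress-inuse-error', 'true');    # no ELICENSEINUSE
};
if ($@) {
    my ($reason, $code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "License not applied: $reason ($code)\n";
}
```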

host_add_ossv

Add a new managed OSSV host to the DataFabric Manager. If there is an OSSV agent running on the host, the agent is added to the database as a SnapVault primary and ossv-id is set.

Inputs

Outputs

host_agent_ossv_service_start

Start the OSSV service on the host agent using the ossv ZAPI. DFM must have valid credentials for the Host Agent. The Host Agent and OSSV Agent must be installed on the host. Valid only for Host Agents. DFM will wait up to the time allowed in the timeout argument to make sure the requested service state was reached. If the timeout is exceeded, we return ESERVICESTATEUNKNOWN.

Inputs

Outputs

host_agent_ossv_service_stop

Stop the OSSV service on the host agent using the ossv ZAPI. DFM must have valid credentials for the Host Agent. The Host Agent and OSSV Agent must be installed on the host. Valid only for Host Agents. DFM will wait up to the time allowed in the timeout argument to make sure the requested service state was reached. If the timeout is exceeded, we return ESERVICESTATEUNKNOWN.

Inputs

Outputs

host_capability_list_iter_end

Terminates a host capability list iteration and cleans up any saved info.

Inputs

Outputs

host_capability_list_iter_next

Returns items from a previous call to host-capability-list-iter-start.

Inputs

Outputs

host_capability_list_iter_start

Initiates a query for a list of allowed capabilities on host. This is applicable for hosts running ONTAP versions 7.0 and above.

Inputs

Outputs

host_create_ndmpuser

Create an NDMP user on the host, creating the user account if necessary and storing the host-encrypted password in the DFM database. If the user already exists, the encrypted NDMP password is generated on the storage system and stored in the database. If it is a new user on the storage system, a new unencrypted password is created for the caller and used to generate the encrypted NDMP password, which is then stored in the database. If the user is root, root's unencrypted password is used as the NDMP password, since the encryption requirement does not apply to the root user. New non-root users are added to the "Backup Administrators" group. Valid only for storage systems.

Inputs

Outputs

host_delete

Deletes a host. A host cannot be deleted in the following cases:
A successful deletion of a host will have the following impacts:

Inputs

Outputs

host_domainuser_add

Adds a domain user on the host. This is applicable for hosts running ONTAP versions 7.0 and above.

Inputs

Outputs

host_domainuser_list_iter_end

Terminates a domain user list iteration and cleans up any saved info.

Inputs

Outputs

host_domainuser_list_iter_next

Returns items from a previous call to host-domainuser-list-iter-start.

Inputs

Outputs

host_domainuser_list_iter_start

Initiates a query for a list of domain users on host(s). Domain users on host(s) that match all filters will be returned. If no input is specified, all the domain users on all monitored storage systems or vFiler units will be returned.

Inputs

Outputs

host_domainuser_push

Pushes a domain user to a host. This is applicable for hosts running ONTAP versions 7.0 and above. The operation succeeds when the host on which the domain user is to be pushed contains usergroups similar to that of the domain user. Two usergroups are similar if they have same name and set of similar roles (roles with same name and same set of capabilities).

Inputs

Outputs

host_domainuser_remove

Removes a domain user from a usergroup or usergroups.

Inputs

Outputs

host_get_defaults

The DFM stores a set of global default values for selected attributes, which are used on all hosts. The administrator can override the values on a per-host basis. This API returns the default values for some attributes returned by host-list-info-iter-next. Default values vary according to the host type.

Inputs

Outputs

host_list_info_iter_end

Ends iteration to list hosts.

Inputs

Outputs

host_list_info_iter_next

Get the next set of records in the iteration started by host-list-info-iter-start.

Inputs

Outputs

host_list_info_iter_start

Starts iteration to list hosts. The list of hosts can include: Use the filtering criteria in this API to specify the list of hosts returned by host-list-info-iter-next. If no filtering criteria are specified, all non-deleted hosts will be returned by host-list-info-iter-next.

Inputs

Outputs

host_modify

Modify attributes stored in the DFM database of a host managed by the DFM.

Inputs

Outputs

host_modify_agent_credentials

Change the password on the Host Agent for the built-in Host Agent management user "admin", using the Operating System credentials specified by os-username and os-password to authenticate the HTTPS POST request. If the operation succeeds, the Host Agent password stored in the DFM database is updated. Valid only for Host Agents.

Inputs

Outputs

host_role_create

Creates a role on the host. This is applicable for hosts running ONTAP versions 7.0 and above.

Inputs

Outputs

host_role_delete

Deletes a role on the host.

Inputs

Outputs

host_role_list_iter_end

Terminates a role list iteration and cleans up any saved info.

Inputs

Outputs

host_role_list_iter_next

Returns items from a previous call to host-role-list-iter-start.

Inputs

Outputs

host_role_list_iter_start

Initiates a query for a list of roles on host(s). Roles on host(s) that match all filters will be returned. If no input is specified, all the roles on all monitored storage systems or vFiler units will be returned.

Inputs

Outputs

host_role_modify

Modifies a role on the host.

Inputs

Outputs

host_role_push

Pushes a role to a host. This is applicable for hosts running ONTAP versions 7.0 and above. The operation succeeds when the host on which the role is to be pushed supports all the capabilities present in the role.

Inputs

Outputs

host_set_option

Change the option on the storage system specified by host-option-name to the value specified by host-option-value. If the operation succeeds the following options will be stored in the DFM database and will be returned in the specified elements the next time host-list-info-iter-next is called.
     host-option-name      host-list-info-iter-next element
     ndmpd.enable          is-ndmp-enabled
     ndmpd.access          ndmp-access-specifier
     snapvault.enable      is-snapvault-enabled
     snapvault.access      snapvault-access-specifier
     snapmirror.enable     is-snapmirror-enabled
     snapmirror.access     snapmirror-access-specifier
If the name of the host option ends in ".access" and the value of the host option is the empty string, the value will be changed to "none". See na_options(1) for a list of option names and values. See na_protocolaccess(8) for access specifier syntax and usage. Valid only for storage systems.

Inputs

Outputs
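
The option/element mapping above can be exercised as follows; `$s` is an NaServer handle as in the SYNOPSIS. The host-option-name and host-option-value inputs are named in the description, but the 'host' selector name and example hostname are assumptions.

```perl
# Sketch: enable NDMP on one storage system.
eval {
    $s->host_set_option('host',              'filer1.example.com',
                        'host-option-name',  'ndmpd.enable',
                        'host-option-value', 'on');
    # On success, is-ndmp-enabled is reported the next time
    # host-list-info-iter-next is called for this host.
};
print "Error: $@\n" if $@;
```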

host_user_add

Creates a local user on the host. This is applicable for hosts running ONTAP versions 7.0 and above.

Inputs

Outputs

host_user_delete

Deletes a local user on the host.

Inputs

Outputs

host_user_list_iter_end

Terminates a user list iteration and cleans up any saved info.

Inputs

Outputs

host_user_list_iter_next

Returns items from a previous call to host-user-list-iter-start.

Inputs

Outputs

host_user_list_iter_start

Initiates a query for a list of local users on host(s). Local users on host(s) that match all filters will be returned. If no input is specified, all the local users on all monitored storage systems or vFiler units will be returned.

Inputs

Outputs

host_user_modify

Modifies local user on the host.

Inputs

Outputs

host_user_modify_password

Modifies password of a local user on the host.

Inputs

Outputs

host_user_push

Pushes a local user to a host. This is applicable for hosts running ONTAP versions 7.0 and above. The operation succeeds when the host on which the local user is to be pushed contains usergroups similar to that of the local user. Two usergroups are similar if they have same name and set of similar roles (roles with same name and same set of capabilities).

Inputs

Outputs

host_usergroup_create

Creates a usergroup on the host. This is applicable for hosts running ONTAP versions 7.0 and above.

Inputs

Outputs

host_usergroup_delete

Deletes a usergroup on the host.

Inputs

Outputs

host_usergroup_list_iter_end

Terminates a usergroup list iteration and cleans up any saved info.

Inputs

Outputs

host_usergroup_list_iter_next

Returns items from a previous call to host-usergroup-list-iter-start.

Inputs

Outputs

host_usergroup_list_iter_start

Initiates a query for a list of usergroups on host(s). Usergroups on host(s) that match all filters will be returned. If no input is specified, all the usergroups on all monitored storage systems or vFiler units will be returned.

Inputs

Outputs

host_usergroup_modify

Modifies a usergroup on the host.

Inputs

Outputs

host_usergroup_push

Pushes a usergroup to a host. This is applicable for hosts running ONTAP versions 7.0 and above. The operation succeeds when the host on which the usergroup is to be pushed contains roles similar to that of the usergroup. Two roles are similar if they have the same name and same set of capabilities.

Inputs

Outputs

host_service_authorize

Authorize a newly registered Host Service. Authorizing a Host Service allows it to start communicating with the DataFabric Manager server; until then, the Host Service is not operational.

On successful authorization a baseline discovery job is initiated on the Host Service to discover the virtual server inventory information managed by Host Service.

This API requires DFM.HostService.Authorize capability on the Host Service or at global level.

Inputs

Outputs

host_service_configure

Configure options in Host Service registry. Options include storage system and VCenter server credentials and DataFabric Manager server endpoints for communication.

Inputs

Outputs

host_service_discover

Start a discovery job on the Host Service. This will refresh the virtual inventory information cached in the DataFabric Manager database.

Inputs

Outputs

host_service_list_info_iter_end

Ends iteration to list registered Host Services.

Inputs

Outputs

host_service_list_info_iter_next

Get the specified Host Service records after the call to host-service-list-info-iter-start.

Inputs

Outputs

host_service_list_info_iter_start

Start iteration to list Host Services registered with DataFabric Manager.

Inputs

Outputs

host_service_modify

Set host service specific options in DataFabric Manager database.

Inputs

Outputs

host_service_package_delete

Delete a host service package from DataFabric Manager.

Inputs

Outputs

host_service_package_list

List host service packages in DataFabric Manager.

Inputs

Outputs

host_service_register

Register a new Host Service with the DataFabric Manager Server. Newly deployed Host Services need to be registered and authorized in the DataFabric Manager Server; this enables management of virtual server infrastructure hosted on Data ONTAP systems using service catalogs. The registration process retrieves the SSL certificate presented by the Host Service and sets the Host Service status to the "Pending Authorization" state.

After confirming the validity of the certificate, the host-service-authorize API must be invoked to make the Host Service operational.

Inputs

Outputs
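
The register-then-authorize flow described above can be sketched as follows; `$s` is an NaServer handle as in the SYNOPSIS. The 'host-service-address' input and 'host-service-id' output names are assumptions.

```perl
# Sketch: register a Host Service, then authorize it once its
# certificate has been verified.
eval {
    my $reg   = $s->host_service_register('host-service-address',
                                          'hs1.example.com');
    my $hs_id = $reg->{'host-service-id'};   # now in "Pending Authorization"
    # ... verify the returned SSL certificate out of band ...
    $s->host_service_authorize('host-service-id', $hs_id);
    # Authorization also kicks off a baseline discovery job.
};
print "Error: $@\n" if $@;
```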

host_service_storage_system_list_info_iter_end

Ends iteration to list storage systems for a host service agent.

Inputs

Outputs

host_service_storage_system_list_info_iter_next

Get list of storage systems after the call to host-service-storage-system-list-info-iter-start.

Inputs

Outputs

host_service_storage_system_list_info_iter_start

List storage systems configured in the Host Service registry.

Inputs

Outputs

host_service_unregister

Unregister a Host Service from the DataFabric Manager Server. All the Virtual Server inventory objects will be marked as deleted once the Host Service is unregistered. The API will fail (with EHS_UNREGISTER_ERROR) if a Virtual Machine or Datastore (in the VMware case) managed by the host service is a member of a dataset.

Inputs

Outputs

ifc_list_info_iter_end

Ends iteration of interfaces.

Inputs

Outputs

ifc_list_info_iter_next

Get the next set of records in the iteration started by a call to ifc-list-info-iter-start.

Inputs

Outputs

ifc_list_info_iter_start

Start iteration of interfaces.

Inputs

Outputs

ldap_server_add

Add LDAP server to DFM.

Inputs

Outputs

ldap_server_delete

Delete LDAP server from DFM.

Inputs

Outputs

ldap_server_list_info

Returns list of LDAP servers known to DFM.

Inputs

Outputs

lun_destroy

API to destroy a LUN.

Inputs

Outputs

lun_list_info_iter_end

The lun-list-info-iter-* set of APIs are used to retrieve the list of LUNs. lun-list-info-iter-end is used to tell the DFM station that the temporary store used by DFM to support the lun-list-info-iter-next API for the particular tag is no longer necessary.

Inputs

Outputs

lun_list_info_iter_next

For more documentation, please check lun-list-info-iter-start. The lun-list-info-iter-next API is used to iterate over the LUNs stored in the temporary store created by the lun-list-info-iter-start API.

Inputs

Outputs

lun_list_info_iter_start

The lun-list-info-iter-* set of APIs are used to retrieve the list of LUNs in DFM. lun-list-info-iter-start returns the union of the LUN objects specified, intersected with rbac-operation. It loads the list of LUNs into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the LUNs in the temporary store. If lun-list-info-iter-start is invoked twice, then two distinct temporary stores are created.

Inputs

Outputs

lun_resize

API to resize a LUN.

Inputs

Outputs

migrate_cancel

Cancel the migration operation for the specified vFiler unit or dataset. Error conditions:

Inputs

Outputs

migrate_change_state

Change the state of a migration operation initiated for a vFiler unit or dataset. Only the following state changes are allowed through this API:

Inputs

Outputs

migrate_cleanup

Delete the stale storage associated with a migration operation from the source storage system. This will destroy all the volumes of the source vFiler unit after successful migration of the vFiler unit. Error conditions:

Inputs

Outputs

migrate_complete

Complete the migration operation by doing a cutover from the source vFiler unit to the destination vFiler unit. As part of the cutover operation, the following will be done: a script, if specified, will be run in pre mode; the actual cutover will be carried out so that the source vFiler unit is destroyed and data is served from the destination vFiler unit; the script, if specified, will be run in post mode after a successful cutover; for all volumes of the vFiler unit, the protection relationships will be migrated to the new destination; and all the backup versions will be modified so that they point to the newly created destination volumes.

The migration status is changed to 'migrated' after a successful completion of migration. If migrate-complete fails to cut over to the destination storage, the migration status is changed to 'migrate-failed'. If the cutover to the destination storage succeeds during completion, but some of the subsequent steps, such as migrating the protection relationships or copying the history data, fail, then the status is changed to 'migrated-with-errors'. Error conditions:

Inputs

Outputs

migrate_fix

Fix the migration status of a dataset. This API should be called if a migration job was aborted abnormally, for example due to a server shutdown or machine reboot. It fixes the database entries and migration status of the dataset. It can be called only if the last migration job run on the dataset terminated abnormally. Error conditions:

Inputs

Outputs

migrate_rollback

Roll back a previously migrated vFiler unit from the destination vFiler unit to its original source vFiler unit. As part of the rollback operation, the following will be done: a script, if specified, will be run in pre mode; the rollback will be carried out so that the destination vFiler unit is destroyed and data is served from the source vFiler unit; the script, if specified, will be run in post mode after a successful cutover; for all volumes of the vFiler unit, the protection relationships will be migrated to the new destination; and all the backup versions will be modified so that they point to the newly created source volumes.

The migration status is changed to 'rolled_back' after a successful rollback. If migrate-rollback fails to cut over to the source storage, the migration status will be changed to 'migrated'. If the cutover to the source storage succeeds during rollback, but some of the subsequent steps, such as migrating the protection relationships or copying the history data, fail, then the status is changed to 'rolled_back_with_errors'. Error conditions:

Inputs

Outputs

migrate_start

Start the migration operation for a dataset or a vFiler unit. This initiates a baseline transfer for all the volumes in the source vFiler unit and changes the migration status to 'migrating'. When a dataset is given as input, the following conditions must hold: If any of these conditions are not met, then EMIGRATENOTSUPPORTED is returned. If perform-cutover is set to true, then migrate-complete will be done after migrate-start. Error conditions:

Inputs

Outputs
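
A sketch of starting a migration with an immediate cutover; `$s` is an NaServer handle as in the SYNOPSIS. The perform-cutover input is named in the description above, but the dataset selector name and example value are hypothetical.

```perl
# Sketch: start migrating a dataset and cut over as soon as the
# baseline transfer completes (migrate-complete runs automatically).
eval {
    $s->migrate_start('dataset-name-or-id', 'ds_finance',  # placeholder name
                      'perform-cutover',    'true');
};
if ($@) {
    print "Migration not started: $@\n";  # e.g. EMIGRATENOTSUPPORTED
}
```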

migrate_update

Update all the SnapMirror relationships of a vFiler unit for which migration operation has been initiated. Error conditions:

Inputs

Outputs

migrate_volume

Migrate one or more volumes from one aggregate to another on the same or a different storage system. Currently, this API works only for secondary volumes, i.e., destinations of Volume SnapMirror or Qtree SnapMirror relationships, or volumes that are SnapVault secondaries. In addition, the following rules must be satisfied to migrate volumes using this API: If the destination aggregate is not specified, then the system will automatically select an aggregate based on the type of the volume, space requirements, and the provisioning and protection policy configuration associated with the dataset.

Inputs

Outputs

netif_ip_interface_list_info

Get the list of interfaces on a storage system.

Inputs

Outputs

dfm_network_add

Adds a network to DFM for discovery.

Inputs

Outputs

dfm_network_delete

Delete a network from DFM so that discovery is disabled for that network. Either network-id or network-address must be provided. If both are specified, then an error (EINVALIDINPUT) will be thrown.

Inputs

Outputs

dfm_network_list_info

Returns list of networks added to DFM for discovery.

Inputs

Outputs

dfm_network_modify

Modify the prefix length of a network in DFM, so that host discovery in this network is based on the new subnet mask derived from the new prefix length value.

Inputs

Outputs

perf_assoc_view_list_iter_end

Terminate a view list iteration and clean up any saved info from a previous call to perf-assoc-view-list-iter-start.

Inputs

Outputs

perf_assoc_view_list_iter_next

Returns objects from a previous call to perf-assoc-view-list-iter-start.

Inputs

Outputs

perf_assoc_view_list_iter_start

Initiates a query for a list of performance views.

Inputs

Outputs

perf_client_stats_list_info_iter_end

Ends an iteration started by perf-client-stats-list-info-iter-start.

Inputs

Outputs

perf_client_stats_list_info_iter_next

Returns the per-client statistics loaded in a previous call to perf-client-stats-list-info-iter-start.

Inputs

Outputs

perf_client_stats_list_info_iter_start

Iterates over the historical data of stored client statistics for storage systems. If no input filters are specified, all the available collections of statistics will be returned.

Inputs

Outputs

perf_client_stats_purge

Remove collected client stats from the database. If no input is specified, all collections for hosts on which the user can perform the DFM.Database.Write operation will be purged.

Inputs

Outputs

perf_collect_client_operation_statistics

Collect per-client NFS and CIFS operation statistics for a storage system. The collection of such statistics will be enabled on all vFiler units on the storage system for a short period of time, and the collected values will then be summarized and returned for the storage system as a whole. This ZAPI is synchronous and will not return an error even if there are errors collecting per-client statistics from some or all vFiler units on the storage system.

Inputs

Outputs

perf_copy_counter_configuration

Propagates the source host's current data collection settings to the destination hosts. Even if the copy of the counter configuration fails for one host, the operation proceeds with the next available host in the destination list. Privilege required is DFM.Database.Write.

Inputs

Outputs

perf_counter_group_create

Creates a counter group. Global DFM.PerfView.Write is required to create a historical counter group; Global DFM.PerfView.RealTimeRead is required to create a real-time counter group.

Inputs

Outputs

perf_counter_group_destroy

Destroys a counter group. Global DFM.PerfView.Delete is required to destroy a historical counter group; Global DFM.PerfView.RealTimeRead is required to destroy a real-time counter group.

Inputs

Outputs

perf_counter_group_get_data

Retrieve a set of data for a set of data sources from a specific counter group. The data is extracted for the given time interval bounded by start-time and end-time. This API is suitable for extracting the data for a single line in a chart (graph). Privilege required is DFM.Database.Read. For viewing real-time data Global DFM.PerfView.RealTimeRead is also required.

Inputs

Outputs
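
As a sketch of how such a binding might be called from Perl, following the SYNOPSIS pattern. The server host, counter group name, and the input element names ('counter-group-name-or-id', 'start-time', 'end-time') are illustrative assumptions only; consult the API's Inputs section for the authoritative names.

```perl
use strict;
use warnings;
use NaServer;   # from <installation_folder>/lib/perl/NetApp

my $s = NaServer->new('dfm.example.com', 1, 0);   # hypothetical DFM host
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

eval {
    # Input element names below are assumptions for illustration only.
    my $output = $s->perf_counter_group_get_data(
        'counter-group-name-or-id', 'my_counter_group',
        'start-time', time() - 3600,    # last hour
        'end-time',   time());
    print "Counter group data retrieved\n";
};
if ($@) {
    my ($reason, $code) = $@ =~ /(.+)\s\((\d+)\)/;
    print "Error Reason: $reason  Code: $code\n";
}
```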

perf_counter_group_get_dynamic_data_sources

Retrieve a list of top-n data sources for a counter. Privilege required is read.

Inputs

Outputs

perf_counter_group_list_info

Retrieve one or more counter groups. Privilege required is read.

Inputs

Outputs

perf_counter_group_list_iter_end

Terminate a counter group list iteration and clean up any saved info. Privilege required is read.

Inputs

Outputs

perf_counter_group_list_iter_next

Returns items from a previous call to perf-counter-group-list-iter-start. Privilege required is read.

Inputs

Outputs

perf_counter_group_list_iter_start

Initiates a query for a list of performance counter group names. Privilege required is read.

Inputs

Outputs

perf_counter_group_modify

Modify an existing counter group. The user can modify the sample-interval, the sample-buffer, and the counter set of an existing counter group at the individual host level. Modifying the counter set of a counter group means that the user can selectively enable or disable counters according to data-collection requirements; if a counter is disabled, data will not be collected for it, which gives the flexibility of controlling load on a storage system. This ZAPI enables only the counters mentioned in the 'data-sources' part of the input parameter 'perf-counter-group'; the rest of the counters in the counter group are automatically disabled. The user can also disable all the counters of a counter group by setting the 'is-disable-all' flag in the input parameter 'perf-counter-group'. Enabling or disabling counters is not allowed for calculated stats counters in default counter groups. Privilege required is DFM.Database.Write.

Inputs

Outputs

perf_counter_group_start

Start data collection for one counter group. Privilege required is Global DFM.PerfView.Write for normal views and Global DFM.PerfView.RealTimeRead for real-time views.

Inputs

Outputs

perf_counter_group_stop

Stop data collection for one counter group. Privilege required is Global DFM.PerfView.Write for normal views and Global DFM.PerfView.RealTimeRead for real-time views.

Inputs

Outputs

perf_diag_troubleshoot

Troubleshoots the DFM object for performance-related issues and provides recommendations to help resolve them.

Inputs

Outputs

perf_disable_data_collection

Disables DFM performance advisor data collection.

Inputs

Outputs

perf_disable_object_update

Disables any modification to DFM performance advisor views, counter groups and object instances.

Inputs

Outputs

perf_enable_data_collection

Enables DFM performance advisor data collection.

Inputs

Outputs

perf_enable_object_update

Enables modifications to DFM performance advisor views, counter groups and object instances.

Inputs

Outputs

perf_get_counter_data

Retrieve data for all the related objects for the given object and the specified performance counters.

Inputs

Outputs

perf_get_counter_dependents

Lists the Operations Manager reports, Performance Advisor custom views, Performance Advisor threshold templates, and Performance Advisor thresholds that depend on the specified performance counters.

Inputs

Outputs

perf_get_counter_list_not_in_view

Given an array of performance objects and counters, the API will return a list of all counters that are not part of any view.

Inputs

Outputs

perf_get_default_view

Gets the default view for an object.

Inputs

Outputs

perf_get_server_status

Returns the status of DFM Performance Advisor.

Inputs

Outputs

perf_object_counter_list_info

Retrieve information about a performance object's counters. (No iterator-based equivalent exists for this API because the number of object counters is fewer than two hundred.) Privilege required is read.

Inputs

Outputs

perf_object_dependent_counter_list_info

Retrieve information about a performance object's dependent counters. Privilege required is read.

Inputs

Outputs

perf_object_instance_list_iter_end

Terminate an instance list iteration and clean up any saved info. Privilege required is read.

Inputs

Outputs

perf_object_instance_list_iter_next

Returns items from a previous call to perf-object-instance-list-iter-start. Privilege required is read.

Inputs

Outputs

perf_object_instance_list_iter_start

Initiates a query for a list of performance object instance names. Privilege required is read.

Inputs

Outputs

perf_object_list_info

Get a list of performance objects. (No iterator-based equivalent exists for this API because the number of performance objects is fewer than twenty.) Privilege required is read.

Inputs

Outputs

perf_set_default_view

Sets the default view for an object.

Inputs

Outputs

perf_status_get

Returns the DFM performance advisor status.

Inputs

Outputs

perf_threshold_create

Sets threshold values on one or more objects based on a performance counter. Privilege required is DFM.Database.Write. The threshold will be set only if the user has DFM.Database.Write permission over the specified object.

Inputs

Outputs

perf_threshold_create2

Creates one threshold composed of one or more counters, and optionally applies it to an object. The user must have the capability to perform the DFM.Database.Write operation on the object on which the threshold is applied.

Inputs

Outputs

perf_threshold_delete

Removes threshold set on a counter. The user must have the capability to perform the DFM.Database.Delete operation on the object on which the threshold is directly applied.

This ZAPI cannot be used to delete template thresholds, and perf-threshold-template-modify should be used for that purpose.

Inputs

Outputs

perf_threshold_list_info_iter_end

Terminate a counter-thresholds-list iteration and clean up any saved info. Privilege required is DFM.Database.Read. Only thresholds on objects over which the user has DFM.Database.Read permissions will be returned.

Inputs

Outputs

perf_threshold_list_info_iter_next

The perf-threshold-list-info-iter-* set of APIs is used to retrieve the list of all counters on which thresholds have been set. perf-threshold-list-info-iter-next iterates over the threshold-related counters in the temporary store created by perf-threshold-list-info-iter-start. Privilege required is DFM.Database.Read. Only thresholds on objects over which the user has DFM.Database.Read permissions will be returned.

Inputs

Outputs

perf_threshold_list_info_iter_start

The perf-threshold-list-info-iter-* set of APIs is used to retrieve the list of all counters on which thresholds have been set. perf-threshold-list-info-iter-start loads the list of counters on which thresholds have been set into a temporary store and returns a tag that identifies that store, so that subsequent APIs can be used to iterate over the threshold-related counters in it. Privilege required is DFM.Database.Read. Only thresholds on objects over which the user has DFM.Database.Read permissions will be returned.

Inputs

Outputs

perf_threshold_list_info2_iter_end

Terminate a perf-thresholds-list-info2 iteration and clean up any saved info.

Inputs

Outputs

perf_threshold_list_info2_iter_next

The perf-threshold-list-info2-iter-* list of APIs are used to retrieve the list of all objects on which thresholds have been set.

Inputs

Outputs

perf_threshold_list_info2_iter_start

The perf-threshold-list-info2-iter-* list of APIs are used to retrieve the list of objects on which thresholds have been set. It loads the list of thresholds into a temporary store. The API returns a tag that identifies the temporary store so that subsequent APIs can be used to iterate over the thresholds in it. The user must have the capability to perform the DFM.Database.Read operation on the object on which the threshold is applied, and only thresholds which are applied on such objects will be returned.

Inputs

Outputs

perf_threshold_modify

Allows modification of a threshold value and threshold interval that have been set before. Privilege required is DFM.Database.Write.

Inputs

Outputs

perf_threshold_modify2

Modify an existing threshold. This ZAPI should be used only for changing the parameters of a threshold, not for changing the objects it is applied on. The objects-info structure is ignored in this ZAPI. This ZAPI cannot be used to modify template thresholds; perf-threshold-template-modify should be used for that purpose. The user must have the capability to perform the DFM.Database.Write operation on the object on which the threshold is applied.

Inputs

Outputs

perf_threshold_template_attach_objects

Attach one or more objects to a performance template. Either all input objects get attached or none of them get attached. The specified objects will get associated to the applicable thresholds in the template. Objects cannot be attached if the template has no thresholds. Objects already attached to the template are left unchanged. The user must have the capability to perform the DFM.Database.Write operation on the object to which the template is to be attached.

Inputs

Outputs

perf_threshold_template_create

Creates a template for perf thresholds.

Inputs

Outputs

perf_threshold_template_delete

Deletes a template.

Inputs

Outputs

perf_threshold_template_detach_objects

Detaches one or more objects from a performance template. Either all or none of the input objects get detached. The user must have the capability to perform the DFM.Database.Write operation on the object which is to be detached from the template.

Inputs

Outputs

perf_threshold_template_list_info_iter_end

Terminate a perf-threshold-template-list iteration and clean up any saved information.

Inputs

Outputs

perf_threshold_template_list_info_iter_next

The perf-threshold-template-list-info-iter-* set of APIs is used to retrieve the list of all threshold templates that have been created. perf-threshold-template-list-info-iter-next returns templates from the temporary store created by perf-threshold-template-list-info-iter-start.

Inputs

Outputs

perf_threshold_template_list_info_iter_start

The perf-threshold-template-list-info-iter-* set of APIs is used to retrieve the list of all threshold templates that have been created. perf-threshold-template-list-info-iter-start loads the templates into a temporary store and returns a tag that identifies that store, so that subsequent APIs can be used to iterate over the templates in it.

Inputs

Outputs

perf_threshold_template_modify

The perf-threshold-template-modify ZAPI is used to modify an existing threshold template. It can be used to add, remove, or modify thresholds in the template, or to modify attributes of the template.

Inputs

Outputs

perf_view_associated_objects_list

List all the objects associated with the view.

Inputs

Outputs

perf_view_create

Create a performance view. A performance view consists of one or more charts, but each view refers to only a single counter group. The Global DFM.PerfView.Write RBAC capability is required to create normal views, while Global DFM.PerfView.RealTimeRead is required to create a real-time view.

Inputs

Outputs

perf_view_destroy

Destroy a performance view. Global DFM.PerfView.Delete is required for destroying normal views while Global DFM.PerfView.RealTimeRead is required for destroying real-time views.

Inputs

Outputs

perf_view_get_data

Retrieve data for a single data-source from a specific performance view. The data is extracted for the given time interval bounded by start-time and end-time.

Inputs

Outputs

perf_view_list_iter_end

Terminate a view list iteration and clean up any saved info. Privilege required is read.

Inputs

Outputs

perf_view_list_iter_next

Retrieve items from a previous call to perf-view-list-iter-start. Privilege required is read.

Inputs

Outputs

perf_view_list_iter_start

Initiates a query for a list of performance views. Privilege required is read.

Inputs

Outputs

perf_view_modify

Modify an existing performance view. Global DFM.PerfView.Write RBAC capability is required to modify performance views.

Inputs

Outputs

perf_view_object_association_add

Associates an object with a view.

Inputs

Outputs

perf_view_object_association_delete

Removes the association of an object with a view.

Inputs

Outputs

provisioning_policy_copy

Create a new provisioning policy by making a copy of an existing policy. The new policy created using this ZAPI has the same set of properties as the existing policy.
Error conditions:

Inputs

Outputs

provisioning_policy_create

This ZAPI creates a new provisioning policy. Error conditions:

Inputs

Outputs

provisioning_policy_destroy

Destroy a provisioning policy. This removes it from the database.

If the policy has been applied to any dataset nodes, then the destroy operation fails; it must first be disassociated from all the dataset nodes to which it has been associated and then destroyed. Error conditions:

Inputs

Outputs

provisioning_policy_edit_begin

Create an edit session and obtain an edit lock on a provisioning policy to begin modifying the policy.

An edit lock must be obtained before invoking provisioning-policy-modify.

Use provisioning-policy-edit-commit to end the edit session and commit the changes to the database.

Use provisioning-policy-edit-rollback to end the edit session and discard any changes made to the policy.

24 hours after an edit session on a policy begins, any subsequent call to provisioning-policy-edit-begin for that same policy automatically rolls back the existing edit session and begins a new edit session, just as if the call had used the force option. If there is no such call, the existing edit session simply continues and retains the edit lock.


Error conditions:

Inputs

Outputs
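
The begin/modify/commit lifecycle described above might look as follows in Perl. The policy name and the element names ('provisioning-policy-name-or-id', 'edit-lock-id') are assumptions for illustration; see each API's Inputs section for the actual names.

```perl
use strict;
use warnings;
use NaServer;   # from <installation_folder>/lib/perl/NetApp

my $s = NaServer->new('dfm.example.com', 1, 0);   # hypothetical DFM host
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

my $lock_id;
eval {
    # Obtain an edit lock before any modification.
    my $begin = $s->provisioning_policy_edit_begin(
        'provisioning-policy-name-or-id', 'gold_policy');
    $lock_id = $begin->{'edit-lock-id'};

    # Modify the policy under the lock, then commit; a successful
    # commit writes the changes to the database and releases the lock.
    $s->provisioning_policy_modify('edit-lock-id', $lock_id);
    $s->provisioning_policy_edit_commit('edit-lock-id', $lock_id);
};
if ($@) {
    print "Edit failed: $@\n";
    # Discard any uncommitted changes and release the lock.
    eval { $s->provisioning_policy_edit_rollback('edit-lock-id', $lock_id) }
        if defined $lock_id;
}
```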

provisioning_policy_edit_commit

Commit changes made to a provisioning policy during an edit session into the database.

If all the changes to the policy are performed successfully, the entire edit is committed and the edit lock on the policy is released.

If any of the changes to the policy are not performed successfully, then the edit is rolled back (none of the changes are committed) and the edit lock on the policy is released.

Use the dry-run option to test the commit. Using this option, the changes to the policy are not committed to the database.


Error conditions:

Inputs

Outputs

provisioning_policy_edit_rollback

Roll back changes made to a provisioning policy. The edit lock on the policy will be released after the rollback.
Error conditions:

Inputs

Outputs

provisioning_policy_list_iter_end

Terminate a list iteration that had been started by a call to provisioning-policy-list-iter-start. This informs the server that it may now release any resources associated with the temporary store for the list iteration.
Error conditions:

Inputs

Outputs

provisioning_policy_list_iter_next

Retrieve the next series of policies present in a list iteration created by a call to provisioning-policy-list-iter-start. The server maintains an internal cursor pointing to the last record returned. Subsequent calls to provisioning-policy-list-iter-next return the next 'maximum' records after the cursor, or all the remaining records, whichever is fewer.
Error conditions:

Inputs

Outputs

provisioning_policy_list_iter_start

Begin a list iteration over all content in all provisioning policies in the system. Optionally, you may iterate over the content of just a single policy.

After calling provisioning-policy-list-iter-start, you continue the iteration by calling provisioning-policy-list-iter-next zero or more times, followed by a call to provisioning-policy-list-iter-end to terminate the iteration.


Error conditions:

Inputs

Outputs
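
The start/next/end flow described above can be sketched as follows. The element names ('tag', 'records', 'maximum') are assumptions for illustration; the actual names are listed in each API's Inputs and Outputs sections.

```perl
use strict;
use warnings;
use NaServer;   # from <installation_folder>/lib/perl/NetApp

my $s = NaServer->new('dfm.example.com', 1, 0);   # hypothetical DFM host
$s->set_admin_user('admin', 'password');
$s->set_server_type('DFM');

eval {
    # Start the iteration; the server returns a tag identifying the
    # temporary store holding the matching policies.
    my $start = $s->provisioning_policy_list_iter_start();
    my $tag   = $start->{'tag'};

    # Fetch records in batches until none remain.
    while (1) {
        my $next = $s->provisioning_policy_list_iter_next(
            'tag', $tag, 'maximum', 10);
        last unless $next->{'records'};
        # ... process the returned policies here ...
    }

    # Tell the server the temporary store is no longer needed.
    $s->provisioning_policy_list_iter_end('tag', $tag);
};
print "Error: $@\n" if $@;
```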

provisioning_policy_modify

This ZAPI modifies the provisioning policy settings of an existing policy in the database with the new values specified in the input. Note: the type of a provisioning policy cannot be modified after creation. Before modifying the policy, an edit lock must be obtained on the policy object.
Error conditions:

Inputs

Outputs

qtree_list_info_iter_end

The qtree-list-info-iter-* set of APIs are used to retrieve the list of qtrees. qtree-list-info-iter-end is used to tell the DFM station that the temporary store used by DFM to support the qtree-list-info-iter-next API for the particular tag is no longer necessary.

Inputs

Outputs

qtree_list_info_iter_next

For more documentation, see qtree-list-info-iter-start. The qtree-list-info-iter-next API is used to iterate over the qtrees stored in the temporary store created by the qtree-list-info-iter-start API.

Inputs

Outputs

qtree_list_info_iter_start

The qtree-list-info-iter-* set of APIs are used to retrieve the list of qtrees in DFM. qtree-list-info-iter-start returns the union of qtree objects specified, intersected with is-snapvault-secondary-qtrees, is-in-dataset and rbac-operation. It loads the list of qtrees into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the qtrees in the temporary store. If qtree-list-info-iter-start is invoked twice, then two distinct temporary stores are created.

Inputs

Outputs

qtree_modify

Modify a qtree's information. If modification of one property fails, nothing will be changed.
Error Conditions:

Inputs

Outputs

qtree_rename

Rename a qtree on a storage system and in the DataFabric Manager database. The new qtree will still be in the same volume as the original qtree. The first step renames the given qtree on the storage system; if that fails, processing stops and the API emits EINTERNALERROR to the caller along with an appropriate error message. The second step renames the given qtree in the DFM database; if that fails, processing stops and the same EINTERNALERROR is emitted to the caller along with an appropriate error message. There is no retrying or undoing of either step should it fail; the API relies on the DFM monitor to undo the rename automatically. However, the undo does not happen right away because it depends on the DFM monitor's regular update schedule: the monitor periodically ensures that storage system resources are matched in its database, updating the database to be consistent with the storage system. Prior to invoking this API, the login credentials of the storage system where the qtree resides must be specified in DFM's database using the normal DFM procedure.

Inputs

Outputs

qtree_start_monitoring

Start monitoring a previously un-monitored primary qtree from the DataFabric Manager. Error EQTREEMONITORONFAIL means that an attempt to start monitoring the specified qtree failed.

Inputs

Outputs

qtree_stop_monitoring

Stop monitoring a primary qtree from the DataFabric Manager. Monitoring cannot be stopped for a qtree that is being managed by an application (the errno returned will be EQTREEMANAGEDBYAPP).

Inputs

Outputs

rbac_access_check

Checks whether the given admin or usergroup has access to the specified resource. For example, rbac-access-check will return "allow" or "deny" on the following query: is admin joe allowed to configure storage system host1.abc.xyz.com from DFM? One could pass the following as input to answer this question: admin=joe, operation=DFM.Event.Read, resource=host1.abc.xyz.com. In order to prevent an admin from querying everyone's privileges on the system, the system will only allow admins to check their own access, by cross-referencing with however they authenticated to the API server. If the admin has Full Control, or has the privilege to query other admins' access, then they will be allowed to make the query. Per software security best practice, this API limits error reporting when access is denied on a particular resource.

Inputs

Outputs
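
The example query from the description above (is admin joe allowed to perform DFM.Event.Read on host1.abc.xyz.com?) might be expressed in Perl as follows. The output key 'access' is an assumption; see the Outputs section for the authoritative name.

```perl
use strict;
use warnings;
use NaServer;   # from <installation_folder>/lib/perl/NetApp

my $s = NaServer->new('dfm.example.com', 1, 0);   # hypothetical DFM host
$s->set_admin_user('joe', 'password');   # admins may only check their own access
$s->set_server_type('DFM');

eval {
    my $output = $s->rbac_access_check(
        'admin',     'joe',
        'operation', 'DFM.Event.Read',
        'resource',  'host1.abc.xyz.com');
    print "Access: $output->{'access'}\n";   # "allow" or "deny"
};
print "Error: $@\n" if $@;
```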

rbac_admin_list_info_iter_end

Ends listing of admins.

Inputs

Outputs

rbac_admin_list_info_iter_next

Returns items from list generated by rbac-admin-list-info-iter-start.

Inputs

Outputs

rbac_admin_list_info_iter_start

Lists all the administrators and their attributes.

Inputs

Outputs

rbac_admin_role_add

Assign an existing role to an existing administrator or usergroup. The administrator effectively gains the capabilities from the role and its inherited roles. As for a usergroup, all members of the usergroup will gain the capabilities assigned to that role and its inherited roles.

Inputs

Outputs

rbac_admin_role_info_list

List the administrators or usergroups assigned to an existing role, directly or indirectly. In essence, this API lists the admins or usergroups that have the capabilities of the given role, drilling into all the possible ways an admin or usergroup can effectively have it. Admins or usergroups are assigned roles indirectly via role inheritance or usergroup assignment (note: a usergroup can be a member of another usergroup). So an admin or usergroup will be listed if any of the following conditions apply:

1. The given role is directly assigned to the admin or usergroup.
2. The admin or usergroup has a directly assigned role that inherits the given role.
3. The admin or usergroup gains the given role via usergroup membership.

Inputs

Outputs

rbac_admin_role_remove

Remove one or more roles from an administrator or usergroup. The admin will no longer have the capabilities gained from the role(s) and its inherited roles. As for a usergroup, the members of the usergroup will no longer have the capabilities gained from the role(s) and its inherited roles. If delete-all is not specified or is FALSE, then role-name-or-id must be specified. If delete-all is TRUE, then all roles assigned to admin will be removed.

Inputs

Outputs

rbac_operation_add

Add a new operation to the RBAC system. An operation is the ability to perform an action on a particular resource type. An operation is tied to a specific application so that different applications are able to manage access control that is specific to them.

Inputs

Outputs

rbac_operation_delete

Delete an existing operation.

Inputs

Outputs

rbac_operation_info_list

Get information about an existing operation or all operations in the system.

Inputs

Outputs

rbac_role_add

Add a new role to the RBAC system.

Inputs

Outputs

rbac_role_admin_info_list

List the roles assigned to an existing administrator or usergroup. A role is considered assigned to the administrator if that role is gained directly, or indirectly via role inheritance or usergroup membership.

Inputs

Outputs

rbac_role_capability_add

Add an existing resource/operation pair to a role. In essence, this adds a capability to a role.

Inputs

Outputs

rbac_role_capability_remove

Remove one or more capabilities (resource/operation pair) from an existing role. If delete-all is TRUE, it removes all capabilities from given role. Otherwise, it removes only the given capability (resource/operation pair). If delete-all is not specified or is FALSE, then operation and resource must be specified.

Inputs

Outputs

rbac_role_delete

Delete an existing role from the RBAC system.

Inputs

Outputs

rbac_role_disinherit

Disinherit one or more roles. The effect is that the affected role will no longer have the capabilities gained from the disinherited role(s). If disinherit-all is not specified or is FALSE, then disinherited-role-name-or-id must be specified.

Inputs

Outputs

rbac_role_info_list

Get the operations, capabilities and inherited roles that one or more roles have.

Inputs

Outputs

rbac_role_inherit

Inherit from a role. The effect is that the affected role will gain the capabilities from the inherited role.

Inputs

Outputs

rbac_role_modify

Modify an existing role name and/or its description.

Inputs

Outputs

report_category_list

API to list report categories and their details.

Inputs

Outputs

report_designer_list

API to list reports created using report designer.

Inputs

Outputs

report_designer_schedule_add

API to schedule a report. Only reports listed by the report-designer-list API can be scheduled using this API. It creates a new schedule based on schedule-content-info, and a new report schedule object linking the schedule with the report being scheduled.

Inputs

Outputs

report_designer_schedule_delete

API to delete a report schedule object and its associated schedule. Only report schedules for reports returned by report-designer-list API can be deleted using this API.

Inputs

Outputs

report_designer_schedule_modify

API to modify a report schedule and/or its schedule details. Only report schedules for reports returned by the report-designer-list API can be modified using this API.

Inputs

Outputs

report_designer_share

Shares the output of the specified report, in the specified format, with the recipients in email-address-list.

Inputs

Outputs

report_graph_list_info_iter_end

Terminate a view list iteration and clean up any saved info.

Inputs

Outputs

report_graph_list_info_iter_next

Returns items from a previous call to report-graph-list-info-iter-start.

Inputs

Outputs

report_graph_list_info_iter_start

Initiates a query for a list of graphs for a particular report.

Inputs

Outputs

report_list_info_iter_end

Terminate a view list iteration and clean up any saved info.

Inputs

Outputs

report_list_info_iter_next

Returns items from a previous call to report-list-info-iter-start.

Inputs

Outputs

report_list_info_iter_start

Initiates a query for a list of reports that can be scheduled.

Inputs

Outputs

report_output_delete

Deletes a report output.

Inputs

Outputs

report_output_list_info_iter_end

Terminate a view list iteration and clean up any saved info.

Inputs

Outputs

report_output_list_info_iter_next

Returns items from a previous call to report-output-list-info-iter-start.

Inputs

Outputs

report_output_list_info_iter_start

Initiates a query for a list of report outputs.

Inputs

Outputs

report_output_read

Reads report output data from a file. API will fail if length exceeds 1 MB.

Inputs

Outputs

report_schedule_add

Add a new report schedule.

Inputs

Outputs

report_schedule_delete

Deletes a report schedule.

Inputs

Outputs

report_schedule_disable

Disable a report schedule.

Inputs

Outputs

report_schedule_enable

Enable a report schedule.

Inputs

Outputs

report_schedule_list_info_iter_end

Terminate a view list iteration and clean up any saved info.

Inputs

Outputs

report_schedule_list_info_iter_next

Returns items from a previous call to report-schedule-list-info-iter-start.

Inputs

Outputs

report_schedule_list_info_iter_start

Initiates a query for a list of report schedules.

Inputs

Outputs

report_schedule_modify

Modify a report schedule.

Inputs

Outputs

report_schedule_run

Runs a report schedule immediately.

Inputs

Outputs

resourcepool_add_member

Add member -- storage system or aggregate -- to an existing resource pool.
Error Conditions:

Inputs

Outputs

resourcepool_create

Create a new, empty resource pool.
Error Conditions:

Inputs

Outputs

resourcepool_destroy

Destroy a resource pool. If the resource pool is in use by a storage set or is not empty, it may only be destroyed by specifying the force flag. If the resource pool is in use by a storage service, it can be destroyed only after removing it from the storage service.
Error Conditions:

Inputs

Outputs

resourcepool_get_defaults

Get the default values of attributes defined by this ZAPI set.

Inputs

Outputs

resourcepool_list_info_iter_end

Ends iteration of resource pools.

Inputs

Outputs

resourcepool_list_info_iter_next

Get next records in the iteration started by resourcepool-list-info-iter-start.

Inputs

Outputs

resourcepool_list_info_iter_start

Starts iteration to list resource pools.
Error Conditions:

Inputs

Outputs

resourcepool_member_list_info_iter_end

Ends iteration of resource pool members.

Inputs

Outputs

resourcepool_member_list_info_iter_next

Get next records in the iteration started by resourcepool-member-list-info-iter-start.

Inputs

Outputs

resourcepool_member_list_info_iter_start

Starts iteration to list members of specified resource pool.
Error Conditions:

Inputs

Outputs

resourcepool_modify

Modify a resource pool's information. If modification of one property fails, nothing will be changed.
Error Conditions:

Inputs

Outputs

resourcepool_modify_member

Modify the properties of members of a resource pool.
Error Conditions:

Inputs

Outputs

resourcepool_remove_member

Remove member -- storage system or aggregate -- from a resource pool.
Error Conditions:

Inputs

Outputs

resourcepool_update_free_space_status

Check the free space in the resource pool against the nearly-full and full thresholds and generate appropriate events for the resource pool.

Inputs

Outputs

snapshot_get_reclaimable_info

Returns the amount of space that would be freed when a set of Snapshot copies are deleted from a specified volume. This API gets information dynamically from the filer and is a blocking call.

Inputs

Outputs

snapshot_list_info_iter_end

Ends iteration of Snapshot copies.

Inputs

Outputs

snapshot_list_info_iter_next

Retrieve the next records in the iteration started by snapshot-list-info-iter-start.

Inputs

Outputs

snapshot_list_info_iter_start

Returns information on a list of Snapshot copies.

Inputs

Outputs

srm_file_type_add

Add the SRM file type in DFM.

Inputs

Outputs

srm_file_type_delete

Delete the SRM file type in DFM. If any entry in the input is invalid, none of the file types will be deleted and an error will be returned.

Inputs

Outputs

srm_file_type_list_info

Returns all the SRM file types in DFM.

Inputs

Outputs

storage_service_create

Create a new storage service.

Inputs

Outputs

storage_service_dataset_list_iter_end

Ends iteration over the list of datasets associated with a storage service.

Inputs

Outputs

storage_service_dataset_list_iter_next

Get the next few records in the iteration started by storage-service-dataset-list-iter-start.

Inputs

Outputs

storage_service_dataset_list_iter_start

Lists the association of datasets with storage services. If no service/dataset name or ID is provided, all datasets with no storage service are listed.

Inputs

Outputs

storage_service_dataset_modify

Attach a storage service to a dataset, or detach or clear it from the dataset.

Inputs

Outputs

storage_service_dataset_provision

Creates a dataset with the specified storage service.

Inputs

Outputs

storage_service_destroy

Destroy a storage service. To destroy a storage service containing datasets, the force option must be supplied.

Inputs

Outputs

storage_service_list_info_iter_end

Ends iteration to list storage services.

Inputs

Outputs

storage_service_list_info_iter_next

Get next few records in the iteration started by storage-service-list-info-iter-start.

Inputs

Outputs

storage_service_list_info_iter_start

Starts iteration to list storage services.

Inputs

Outputs

storage_service_modify

Modifies attributes of a storage service.

Inputs

Outputs

timezone_get_defaults

Retrieves the default time zone settings, including the time zone in which the server runs and the time zone to use for objects that don't specify their own time zone.

Inputs

Outputs

timezone_list_info_iter_end

Ends iteration to list time zones.

Inputs

Outputs

timezone_list_info_iter_next

Get the next few records in the iteration started by timezone-list-info-iter-start.

Inputs

Outputs

timezone_list_info_iter_start

Starts iteration to list time zones from the internal database of time zone information.

Inputs

Outputs

timezone_validate

Determines whether a time zone specification is valid. The specification may be the name of a time zone returned by timezone-list-info-iter-next, or a POSIX-style time zone specification.

Inputs

Outputs
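
For example, a POSIX-style specification can be checked before it is assigned elsewhere (the input element name 'timezone' and the output element 'is-valid' are assumptions):

         eval {
                 my $out = $s->timezone_validate('timezone', 'EST5EDT');
                 print "Time zone is valid\n" if $out->{'is-valid'} eq 'true';  # element name is illustrative
         };
         print "Error: $@\n" if $@;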

user_favorite_report_delete

Deletes favorite reports of a user or all users.

Inputs

Outputs

user_favorite_report_list_info_iter_end

The user-favorite-report-list-info-iter-* set of APIs are used to retrieve the list of user favorite counts. user-favorite-report-list-info-iter-end is used to tell the DFM station that the temporary store used by DFM to support the user-favorite-report-list-info-iter-next API for the particular tag is no longer necessary.

Inputs

Outputs

user_favorite_report_list_info_iter_next

The user-favorite-report-list-info-iter-next API is used to iterate over the user favorite report counts stored in the temporary store created by the user-favorite-report-list-info-iter-start API. For more documentation, please see user-favorite-report-list-info-iter-start.

Inputs

Outputs

user_favorite_report_list_info_iter_start

The user-favorite-report-list-info-iter-* set of APIs are used to retrieve the number of favorite reports of a specified user or all users. It loads the list of user favorite report counts into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the list in the temporary store. If user-favorite-report-list-info-iter-start is invoked twice, two distinct temporary stores are created.

Inputs

Outputs
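
Because each invocation of the -start API creates a distinct temporary store, the returned tag should always be passed to the -end API once iteration finishes, even if an error occurs mid-loop. A sketch of the lifecycle (element names 'tag', 'records', and 'maximum' are assumptions):

         my $tag;
         eval {
                 my $start = $s->user_favorite_report_list_info_iter_start();
                 $tag = $start->{tag};
                 my $next = $s->user_favorite_report_list_info_iter_next(
                         'tag', $tag, 'maximum', $start->{records});
                 # process $next here
         };
         print "Error: $@\n" if $@;
         # always release the temporary store, even after an error
         $s->user_favorite_report_list_info_iter_end('tag', $tag) if defined $tag;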

user_recent_report_delete

Deletes recently viewed reports of a user or all users.

Inputs

Outputs

user_recent_report_list_info_iter_end

The user-recent-report-list-info-iter-* set of APIs are used to retrieve the list of a user's recently viewed report counts. user-recent-report-list-info-iter-end is used to tell the DFM station that the temporary store used by DFM to support the user-recent-report-list-info-iter-next API for the particular tag is no longer necessary.

Inputs

Outputs

user_recent_report_list_info_iter_next

The user-recent-report-list-info-iter-next API is used to iterate over the user's recently viewed report counts stored in the temporary store created by the user-recent-report-list-info-iter-start API. For more documentation, please see user-recent-report-list-info-iter-start.

Inputs

Outputs

user_recent_report_list_info_iter_start

The user-recent-report-list-info-iter-* set of APIs are used to retrieve the number of recently viewed reports of a specified user or all users. It loads the list of a user's recently viewed report counts into a temporary store. The API returns a tag that identifies that temporary store so that subsequent APIs can be used to iterate over the list in the temporary store. If user-recent-report-list-info-iter-start is invoked twice, two distinct temporary stores are created.

Inputs

Outputs

vfiler_create

Create a new vFiler on a storage system. A vFiler can be created by either: Error conditions:

Inputs

Outputs

vfiler_destroy

Destroys a vFiler. This API stops and then destroys the vFiler on the hosting filer and marks the vFiler as deleted in DFM. Storage resources owned by the vFiler are not destroyed; they will be owned by the hosting filer after the vFiler is destroyed. Error conditions:

Inputs

Outputs

vfiler_setup

Configures and sets up a vFiler based on a specified vFiler template. Depending on the input, a CIFS setup will also be performed on the vFiler. Error conditions:

Inputs

Outputs

vfiler_template_copy

Creates a new vFiler template by copying all settings from an existing vFiler template. Error conditions:

Inputs

Outputs

vfiler_template_create

Creates a vFiler template. A vFiler template contains configuration information that is used during vFiler setup. Error conditions:

Inputs

Outputs

vfiler_template_delete

Deletes the vFiler template. Error conditions:

Inputs

Outputs

vfiler_template_list_info_iter_end

Ends iteration of vFiler templates.

Inputs

Outputs

vfiler_template_list_info_iter_next

Get next records in the iteration started by vfiler-template-list-info-iter-start.

Inputs

Outputs

vfiler_template_list_info_iter_start

Lists vFiler templates. Error conditions:

Inputs

Outputs

vfiler_template_modify

Modifies the settings in a vFiler template. Error conditions:

Inputs

Outputs

vi_datacenter_list_info_iter_end

Ends iteration to list Data Centers.

Inputs

Outputs

vi_datacenter_list_info_iter_next

Get the list of Data Center records.

Inputs

Outputs

vi_datacenter_list_info_iter_start

List Data Centers discovered in DataFabric Manager Server.

Inputs

Outputs

vi_datastore_list_info_iter_end

Ends iteration to list datastores.

Inputs

Outputs

vi_datastore_list_info_iter_next

Get the list of Datastore records.

Inputs

Outputs

vi_datastore_list_info_iter_start

List Datastores discovered in DataFabric Manager Server.

Inputs

Outputs

vi_hypervisor_list_info_iter_end

Ends iteration to list hypervisors.

Inputs

Outputs

vi_hypervisor_list_info_iter_next

Get the list of Hypervisor records.

Inputs

Outputs

vi_hypervisor_list_info_iter_start

Start iteration of Hypervisors discovered in DataFabric Manager server.

Inputs

Outputs

vi_virtual_center_list_info_iter_end

Ends iteration to list Virtual Centers.

Inputs

Outputs

vi_virtual_center_list_info_iter_next

Get the list of Virtual Center records.

Inputs

Outputs

vi_virtual_center_list_info_iter_start

List Virtual Center Servers discovered in DataFabric Manager Server.

Inputs

Outputs

vi_virtual_disk_list_info_iter_end

Ends iteration to list virtual disks.

Inputs

Outputs

vi_virtual_disk_list_info_iter_next

Get the list of Virtual Disk records.

Inputs

Outputs

vi_virtual_disk_list_info_iter_start

List Virtual Disks discovered in DataFabric Manager Server.

Inputs

Outputs

vi_virtual_machine_list_info_iter_end

Ends iteration to list Virtual Machines.

Inputs

Outputs

vi_virtual_machine_list_info_iter_next

Get the list of virtual machine records.

Inputs

Outputs

vi_virtual_machine_list_info_iter_start

Start iteration of Virtual Machines discovered in DataFabric Manager server.

Inputs

Outputs

volume_dedupe

Dedupe a volume.

Inputs

Outputs

volume_destroy

Destroy a volume.

Inputs

Outputs

volume_list_info_iter_end

Ends iteration to list volumes.

Inputs

Outputs

volume_list_info_iter_next

Get next few records in the iteration started by volume-list-info-iter-start.

Inputs

Outputs

volume_list_info_iter_start

Starts iteration to list volumes.

Inputs

Outputs

volume_modify

Modifies a volume's information. If the modification of any property fails, nothing is changed.
Error Conditions:

Inputs

Outputs

volume_resize

Resizes a volume. The size-related characteristics of a volume, such as its total size, snap reserve, or maximum size, can be modified using this API.

Inputs

Outputs
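
A resize request might then look like the following (the input element names and the size unit are assumptions; sizes in DFM APIs are typically expressed in kilobytes):

         eval {
                 my $out = $s->volume_resize(
                         'volume-name-or-id', 'filer1:/vol/data',
                         'new-size', 20971520);  # illustrative: 20 GB expressed in KB
                 print "Resize requested\n";
         };
         print "Error: $@\n" if $@;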


TYPEDEFS

aggregate-info

Information about an aggregate.

Fields

aggregate-size

Sizes of various parameters of an aggregate.

Fields

aggregate-space-info

Information about the space characteristics of an aggregate.

Fields

backup-deletion-request-info

Details of backup to be deleted.

Fields

backup-id

Identifier of the backup instance to be deleted. Range: [1..2^31-1]

Fields

cleanup-stale-storage

Indicates when the volumes in the source aggregate should be destroyed after successful migration. Possible values are:

Fields

dedupe-request-info

Deduplication request details

Fields

object-management-interface

Specifies the management interface of Data ONTAP that provides complete management for the object (for example, ONTAP CLI, SNMP, or ONTAPI). Possible values are:

Fields

resize-volume-request-info

Details of the resize request. At least one of new-size, maximum-capacity, or snap-reserve must be specified.

Fields

snapshot-deletion-request-info

Details of snapshot deletion request

Fields

space-management-op-type

Type of space management operation. Possible values:

Fields

space-management-operation-info

Details of a space management operation. Only one of the request-info elements in space-management-operation-info should be provided in the input.

Fields

space-management-result-info

Results of the space management session.

Fields

volume-migration-request-info

Details of volume migration request

Fields

admin-name

The name of a DFM administrator to receive alarm notifications.

Fields

alarm-defaults

The default values of the attributes defined by this ZAPI.

Fields

alarm-info

Information about a single alarm. This structure is used in three places: creating new alarms, modifying alarms, and listing alarms.

Fields

email-address

A single email address. It cannot contain spaces, semi-colons or unprintable characters.

Fields

trap-destination

Destination parameters for a SNMP trap.

Fields

request

One API request.

Fields

response

One API response.

Fields

application-backup-operation-info

Specifies schedules and settings for the backup operation.

Fields

application-backup-schedule-info

One or more application-specific operational values, schedules, and the associated backup retention type created by the schedule.

Fields

application-policy-info

Contains information about a single application policy.

Fields

application-policy-settings-info

Policy level settings

Fields

application-policy-type

Type of an application policy. Possible values are: 'hyperv', 'vmware'.

Fields

day-of-month

Day of month. If day-of-month is 29, 30, or 31, it will be interpreted as the last day of the month for months with fewer than that many days. Range: [1..31]

Fields

day-of-week

Day of week. Range: [0..6] (0 = "Sunday")

Fields

hyperv-backup-schedule-settings-info

Schedule level settings for the Hyper-V backup operation.

Fields

simple-schedule-info

Simple schedule.

Fields

time-of-day-info

Time of day.

Fields

times-of-day-info

Times of day on which the scheduled event occurs.

Fields

vmware-backup-schedule-settings-info

Schedule level settings for the VMware backup operation.

Fields

cifs-domain-info

Details of the cifs domain.

Fields

client-option-info

Single name/value pair.

Fields

comment-field-id

Identifier of the comment field. Range: [1..2^31-1]

Fields

comment-field-info

Information about the comment field.

Fields

comment-field-name

Name of the comment field. Name can contain a maximum of 255 characters.

Fields

comment-field-name-or-id

Name or identifier of the comment field. It must conform to one of the following formats:

Fields

comment-field-object-type

Object type of the comment field. Name can contain a maximum of 64 characters. Possible input values are 'Dataset', 'DP Policy', 'Prov Policy'.

Fields

comment-field-value

A single comment field value.

Fields

comment-value

Value of the specified comment field to be set on the object. This can be a maximum of 1024 characters.

Fields

application-info

Information about an application.

Fields

application-resource-namespace

Name of the application resource namespace. All possible values for this type cannot be listed because, in theory, a new plugin could be written and installed under a Host Service at run time, and such a plugin will be supported from the server side in the future. Clients should depend on specific values only when writing plugin-specific code.

Fields

cifs-permission

Permission granted to a CIFS user on a share. Possible values are 'no_access', 'read', 'change', and 'full_control'.

Fields

cifs-share-info

Specifies the properties of a CIFS share on a Data ONTAP system.

Fields

cifs-share-name

CIFS share name of the storage container (volumes, qtrees). Example \\server\sharename.

Fields

cifs-share-permission

CIFS ACL information configured for a CIFS share.

Fields

conformance-alert

A description of one conformance check alert that should be noted by a user.

Fields

conformance-alert-type

The type of alert that is being issued is indicated by a field of this type. The alert-type may apply to the conformance-alerts.

Possible values are: 'diskspace' and 'rebaseline'.

Please see conformance-alerts where the above values may be used for additional information.

Fields

conformance-run-result

A description of one action and the predicted effects of taking that action.

Fields

dataset-access-details

The data access details of a dataset. Currently, this is used to configure a dataset so that it is capable of transparent migration. An IP address and netmask have to be configured on the dataset in order to make it capable of transparent migration. Provisioning Manager will create a vFiler unit with this IP address, and all the storage provisioned for this dataset will be accessed from this vFiler unit or this IP address. As input, it is valid only when: - There are no members in the dataset. - There is no vFiler unit attached to the dataset.

Fields

dataset-cifs-export-setting

If storage inside this dataset node is exported over CIFS, this specifies some of the common properties that will be applied to all such provisioned storage.

Fields

dataset-cifs-share-permission

A single permission granted to a user on CIFS shares within a dataset node.

Fields

dataset-dynamic-reference-info

Information about one dynamic reference in a dataset.

Fields

dataset-dynamic-reference-parameter

Information about one dynamic reference in a dataset.

Fields

dataset-export-info

Specifies the NAS or SAN export settings for a dataset node.

Fields

dataset-export-protocol

Specifies the export protocols that members in this dataset node will be exported over. Possible values are 'nfs', 'cifs', 'mixed', 'fcp', 'iscsi' and 'none'. Specifying 'mixed' implies that the storage in the dataset should be exported over both NFS and CIFS.

Fields

dataset-info

Information about a dataset.

Fields

dataset-io-metric-info

A single I/O usage measurement record for a dataset node.

Fields

dataset-lun-mapping-info

Specifies the hosts and/or initiators to which LUNs in this dataset node will be mapped, and the igroup OS type.

Fields

dataset-map-entry

Map a storage set to a policy node. The root node is not included in the map.

Fields

dataset-member-info

Information about one member of a dataset.

Fields

dataset-member-parameter

Information about one member of a dataset.

Fields

dataset-migration-info

Dataset's migration information.

Fields

dataset-nfs-export-setting

If storage inside this dataset node is exported over NFS, this specifies some of the common properties that will be applied to all such provisioned storage.

Fields

dataset-node-info

Information about a dataset node.

Fields

dataset-resourcepool-info

Information about resource pools associated with dataset.

Fields

dataset-space-metric-info

A single space usage measurement record for a dataset node.

Fields

dataset-status

The complex status of the dataset at the time it was last re-calculated.

Fields

dedupe-member-request-info

Details of the dedupe member request.

Fields

delete-snapshots-request-info

Details on which Snapshot copies are to be deleted.

Fields

dfm-metadata-field

Named field in the metadata.

Fields

dr-state

Disaster recovery states of a dataset. Possible values are:

Fields

dr-status

The DR status has one of the following values:

The DR status is computed based on all of the following:

If the dataset list iteration's output element is-dr-capable is true, then this element appears in the output. Otherwise, if is-dr-capable is false, this element does not appear in the output.

Fields

dr-status-problem

An explanation of a single problem that causes the dataset not to have a dr-status of "normal".

Fields

dry-run-result

A description of one action and the predicted effects of taking that action.

Fields

excluded-object-info

Information about a datastore in a dataset.

Fields

initiator-id

For FCP initiators, this is a string composed of 16 hexadecimal digits. For iSCSI initiators, this is an initiator name in the dotted-domain notation.

Fields

io-measurement

Dataset node's I/O usage measurements.

Fields

job-info

Job information of provisioning request.

Fields

lun-igroup-info

If the LUN has been mapped to any igroups or hosts, this structure contains the relevant information.

Fields

lun-mapping-initiator

Specifies an initiator to map LUNs in the dataset node to, and, optionally, the host to which it belongs.

Fields

metric-dataset-node-info

Information about the dataset node

Fields

migration-status

Migration status for an ongoing vFiler unit migration. Possible values: "in_progress", "migrating", "migrated", "migrated_with_errors", "migrate_failed", "rolled_back", "rolled_back_with_errors".

Fields

missing-member-info

Information about one missing member of a dataset.

Fields

nfs-export-host

Specifies a single host or all-hosts. A host can be a host name (DNS name), an IP address, a subnet or a domain name. If an 'all-hosts' entry is present, all other host name entries must be negated.

Fields

nfs-export-rules

Specifies the NFS security configuration options provided by Data ONTAP operating system.

Fields

nfs-security-flavor

A security flavor to be applied on NFS exports. Possible values are 'none', 'sys' (UNIX style), 'krb5' (Kerberos v5), 'krb5i' (Kerberos v5 integrity), and 'krb5p' (Kerberos v5 privacy). The default value is 'sys'.

Fields

primary-volume-name-format

Name format string for the primary volumes created by Provisioning Manager. Allowed values for this element are "" (empty string) or free-form text with substitution tokens. The allowed tokens are:
  1. %L: Dataset label (dl), previously called the dataset prefix. This is the volume-qtree-name-prefix element specified in dataset-create or dataset-modify. If the dataset label is not set, the dataset name is substituted instead.
  2. %D: Dataset name.
For configurable naming using %L and %D, users can use all or some of the tokens, order them any way they want, and combine them with user-defined text, e.g. %L_%D. The order of the tokens can change, for example, %D_%L.

Fields
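
The substitution behavior described above can be illustrated with a small client-side sketch (this only mimics what the server does; the actual expansion is performed by Provisioning Manager):

         # Illustrative expansion of a primary volume name format string
         sub expand_primary_format {
                 my ($format, $label, $dataset) = @_;
                 $label = $dataset unless defined $label && length $label;  # %L falls back to the dataset name
                 $format =~ s/%L/$label/g;
                 $format =~ s/%D/$dataset/g;
                 return $format;
         }
         # expand_primary_format('%L_%D', 'dl', 'sales') yields 'dl_sales'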

protection-status-job-detail

Details about a job that led to a protection-status-problem

Fields

protection-status-problem

Information about a problem that caused the dataset not to have a protection-status of "protected".

Fields

provision-member-comment

One comment field to fill in as part of the request

Fields

provision-member-request-info

Details specific to the 'provision_member' request.

Fields

replace-result

A description of one replace action.

Fields

requires-non-disruptive-restore

When this attribute is set, the dataset is configured to allow LUNs to be restored from backups in such a way that the host need not detach from the LUN.

Configuring the dataset for non-disruptive restore does not guarantee that all backup instances may be restored non-disruptively. It only applies to backup instances reached through a Backup connection. The caller must check the supports-non-disruptive-restore output element returned by dp-backup-list-info-next to tell whether a given backup instance can be restored non-disruptively.

Specifically, when this attribute is set:

If any of these constraints is not met when adding a member to the dataset, the dataset will not be in conformance and the addition will generate conformance errors. Addition of members with ONTAP version less than 7.3 to the primary node of the dataset will fail with errors.

requires-nondisruptive-restore may only be specified if:

Thus, if requires-non-disruptive-restore is specified as true, is-application-data must also be present and true. Callers may check the non-disruptive-restore-compatible output element of the dp-policy-list-info-iter-next API to tell which policies may be applied to datasets requiring non-disruptive restores.

Fields

resize-member-request-info

Details of the resize request.

Fields

result-detail

Details on a specific action that adds information intended to explain more about a higher level result. For example, the detail may be used to explain a dry-run result by explaining why resources were not selected.

Fields

secondary-qtree-name-format

Name format string for the secondary qtrees created by Protection Manager. This string is free-form text with substitution tokens. The allowed substitution tokens are:
  1. %Q: Primary qtree name.
  2. %L: Dataset label (dl), previously called the dataset prefix. This is the volume-qtree-name-prefix element specified in dataset-create or dataset-modify. If the dataset label is not set, the dataset name is substituted instead.
  3. %S: Primary storage system name. The name of the storage system of the source volume in the primary root node to which the current secondary volume has a relationship.
  4. %V: Primary volume name. The name of the source volume in the primary root node to which the current secondary volume has a relationship.
Users can use any or all of the substitution tokens in any order.

Fields

secondary-volume-name-format

Format string used to generate names for secondary volumes created by Protection Manager. Allowed values for this element are "" (empty string) or free-form text with substitution tokens. The allowed tokens are:
  1. %L: Dataset label (dl), previously called the dataset prefix. This is the volume-qtree-name-prefix element specified in dataset-create or dataset-modify. If the dataset label is not set, the dataset name is substituted instead.
  2. %S: Primary storage system name. The name of the storage system of the source volume in the primary root node to which the current secondary volume has a relationship.
  3. %V: Primary volume name. The name of the source volume in the primary root node to which the current secondary volume has a relationship.
  4. %C: Type of connection. It has only two values, "mirror" or "backup".
For configurable naming using %L, %C, %S and %V, users can use all or some of the tokens, order them any way they want, and combine them with user-defined text, e.g. %L_%C_%S_%V. The order of the tokens can change, for example, %C_%S_%L.

Fields

snapshot-name-format

Format string for creating snapshot names. This string is free-form text with substitution tokens. Allowed substitution tokens are:
  1. %T: Scheduled creation time of the snapshot, in the format yyyy-mm-dd_hh.mm.ss. The time is in the dataset time zone; if no time zone is assigned to the dataset, the default system time zone is used. The offset from UTC is appended in standard format (+hhmm or -hhmm). For example, "-0430" means 4 hours 30 minutes behind UTC (west of Greenwich). If the time zone is "UTC" or "GMT", no offset is included in the name.
  2. %R: Retention type of the snapshot. Values are "hourly", "daily", "weekly", "monthly" or "unlimited".
  3. %L: Dataset label (dl), previously called the dataset prefix. This is the volume-qtree-name-prefix element specified in dataset-create or dataset-modify. If the dataset label is not set, the dataset name is substituted instead.
  4. %H: Name of the storage system that owns the volume that will contain the snapshot.
  5. %N: Name of the volume that will contain the snapshot.
  6. %A: Application-specific data. When included, this allows the utility creating the snapshot name for the application to embed any application-specific data it wants in the snapshot name. The user has no control over what goes in this field.
The format string can include any or all substitution tokens in any order. %T is mandatory.

Fields
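
A similar client-side sketch can illustrate the snapshot name tokens (again, the server performs the real expansion; the token values supplied here are illustrative, and %T is normally filled in with the scheduled creation time):

         use POSIX qw(strftime);

         # Illustrative expansion of a snapshot name format string; %T is mandatory
         sub expand_snapshot_format {
                 my ($format, %tok) = @_;
                 $tok{T} ||= strftime('%Y-%m-%d_%H.%M.%S', localtime);
                 $format =~ s/%([TRLHNA])/defined $tok{$1} ? $tok{$1} : ''/ge;
                 return $format;
         }
         # e.g. expand_snapshot_format('%T_%R_%N', R => 'daily', N => 'vol1')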

space-condition

Represents a space condition of a dataset member.

Fields

space-info

Specifies the total, available and used space in a dataset member, the space status and various space related conditions of the member.

Fields

space-measurement

Dataset node's space usage measurements.

Fields

space-status

Worst space status of a dataset or a member. Possible values are: 'unknown', 'ok', 'warning', 'error'

Fields

undedupe-member-request-info

Details of the undedupe member request.

Fields

api-stat

Statistics about a single server API

Fields

backup-type

Type of DFM database backup. Possible values are 'archive' and 'snapshot'. The 'archive' type backups are used when the data resides on a conventional disk. The 'snapshot' type backups are used for data residing on a LUN.

Fields

child-count

Count of children of one type for the specified object. An element of this type is returned if count is 1 or more.

Fields

dfm-diag-counter-group

Descriptive diagnostic information about a counter group.

Fields

dfm-monitoring-timestamp

The timestamp of last monitoring done for a monitor on various objects of DFM.

Fields

directory-info

Lists the information about DFM directories. These directories include Performance Advisor data directory, Database Backup directory, Data Export directory and Reports Archival directory.

Fields

email-address-list

A list of email addresses. If more than one email address is specified, the addresses must be separated by a ','. Spaces (and other white space) are not allowed, either between addresses or within them. Unprintable characters are also invalid. The list may contain up to 255 characters.

Fields
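
The constraints above can be checked on the client side before sending the list (a sketch; the server performs its own validation):

         # Reject lists containing white space, unprintable characters,
         # or more than 255 characters; return the address count otherwise
         sub email_list_ok {
                 my ($list) = @_;
                 return 0 if length($list) > 255;
                 return 0 if $list =~ /\s/ || $list =~ /[^[:print:]]/;
                 return scalar grep { length } split /,/, $list;
         }
         # email_list_ok('a@example.com,b@example.com') is true
         # email_list_ok('a@example.com, b@example.com') is false (space after comma)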

feature

Information about a licensed feature.

Fields

file-system-block-type

Block type of the file system. The volumes on both the source and destination sides of a SnapMirror relationship must be of the same block type. Volumes contained in a larger parent aggregate may have a block-type of 64_bit. For upgraded systems, it is possible that this value is unknown until the system can determine the block type. Possible values are:

Fields

host-service-unique-id

Identifier generated by the host service that is unique within the host service.

Fields

ip-address

IP address in string format. The length of this string cannot be more than 40 characters.

Fields

key-value-pair

The key/value for a generic object attribute.

Fields

monitor-name

Specifies name of a monitor to be scheduled to run. Possible values are as follows:

Fields

network-address

IP address of the network or host

Fields

network-snmp-credential-info

SNMP credential information of a network or host.

Fields

obj-full-name

Full name of a DFM object. This typedef is an alias for the builtin ZAPI type string. An object full name conforms to all the rules of an obj-name, except that the full name may be up to 255 characters long.

DFM creates full names by concatenating an object name with any parent object names, so as to create a unique name for an object. The format of full names is as follows:

For any DFM object not listed above, the obj-name and obj-full-name are identical.

Fields

obj-id

Identification number (ID) for a DFM object. This typedef is an alias for the builtin ZAPI type integer. Object IDs are unsigned integers in the range [1..2^31 - 1]. In some contexts, an object ID is also allowed to be 0, which is interpreted as a null value, e.g., a reference to no object at all.

The ID for a DFM object is always assigned by the system; the user is never allowed to assign an ID to an object. Therefore, an input element of type obj-id is always used to refer to an existing object by its ID. The ZAPI must specify the object's DFM object type (e.g. dataset, host, DP policy, etc.). Some ZAPIs allow the object to be one of several different types.

If the value of an obj-id input element does not match the ID of any existing DFM object of the specified type or types, then typically the ZAPI fails with error code EOBJECTNOTFOUND. A ZAPI may deviate from this general rule, for example, it may return a more specific error code. In either case, the ZAPI specification must document its behavior.

Fields

obj-name

Name of a DFM object. This typedef is an alias for the built in ZAPI type string. An object name must conform to the following format: The behavior of a ZAPI when it encounters an error involving an obj-name input element depends on how the ZAPI uses the input element. Here are the general rules: A ZAPI may deviate from these general rules, for example, it may return more specific error codes. In such cases, the ZAPI specification must document its behavior.

If an input name element is used to refer to an existing object, then the ZAPI specification must specify which DFM object type (e.g. data set, host, DP policy, etc.) is allowed. Some ZAPIs allow the object to be one of several different types. See the description of obj-full-name for examples of valid input formats.

Note that there is no requirement that all object names must be unique. However, the names for some specific types of objects are constrained such that no two objects of that type may have the same name. For example, this constraint applies to datasets, DP schedules, and DP policies. This means that no two datasets may have the same name, but a dataset may have the same name as a DP schedule or DP policy.

In general, object names are compared in a case-insensitive manner. This means that, for example, "MyObject" and "MYOBJECT" are considered to be the same name for purposes of: creating new objects, renaming existing objects, or looking up an object by name. On the other hand, ZAPIs that return an obj-name generally do not change the capitalization at all. For example, if an object's name has been set to "MyObject", then list iteration ZAPIs that return the object's name return it as "MyObject" rather than "MYOBJECT" or "myobject".

ZAPIs that operate on obj-name values and do not follow these general rules about case sensitivity must document the rules that they do follow.

One important exception to these general rules is that volumes, qtrees, OSSV directories, SRM paths, interfaces, FCP targets and FC switch ports all have case-sensitive names. When looking up objects of these types by name, the case must match the object name.

Fields
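
The case-sensitivity rules above can be mirrored when matching names on the client side (a sketch; volumes, qtrees, OSSV directories, SRM paths, interfaces, FCP targets and FC switch ports must be compared case-sensitively, all other object types case-insensitively):

         # Compare two DFM object names according to the general rules above
         sub names_match {
                 my ($x, $y, $case_sensitive) = @_;
                 return $case_sensitive ? $x eq $y : lc($x) eq lc($y);
         }
         # names_match('MyObject', 'MYOBJECT', 0) is true  (e.g. datasets)
         # names_match('vol1',     'VOL1',     1) is false (e.g. volumes)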

obj-name-or-id

Name or internal ID of a DFM object. This typedef is an alias for the builtin ZAPI type string. An obj-name-or-id must contain between 1 and 64 characters, and must conform to one of the following formats: Elements of type obj-name-or-id are used only as inputs to ZAPIs. The value must match either the name or internal ID of an existing DFM object. The ZAPI must specify the object's DFM object type (e.g. dataset, host, DP policy, etc.). Some ZAPIs allow the object to be one of several different types.

If the format of an obj-name-or-id input element does not conform, or the value does not match the name or ID of an existing object, then generally the ZAPI documents that it fails with error code EOBJECTNOTFOUND. A ZAPI may return more specific error codes. In such cases, the ZAPI specification must document its behavior.

If a ZAPI can accept a null value (e.g. reference to no object at all) for such an element, then the element is declared optional, and the absence of the input element represents a null value.

Fields

obj-status

A status value which can be associated with a DFM object. This typedef is an alias for the builtin ZAPI type string. The severity associated with an event has this type.

Possible values are: 'unknown', 'normal', 'information', 'unmanaged', 'warning', 'error', 'critical', 'emergency'.

In some contexts, it is important that severities are ordered (as above). For example, an alarm might be triggered if an event with a given severity "or worse" occurs. In this example, worse means "after" in the list above.
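The ordering can be made explicit by ranking the values. A Perl sketch of an "or worse" test (the helper name is illustrative):

```perl
# Severities in increasing order of badness, as listed above.
my @severities = qw(unknown normal information unmanaged warning error critical emergency);
my %rank = map { $severities[$_] => $_ } 0 .. $#severities;

# True if $status is as bad as, or worse than, $threshold.
sub at_least_as_bad {
    my ($status, $threshold) = @_;
    return $rank{$status} >= $rank{$threshold};
}

print at_least_as_bad('critical', 'warning') ? "trigger alarm\n" : "ignore\n"; # trigger alarm
```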

Fields

obj-status-info

Status of the DFM object.

Fields

parent-dataset

Information about one dataset. A parent dataset is returned if the authenticated user has DFM.Database.Read privilege on the dataset.

Fields

parent-group

Information about one resource group. A parent group is returned if the user has DFM.Database.Read privilege on the group.

Fields

parent-object

Information about one parent object. A parent object is returned if the authenticated user has DFM.Database.Read privilege on that object.

Fields

parent-resourcepool

Information about one resource pool. A parent resource pool is returned if the authenticated user has DFM.Database.Read privilege on that resource pool.

Fields

plugin

Installed plugins.

Fields

prefix-length

Prefix length of the network or host

Fields

resource-property-values

Provides information on resource properties and the possible values

Fields

snmp-authentication-protocol

Represents the protocol used for authentication in SNMP version 3. Possible values are:

Fields

snmp-id

null

Fields

snmp-version

Represents the SNMP version that will be used for network discovery. Possible values are:

Fields

daily-info

The attributes of a daily schedule. May only be used in a daily schedule.

Fields

hourly-info

The attributes of an hourly schedule. May only be used in a daily schedule.

Fields

monthly-info

The attributes of a monthly schedule. May only be used in a monthly schedule.

Fields

monthly-subschedule-info

The attributes of a monthly subschedule. May only be used in a monthly schedule.

Fields

schedule-assignee

Description of a DFM object using the schedule.

Fields

schedule-content-info

Detailed schedule contents. A schedule can be a daily, weekly, or monthly schedule. A daily schedule may include both multiple recurring hourly schedules (hourly-list element) and/or individual non-recurring daily schedules (daily-list element). A weekly schedule may include multiple references to daily schedules to be run on specific days of the week (weekly-subschedule-list element) and/or individual non-recurring weekly schedules (weekly-list element). A monthly schedule may include multiple non-recurring monthly schedules (monthly-list element). In addition, a monthly schedule may include a single reference to either a daily schedule or a monthly schedule, not both. A user may specify only one monthly-subschedule-info element in the monthly-subschedule-list.

Fields

schedule-id-info

The attributes of an ID list.

Fields

weekly-info

The attributes of a weekly schedule. May only be used in a weekly schedule.

Fields

weekly-subschedule-info

The attributes of a weekly subschedule. May only be used in a weekly schedule.

Fields

disk-info

Information about a disk.

Fields

application-backup-info

Contains all the information about the application objects in the backup.

Fields

application-object-type

Describes the application object type, for a resource. This string is the type id of the application object that comes from the plug-in.

It must contain between 1 and 255 characters.

Fields

backup-graph-edge-info

An edge in the backup graph. These are used to traverse from a resource to another resource in the backup graph, to extract more specific information.

Fields

backup-graph-info

Contains the information about the various resources, and the edges to and from these resources, for a backup. The application objects are stored in the resource graph in the form of resources and the edges connecting these resources.

Fields

backup-graph-resource-info

A resource is an application object.

Fields

backup-location-info

Location information for a backup. Only the members that are found are returned.

Fields

backup-mount-info

Indicates the mount information of the backup.

Fields

dfm-metadata

Named field in the metadata.

Fields

dp-backup-content-info

Information about contents of a backup of dataset.

Fields

dp-backup-id-info

Backup-version instance information including id and node name for this version.

Fields

dp-backup-info

A backup is a single backed-up image of a dataset. If a backup is too large to fit on a single volume, the management station uses multiple volumes in the same storageset; in such cases, a single backup may span multiple volumes in a single storageset. The management station keeps track of the actual volumes that hold the backup. A caller can identify a backup by its numeric identifier.

Fields

dp-backup-path-info

Information about a backup path.

Fields

dp-backup-retention-type

Retention type to which the backup should be archived. Possible values are: 'hourly', 'daily', 'weekly', 'monthly' and 'unlimited'.

Fields

dp-backup-transfer-info

Indicates the transfer status of the backup between the two nodes directly connected to each other. When present in the input, you must specify at least one of destination-storageset-id, destination-node-name or destination-node-id.

Fields

dp-backup-version-info

Backup-version information including list of all instances of this version.

Fields

dp-file-info

Information about a file, directory, qtree, drive, and so on, on a primary host or in a backup.

Fields

dp-job-data

Describes an unscheduled data protection job. The type of job is specified in the dp-job-type element. Only one of all possible dp-*-job-data elements can be present.

Fields

mount-state

Mount session state. Possible values are: 'mounting', 'mounted', 'unmounting'.

Fields

property-info

Describes a property of an object.

Fields

search-key

Key to search the backups by. This key is matched against:

Fields

snapshot-member-info

Information about one qtree or OSSV directory that is in a snapshot.

Fields

version-member-info

Describes one snapshot member of a backup version.

Fields

application-resource-info

Information about an application resource's namespace, type and count.

Fields

dp-dataset-lag-info

Information about a single dataset.

Fields

dp-relationship-lag-info

Information about a single SnapVault or SnapMirror relationship.

Fields

dr-state-status-count

Number of DR-enabled datasets with a single distinct dr-state/dr-status combination.

Fields

cifs-acl

Information about the user and the permission with which the user can access the CIFS share.

Fields

destroy-member-request-info

Details specific to the 'destroy_member' request.

Fields

dp-job-info

A job is used to run one or more Data Protection or provisioning operations.

Fields

dp-job-overall-status

The overall status of the data protection/provisioning job. The possible values are "queued", "canceling", "canceled", "running", "running_with_failures", "partially_failed", "succeeded", and "failed". This field is derived from dp-job-state and dp-job-progress.
The current mappings are as follows:

 dp-job-state  dp-job-progress  dp-job-overall-status
 ============  ===============  =====================
 queued        *                queued
 running       in_progress      running
 running       success          running
 running       partial_success  running_with_failures
 running       failure          running_with_failures
 aborting      *                canceling
 aborted       *                canceled
 completed     partial_success  partially_failed
 completed     failure          failed
 completed     success          succeeded
 completed     in_progress      canceled
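The derivation described by this mapping can be sketched as a small Perl lookup (the subroutine name is illustrative):

```perl
# Derive dp-job-overall-status from dp-job-state and dp-job-progress,
# following the mapping table above.
sub overall_status {
    my ($state, $progress) = @_;
    return 'queued'    if $state eq 'queued';
    return 'canceling' if $state eq 'aborting';
    return 'canceled'  if $state eq 'aborted';
    if ($state eq 'running') {
        return ($progress eq 'partial_success' || $progress eq 'failure')
            ? 'running_with_failures' : 'running';
    }
    if ($state eq 'completed') {
        return 'partially_failed' if $progress eq 'partial_success';
        return 'failed'           if $progress eq 'failure';
        return 'succeeded'        if $progress eq 'success';
        return 'canceled';        # completed while still in_progress: job was canceled
    }
    return undef; # unknown state
}

print overall_status('completed', 'partial_success'), "\n"; # partially_failed
```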

Fields

dp-job-progress

The progress of a data protection job. Valid values are "in_progress", "partial_success", "success" and "failure".
"in_progress" - No transfers have completed so far.
"success" - At least one transfer has succeeded and none have failed so far.
"failure" - At least one transfer has failed and none have succeeded so far.
"partial_success" - At least one transfer has failed and at least one has succeeded so far.
The progress of the data protection job follows this state diagram, depending on the success and failure of the transfers handled by the job:

     +------"in_progress"------+
     |                         |
     |                         |
  failure?                 success?
     |                         |
     v                         v
 "failure"                 "success"
     |                         |
     |                         |
  success?                 failure?
     |                         |
     +-->"partial_success"<----+
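The transitions above can be sketched as a progress update applied as each transfer finishes (a sketch; the subroutine name is illustrative):

```perl
# Advance dp-job-progress when one transfer completes with $ok (1) or fails (0),
# following the state diagram above.
sub next_progress {
    my ($progress, $ok) = @_;
    return $ok ? 'success' : 'failure' if $progress eq 'in_progress';
    return 'partial_success' if $progress eq 'success' && !$ok;
    return 'partial_success' if $progress eq 'failure' && $ok;
    return $progress; # partial_success is terminal; same-outcome transfers change nothing
}

my $p = 'in_progress';
$p = next_progress($p, 1); # -> success
$p = next_progress($p, 0); # -> partial_success
print "$p\n"; # partial_success
```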

Fields

dp-job-progress-event-info

One historical progress event for a data protection or provisioning job.

Fields

dp-job-state

The state of the data protection/provisioning job. The possible values are "queued", "running", "completed", "aborting", "aborted". This state is derived from completed-timestamp, abort-requested-timestamp and started-timestamp.

Fields

dp-job-type

The type of a data protection, provisioning, migration, space management, configuration, or Host Service related job.

Valid values for data protection jobs are "local_backup", "local_then_remote_backup", "remote_backup", "mirror", "on_demand_backup", "local_backup_confirmation", "restore", "create_relationship", "destroy_relationship", "failover", "backup_mount" and "backup_unmount".

Valid values for provisioning job types are "provision_member", "resize_member", "destroy_member", "delete_snapshots", "dedupe_member", "undedupe_member", "dedupe_volume" and "pm_re_export".

Valid space management job types are: "dedupe_volume", "migrate_volume", "resize_volume", "delete_snapshot", "delete_backup", "resize_lun" and "destroy_lun".

Valid migration job types are: "migrate_start", "migrate_complete", "migrate_cancel", "migrate_cleanup", "migrate_update".

Valid configuration job type is: "configuration".

Valid Host Service related jobs are: "hs_configure", "hs_discover", "hs_import_storage_config".

Fields

dp-local-snapshot-job-data

Data for a local-snapshot job. If this element is present, the job is responsible for taking a local snapshot of data on the storageset associated with the specified node.

Fields

dp-progress-backup-info

Information about the creation or deletion of a backup version.

Fields

dp-progress-failover-info

Information about a failover job.

Fields

dp-progress-job-abort-info

Information about the abort of a job.

Fields

dp-progress-relationship-info

Information about the creation or destruction of a SnapVault or SnapMirror relationship.

Fields

dp-progress-restore-info

Information about restore of a path.

Fields

dp-progress-snapmirror-info

Information about the start and end of a SnapMirror transfer.

Fields

dp-progress-snapshot-info

Information about the creation or deletion of a snapshot.

Fields

dp-progress-snapvault-info

Information about the start and end of a SnapVault transfer.

Fields

dp-scheduled-job

Describes a scheduled data protection job. Each job has a scheduled start time and protects a single dataset. The scheduler service is responsible for starting the job at its scheduled time. The type of job is inferred from which dp-*-job-data element is present. Only one of all possible dp-*-job-data elements can be present. A scheduled job does not have a job ID. When the scheduler service is about to start the job, it creates a job record in the persistent database, and at that time the job gets its ID.

Fields

dp-snapmirror-transfer-job-data

Data for a snapmirror-transfer job. If this element is present, the job is responsible for transferring data between the two storagesets associated with the specified connection using the SnapMirror protocol.

Fields

dp-snapvault-transfer-job-data

Data for a snapvault-transfer job. If this element is present, the job is responsible for transferring data between the two storagesets associated with the specified connection using the SnapVault protocol.

Fields

dp-timestamp

Seconds since 1/1/1970 in UTC. Range: [0..2^31-1]. This range runs out in January 2038, so update the API some time before then.
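Since a dp-timestamp is plain epoch seconds in UTC, it converts directly with Perl's gmtime:

```perl
use POSIX qw(strftime);

# Render a dp-timestamp as a readable UTC time.
my $dp_timestamp = 1234567890;
print strftime("%Y-%m-%d %H:%M:%S UTC\n", gmtime($dp_timestamp)); # 2009-02-13 23:31:30 UTC

# The upper end of the documented range:
print strftime("%Y-%m-%d\n", gmtime(2**31 - 1)); # 2038-01-19
```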

Fields

environment-variable-info

Information about an environment variable.

Fields

job-type

Job type.

Fields

progress-aggregate-info

Information about an aggregate.

Fields

progress-cifs-share-info

Information about the CIFS share created as part of provisioning operation.

Fields

progress-host-service-info

Information about the storage service.

Fields

progress-igroup-config-info

Information about initiators being added to or removed from an initiator group by a job.

Fields

progress-igroup-info

Information about the initiator group of a job.

Fields

progress-lun-info

Information about the lun being provisioned or destroyed in a provisioning job.

Fields

progress-lun-map-info

Information about the mapping of a LUN.

Fields

progress-nfsexport-info

Information about the NFS export created as part of provisioning operation.

Fields

progress-qtree-info

Information about a qtree which is created or destroyed in the provisioning job.

Fields

progress-quota-info

Information about the quota of a qtree.

Fields

progress-resource-pool-info

Information about the resource pool and its members.

Fields

progress-script-run-info

Information about the post provisioning script which is run.

Fields

progress-storage-service-info

Information about the storage service.

Fields

progress-storage-system-info

Information about the discovered storage system.

Fields

progress-vfiler-info

Information about creating a vFiler unit.

Fields

progress-vfiler-storage-info

Information about adding storage to a vFiler.

Fields

progress-volume-dedupe-info

Information about the deduplication operation on the volume.

Fields

progress-volume-info

Information about the volume on which a provisioning operation is executed.

Fields

progress-volume-option-info

Information about the volume options set.

Fields

provision-request-info

Information about a single provisioning request.

Fields

resource-pool-member

Information about the member of the resource pool.

Fields

volume-option

Information about an option and its value.

Fields

dataset-reference

The name and id of a dataset.

Fields

dp-ossv-directory-info

Information about one directory.

Fields

dp-ossv-directory-name

Name of one subdirectory. The name is relative to the parent directory ("directory-path" in the dp-ossv-directory-browse-iter-start API). If the parent directory was empty, then this will be an absolute path. The name will always end with a slash character appropriate to the host operating system of the OSSV agent, unless the path is a special directory which has no children.

Fields

dp-ossv-directory-root-info

Information about one filesystem root.

Fields

ossv-application-info

Application related information.

Fields

ossv-application-restore-destination-info

OSSV host related information.

Fields

ossv-host-name-or-id

Name or ID of a host. IP addresses are also accepted.

Fields

dp-policy-connection-info

Information about a connection from one node to another in a DP policy. The connection's properties are represented by optional elements. The rules for when a property is present or absent are defined in the property's Description, and depend on the type of connection.

Fields

dp-policy-content

All content of a single policy, including its name, description, topology (nodes and connections), and the properties for each node and connection.

Fields

dp-policy-info

Contains all information about a single DP policy, including its content and metadata such as its ID.

Fields

dp-policy-node-info

Information about a node in a DP policy. The node's properties are represented by optional elements. The rules for when a property is present or absent are defined in the property's Description, and depend on the type of node.

There is a set of properties associated with nodes that determine how long the system retains backups. Each backup falls into one of these four retention classes: hourly, daily, weekly, or monthly. The following eight properties determine how long backups are retained for each of the retention classes:

A backup expires when its age, in seconds, exceeds the retention duration set for its retention class on the node on which it is stored. After a backup expires, whenever the number of newer backups in its retention class at least equals the retention count for its class, then the expired backup is deleted.

All eight of the retention properties are present only on the root node and on backup secondary nodes. On the root node, the retention properties apply to local snapshots created on the root node. On backup secondary nodes, the retention properties apply to backups of primary node data that are stored on the backup secondary node. The retention properties are absent from mirror destination nodes because a mirror destination retains the same set of backups as its mirror source.
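The expiry and deletion rule above can be sketched in Perl (the subroutine and field names are illustrative; the duration and count come from the node's retention properties):

```perl
# A backup expires when its age exceeds the retention duration for its class;
# an expired backup is deleted once at least retention-count newer backups of
# the same class exist, per the rules above.
sub should_delete {
    my ($backup, $now, $retention_duration, $retention_count, $newer_in_class) = @_;
    my $age = $now - $backup->{created};
    return 0 if $age <= $retention_duration;    # not yet expired
    return $newer_in_class >= $retention_count; # expired, and enough newer backups exist
}

my $backup = { created => 1000 };
# Age 1000s exceeds a 500s duration, and 3 newer backups >= count of 2:
print should_delete($backup, 2000, 500, 2, 3) ? "delete\n" : "keep\n"; # delete
```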

Fields

dp-policy-node-name

Name of a node in a DP policy graph. This typedef is an alias for the builtin ZAPI type string. A node name may contain from 1 to 64 characters. It may start with any character and may contain any combination of characters, except that it may not consist solely of decimal digits ('0' through '9').

The name of each node in a DP policy must be unique, but nodes in different policies may share the same name. Node names are always compared in a case-insensitive manner. This means that, for example, "a" and "A" are considered to be the same name for purposes of: creating new nodes for a new policy, renaming nodes of an existing policy, or looking up a policy's nodes by name. On the other hand, ZAPIs that return node names do not change the capitalization at all. For example, if a node's name has been set to "Backup", then dp-policy-list-iter-next returns its name as "Backup".

Note that a node name has the same format as an obj-name, and has similar properties such as case-insensitivity. However, a node name is not a kind of obj-name because a DP policy node is a part of a DP policy object, but is not itself a first-class DFM object.

Fields

dp-relationship-info

Information about a single SnapVault or SnapMirror relationship. Currently, there are three kinds of relationships: SnapVault, Qtree SnapMirror (QSM) and Volume SnapMirror (VSM).

Fields

operational-status

Status of the relationship. Possible values are "idle", "transferring", "pending", "aborting", "migrating", "quiescing", "resyncing", "waiting", "syncing", "in_sync" or "paused".

Fields

relationship-endpoint-type

Type of an object at either the source or destination of a data protection relationship. Values for source objects are "ossv_directory", "volume" or "qtree". Values for destination objects are "volume" or "qtree".

Fields

relationship-state

State of the relationship. Possible values are "uninitialized", "snapvaulted", "snapmirrored", "broken_off", "quiesced", "source", "unknown" or "restoring"

Fields

relationship-type

Type of a data protection relationship. Legal values are "snapvault", "qtree_snapmirror" and "volume_snapmirror".

Fields

dp-path-info

Information about a path inside a dataset.

Fields

dp-recover-member-info

Information about a member inside a dataset.

Fields

dp-vi-hyperv-restore-settings

Hyper-V specific restore settings.

Fields

dp-vi-object-restore-settings

Various settings for a virtual object to be restored.

Fields

dp-vi-restore-object

A list of virtual objects to be restored, and per object settings for this restore operation.

Fields

dp-vi-restore-settings

Various settings for the operation to restore the virtualization objects.

Fields

dp-vi-vmware-object-restore-settings

Various settings for a VMWare object to be restored.

Fields

dp-vi-vmware-restore-settings

VMware specific restore settings.

Fields

overwrite-result

Information about the member and the destination.

Fields

space-status-result

Information about the destination volume which does not have enough space for the restore.

Fields

day-info

The attributes of a day list

Fields

throttle-assignee

Description of a DFM object using the DP throttle.

Fields

throttle-content

Attributes of a throttle schedule

Fields

throttle-item-info

The attributes of a throttle item

Fields

event-action-info

Result of action taken on event. Timestamp returned on success, and error code on failure.

Fields

event-application-type

Denotes the kind of application the event is for.

Possible values: 'monitoring', 'data_protection' , 'performance' , 'performance_diagnosis'

Fields

event-id-type

Event identifier. Range: [1..2^32-1]

Fields

event-info

Event information structure

Fields

event-timestamp-range

Range of event timestamps.

Fields

event-type-filter

Array of event filters.

Fields

event-class-info

Information about an event class.

Fields

event-class-object

Custom event class.

Fields

event-name-info

Information about an event name.

Fields

target-info

Information about one target.

Fields

graph-line-value-info

The sample values of a line in a graph.

Fields

group-info

Information about a single group.

Fields

group-name-or-id

Name or ID of a group. If a group name is specified, it must be fully qualified.

Fields

group-option-info

null

Fields

object-name-or-id

Name or ID of an object.

Fields

group

Group name or identifier.

Fields

group-member-attribute

Additional named attributes of the DFM object. Current attributes are: 'OS Version', 'OS Revision', and 'Primary Address'.

Fields

group-member-info

null

Fields

access-object-description

Description of an access object. An access object can be a role, usergroup, local user, or a domain user. The description of an access object can include any alphanumeric character, a space, or a punctuation character other than : (colon). Maximum Length: 128 characters.

Fields

access-object-id

Identification number (ID) for an access object. An access object can be a role, usergroup, local user or a domain user. The ID for an access object is always assigned by the DFM system. This typedef is an alias for the built-in type integer. Access object IDs are unsigned integers in the range [1..2^31-1].

Fields

access-object-name

Name of an access object. An access object can be a role, usergroup, local user, or a domain user. The rules defined here are not applicable for domain users and usergroups. An access object can contain between 1 and 32 characters and include any alphanumeric character, a space, or a punctuation character that is not one of:
" * + , / : ; < = > ? [ ] |
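A Perl regex sketch of this naming rule (the subroutine name is illustrative, and the check is a simplification that only enforces the length limit and the excluded character set above):

```perl
# Validate an access-object-name: 1-32 characters, none of which may be
# one of  " * + , / : ; < = > ? [ ] |
sub valid_access_object_name {
    my ($name) = @_;
    return 0 unless length($name) >= 1 && length($name) <= 32;
    return $name !~ /["*+,\/:;<=>?\[\]|]/;
}

print valid_access_object_name('backup admin') ? "valid\n" : "invalid\n"; # valid
print valid_access_object_name('bad:name')     ? "valid\n" : "invalid\n"; # invalid
```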

Fields

access-object-name-or-id

Name or id of an access object. An access object can be a role, usergroup, local user, or a domain user. This must conform to the format of either access-object-name or access-object-id.

Fields

capability

Name of a capability on the host. This must conform to one of the following formats: Here, instead of *, commands and sub-commands can be specified directly. Maximum Length: 64 characters

Fields

domainuser-name

Name of the user in domain\username format (for example, NETAPP\rohan). Maximum Length: 288 characters (the domain name can contain up to 255 characters, and the username can contain up to 32 characters).

Fields

domainuser-name-or-id-or-sid

Name or id or the SID of domain user. This must conform to one of the following:

Fields

full-ndmp-credentials

A full set of NDMP credentials with no empty fields. This is a specific requirement for this ZAPI, since both ndmp-username and ndmp-password are required in order to add an OSSV host.

Fields

host-defaults

The default values for the attributes defined by the host.

Fields

host-domainuser-info

Describes the contents of a domain user on the host. The host can be a Storage System or a vFiler unit. Output will always contain all the elements present in the type definition.

Fields

host-dp-filter-info

Data Protection specific information for this iterator. Default is false.

Fields

host-dp-info

Data Protection specific information for this iterator.

Fields

host-dp-modify-info

Data protection specific host information.

Fields

host-id

DFM host identifier.

Range: [1..2^31-1]

Fields

host-info

Host's information.

Fields

host-modify-info

Host information.

Fields

host-name-or-id

DFM host. A host can be a storage system, a vFiler unit, a switch, or an agent. The value can be a DFM object name (maximum 255 characters), a fully qualified domain name (FQDN) (maximum 255 characters), an IP address, or the DFM ID [1..2^31-1].

Fields

host-password

Password for logging in. Encrypted using 2-way encryption.

Length: [0..64]

Fields

host-perf-info

Performance-specific information for this iterator.

Fields

host-role-id

ID of the role on the host.

The ID for a host role is always assigned by the DFM system. Range: [1..2^31-1].

Fields

host-role-info

Describes the contents of a role on the host. The host can be a storage system or a vFiler unit. Output will always contain all the elements present in the type definition.

Fields

host-role-name

Name of the role on the host.

A host role name contains between 1 and 32 characters and can include any alphanumeric character, a space, or a punctuation character that is not one of:
" * + , / : ; < = > ? [ ] |

Fields

host-type

Type of host. Possible values are: filer, vfiler, cluster, cluster-controller, vserver, agent, ossv, and switch.

Fields

host-user-info

Describes the contents of a local user on the host. The host type can be a Storage System or a vFiler unit. Minimum password age, Maximum password age, and status are applicable for hosts running ONTAP versions 7.1 and above. Apart from these fields, output will always contain all the elements present in the type definition.

Fields

host-user-password

Password of the local user on the host. Encrypted using standard 2-way encryption. This must conform to the rules found in options "security.passwd.rules". By default, the password can contain 8 to 256 characters.

Fields

host-usergroup-id

ID of the usergroup on the host.

The ID for a host usergroup is always assigned by the DFM system. Range: [1..2^31-1].

Fields

host-usergroup-info

Describes the contents of a usergroup on the host. The host can be a storage system or a vFiler. Output will always contain all the elements present in the type definition.

Fields

host-usergroup-name

Name of the usergroup on the host.

A host usergroup name contains between 1 and 32 characters and can include any alphanumeric character, a space, or a punctuation character that is not one of:
" * + , / : ; < = > ? [ ] |

Fields

host-usergroup-name-or-id

Name or id of usergroup. This must conform to the format of either usergroup-name or access-object-id.

Fields

ip-port-number

IP port number.

Range: [0..2^16-1]

Fields

license

Name of the licensed Data ONTAP service.
Possible values: "nfs", "cifs", "iscsi", "fcp", "multistore", "a_sis", "snapmirror_sync".

Fields

ndmp-credentials

NDMP credentials for a host or a network. Valid only for storage systems and OSSV agents.

Fields

ndmp-username

Name of NDMP user.

Length: [1..32]

Fields

perf-advisor-transport

The transport setting for communicating to the host for collecting performance data. Data collection is disabled when this option is set to Disabled.

Possible values: "http_only", "https_ok", "disabled"

Fields

service-status

Status indicating whether a service is up or down.

Fields

sid

SID (Security Identifier) describing a user. Length: [5..128] characters. Format: S-1-5-21-int-int-int-rid. The RID is a unique random integer generated by the storage system or vFiler unit.
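The documented format can be checked with a Perl regex sketch (the subroutine name is illustrative):

```perl
# Match the documented SID layout: S-1-5-21-int-int-int-rid,
# within the documented 5..128 character length range.
sub looks_like_sid {
    my ($sid) = @_;
    return length($sid) >= 5 && length($sid) <= 128
        && $sid =~ /^S-1-5-21-\d+-\d+-\d+-\d+$/;
}

print looks_like_sid('S-1-5-21-1004336348-1177238915-682003330-512') ? "ok\n" : "no\n"; # ok
```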

Fields

usergroup-name

Name of a usergroup on storage system or vFiler unit. Usergroup name can contain between 1 and 256 characters and include any alphanumeric character, a space, or a punctuation character that is not one of:
" * + , / : ; < = > ? [ ] |

Fields

usergroup-name-or-id

Name or id of usergroup on storage system or vFiler unit. This must conform to the format of either usergroup-name or access-object-id.

Fields

vfiler-info

Information about a vFiler unit. Available only if host-type is "vfiler".

Fields

vfiler-ipspace

Ipspace of vFiler unit.

Fields

vfiler-migration-info

Migration information for a vFiler unit.

Fields

vfiler-network-resource

Information about one vFiler unit IP address.

Fields

host-service-info

Information about a host service.

Fields

host-service-package-info

Information about host service package.

Fields

host-service-plugin-info

null

Fields

storage-system-configuration

Storage system configuration.

Fields

storage-system-info

Information about a storage system.

Fields

ifc-info

Information about one interface.

Fields

ldap-server

Information about one LDAP server: contains the IP address and port number, which uniquely identify the LDAP server in DFM.

Fields

ldap-server-info

Information of one LDAP server.

Fields

igroup-reference

Name and ID of an igroup.

Fields

lun-info

Information about a lun.

Fields

migration-dry-run-result

Dry-run results of the migration of each individual volume.

Fields

migration-job-info

Job information of volume migration job.

Fields

migration-request-info

Details of a volume migration request.

Fields

route-migration-mode

Type of migrating routes.

Possible values are: 'static', 'persistent', 'none'.

Fields

volume-aggregate-pair

A volume and aggregate pair.

Fields

netif-ip-interface-info

Information about one network interface.

Fields

network-id

Unique id representing network in DFM. Range: [0..2^31-1]

Fields

network-info

Information of one network.

Fields

cifs-op-stats

Contains the CIFS operations performed by the client on the storage system.

Fields

client-stat-error

Describes an error encountered while collecting per-client statistics on the storage system or vFiler unit.

Fields

client-stats

Contains the per-protocol operations performed by a client on a storage system.

Fields

counter-threshold-info

Specifies all the attributes of a threshold on a single counter.

Fields

counter-unit

A unit in which a counter is measured. Possible values are per_sec, b_per_sec (bytes/s), kb_per_sec (kb/s), mb_per_sec (mb/s), percent, millisec (milliseconds), microsec (microseconds), sec (seconds) and none.

Fields

day

Day of week. Possible values : 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday' or 'sunday'

Fields

destination-host-info

null

Fields

dynamic-data

Contains data for top-N objects.

Fields

host-client-stats

Contains the per-client operations on a host.

Fields

instance-counter-info

A specification for returning data for the given instances and counters.

Fields

invalid-counter-instance

A pair consisting of a performance counter and an object instance.

Fields

metric

Contains the computation performed and the value.

Fields

month

Month of year. Possible values : 'january', 'february', 'march', 'april', 'may', 'june', 'july', 'august', 'september', 'october', 'november' or 'december'

Fields

nfs-op-stats

Contains the NFS operations performed by the client on the storage system.

Fields

object-info

The associated object's information.

Fields

perf-assoc-obj-type

Name of the object type. The possible values are

Fields

perf-chart

Information about a chart. A chart is a display (graph) with one or more data sources (lines).

Fields

perf-counter

Describes a performance counter (a measurable quantity on a performance object). This might be, for example, the number of writes per second on a specific volume.

Fields

perf-counter-data

Value of the performance counter.

Fields

perf-counter-group

A counter group. A counter group is a collection of measurable data sources coupled with an associated sampling rate and sample history.

Fields

perf-counter-instance

Defines a perf counter for a specific instance.

Fields

perf-data-group

Definition of a set of data sources and a consolidation method.

Fields

perf-data-source

A unique identifier for a source of data. When the optional instance-name field is not specified, perf-data-source represents all relevant instances in a storage system or in a group.

Fields

perf-dependent-counter-info

Describes a performance counter and its dependent counters list, for example system:load_inbound_mbps is dependent on system:net_data_recv, fcp:fcp_write_data, iscsi:iscsi_write_data counters.

Fields

perf-diag-category

Diagnosis Category.

Fields

perf-dynamic-data-sources

A specification for selecting top-n instances for a single counter. Data for each instance is drawn on the chart as one line. e.g. top 5 busy storage systems in a group.

Fields

perf-health-check

A check which if violated could affect the performance of the given object.

Fields

perf-instance

Describes an instance. An instance is a manifestation of a performance object. For example, the performance object might be "VOLUME" whereas the instance might be "vol0".

Fields

perf-instance-counter-data

Array of counter values of an instance.

Fields

perf-label

When a counter has more than one value (an array), the labels describe the individual array entries and define an implicit indexing scheme into the array. The array may be multi-dimensional, in which case each occurrence of a perf-label represents a dimension in the array of labels.

Fields

perf-line

A line of data on the chart. Often a line has a single data source (e.g. CPU percent busy), but multiple sources can be combined (e.g. average CPU busy over 10 storage systems).

Fields

perf-missing-host-object-counter

Counters not available with a particular host.

Fields

perf-object

A performance object. A performance object is a (somewhat abstract) representation of a class of measurable items. These are roughly akin to system subcomponents or protocols, for example "VOLUME", "DISK" or "NFS". These are broad classes and do not represent specific instances.

Fields

perf-object-counter

Performance Counters can be a combination of an object and a counter.

Fields

perf-object-counter-info

Performance Counters can be a combination of an object and a counter.

Fields

perf-purged-host-object-counter

Lists the affected counter details for a particular host.

Fields

perf-purged-object-counters-details

Lists the affected counters for each object along with retention period details.

Fields

perf-report

The description of a performance report.

Fields

perf-server-status

Describes the status of perf server.

Fields

perf-status-error

Possible reasons for data unavailability.

Fields

perf-threshold

The description of a performance threshold.

Fields

perf-view

A performance view.

Fields

resource-property-condition

Specifies conditions on resource properties. Conditions on the same property are treated as either-or conditions. For example, if the conditions "Disk RPM=7200" and "Disk RPM=10000" are specified, the threshold applies to all disks with speed 7200 RPM or 10000 RPM. Conditions on different properties are treated as "and" conditions. For example, if the conditions name="Disk RPM", value="10000" and name="Filer Model", value="FAS270" are specified, the threshold applies to all disks with speed 10000 RPM on FAS270 storage systems. During template modification, if no resource properties are specified, existing resource properties, if any, will be cleared.

Fields
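
The either-or/and semantics described above can be sketched in plain Perl. This is an illustration of the matching rules only, not an SDK call; the matches() helper and its data layout are hypothetical:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper: conditions with the same property name are OR'ed
# together; conditions on different property names are AND'ed.
sub matches {
    my ($resource, @conditions) = @_;
    my %values_by_name;
    push @{ $values_by_name{ $_->{name} } }, $_->{value} for @conditions;
    for my $name (keys %values_by_name) {
        my $actual = $resource->{$name};
        return 0 unless defined $actual
            && grep { $_ eq $actual } @{ $values_by_name{$name} };
    }
    return 1;
}

# "Disk RPM=7200" OR "Disk RPM=10000", AND "Filer Model=FAS270"
my @conditions = (
    { name => 'Disk RPM',    value => '7200'   },
    { name => 'Disk RPM',    value => '10000'  },
    { name => 'Filer Model', value => 'FAS270' },
);
my %disk = ( 'Disk RPM' => '10000', 'Filer Model' => 'FAS270' );
print matches(\%disk, @conditions)
    ? "threshold applies\n" : "threshold does not apply\n";
```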

stat-id

The id, in the DFM database, of a collection of per-client statistics. Range: [ 1 .. 2^31 - 1 ]

Fields

threshold-id

A unique numeric identifier used to specify a threshold. This parameter is mandatory when modifying a threshold. This typedef is an alias for the builtin ZAPI type integer. Range: [1..2^31-1]

Fields

threshold-info

Defines all the attributes of a threshold.

Fields

threshold-info2

Defines a performance counter based threshold.

Fields

threshold-template

The description of a threshold template.

Fields

threshold-template-info

Contains info on performance threshold templates

Fields

time-filter

Filter data based on selected time range.

Fields

time-range

Holds the 'from' and 'to' range as minutes in a day. For example, 9:25 AM to 4:45 PM becomes from=565 and to=1005.

Fields
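
A quick sketch of the minutes-in-a-day encoding used by the 'from' and 'to' fields (plain Perl arithmetic, not an SDK call):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Convert a 24-hour wall-clock time to minutes since midnight,
# the encoding used by the time-range 'from' and 'to' fields.
sub to_minutes_in_day {
    my ($hour, $minute) = @_;
    return $hour * 60 + $minute;
}

my $from = to_minutes_in_day(9, 25);    # 9:25 AM  -> 565
my $to   = to_minutes_in_day(16, 45);   # 4:45 PM  -> 1005
print "from=$from to=$to\n";            # from=565 to=1005
```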

view-type

Type of the view. Possible values are:

Fields

advanced-option-info

A set of advanced options to be set on the provisioned storage containers.

Fields

custom-provisioning-script-settings

Post provisioning script settings, applicable only for "nas" or "san" type of provisioning policies.

Fields

dataset-member-used-space-thresholds

Full and nearly full thresholds for generating events on used space of dataset members.

Fields

nas-container-settings

Details of capacity settings like space guarantees, quotas, out of space actions when provisioning storage for NAS access.

Fields

provisioning-policy-info

Contains all information about a single provisioning policy.

Fields

provisioning-policy-type

Type of provisioning policy. Possible values are "san", "nas", and "secondary".

Fields

san-container-settings

Space settings when provisioning storage for SAN access; these specify how space is allocated to the various components in a SAN environment.

Fields

storage-container-type

Storage container type to provision as dataset members when provisioning storage for SAN or NAS access. Possible values are "lun", "volume", and "qtree".
If the value is "lun", Provisioning Manager provisions LUNs into the dataset and maps them to the specified hosts by creating igroups. This is applicable only when provisioning for SAN access.
If the value is "volume", Provisioning Manager creates FlexVol volumes; the LUNs are then created using external tools such as SnapDrive, or, in the case of NAS, Provisioning Manager exports the volume to be accessed by clients.
If the value is "qtree", Provisioning Manager provisions a qtree for each provisioning operation in the case of NAS access.

Fields

storage-reliability

NetApp storage systems offer a wide range of storage availability features that provide protection against various failures, such as disk drive failures, shelf failures, controller failures, or site failures. The required level of reliability for the dataset can be specified in the provisioning policy.

Fields

qtree-info

Information about a Qtree.

Fields

qtree-size-info

Sizes of various parameters of a qtree. File counts are simple counts.

Fields

admin-info

Details of an administrator.

Fields

aggregate-name-or-id

Details of an aggregate name or id. When used as input only one of aggregate-name or aggregate-id is specified. When used as output, both of them will be returned.

Fields

aggregate-resource

Details of an aggregate resource. aggregate-resource-name-or-id must be specified. If aggregate-name is specified, then filer-identifier must also be specified.

Fields

dataset-resource

Details of a DFM dataset resource. DFM dataset name or object id of a DFM dataset. When used as input only one of dataset-name or dataset-id is specified. When used as output, both of them will be returned.

Fields

filer-resource

Details of a storage system resource. When used as input only one of filer-name or filer-id is specified. When used as output, both of them will be returned.

Fields

group-resource

Details of a DFM group resource. DFM group name or object id of a DFM group. When used as input only one of group-name or group-id is specified. When used as output, both of them will be returned.

Fields

host-resource

Identifies a host resource. When used as input, only one of host-name or host-id is specified. When used as output, both of them will be returned.

Fields

lun-name-or-id

Details of a LUN name or id. When used as input, only one of lun-name or lun-id is specified. When used as output, both will be returned. If a lun-name is specified, then either volume-identifier or host-identifier must also be specified.

Fields

lun-resource

Details of a LUN. lun-identifier-name-or-id must be specified. If lun-name is specified, then either volume-identifier or host-identifier must also be specified. See the description for lun-name for more information.

Fields

policy-resource

Identifies a policy resource. When used as input one or more of policy-name or policy-id is specified. When used as output, both of them will be returned. Policy can refer to either a protection policy, a provisioning policy or an application policy.

Fields

qtree-name-or-id

A qtree name or id. When used as input only one of qtree-name or qtree-id is specified. When used as output, both of them will be returned. If qtree-name is specified, then either host-identifier or volume-identifier must also be specified but not both.

Fields

qtree-resource

Details of a qtree. qtree-identifier-name-or-id must be specified. If qtree-name is specified, then either volume-identifier or host-identifier must also be specified. See the description for qtree-name for more information.

Fields

rbac-admin-name-or-id

An admin name or object id.

Fields

rbac-admin-or-usergroup

An admin or usergroup. When used as an input element, specify only one of admin-or-usergroup-name or admin-or-usergroup-id (not both). When used as an output element, both of them are returned.

Fields

rbac-operation

An operation.

Fields

rbac-operation-name-details

More details of an operation.

Fields

rbac-resource-operation

Operation assigned to a given resource.

Fields

rbac-role-resource

Identifies an RBAC role resource. When used as input, only one of rbac-role-name or rbac-role-id is specified. When used as output, both of them will be returned.

Fields

resource-identifier

Identifies a resource. Only one resource field must be set. i.e. one of resource-id, rbac-role, host, group, storage system, vfiler, aggregate, volume, resource-pool, dataset, qtree, protection policy, provisioning policy, lun or vFiler template. When an object id is specified, it refers to the object id field in the objects table from the DFM database.

Fields

resource-pool-resource

Details of a DFM resource-pool resource. DFM resource-pool name or object id of a DFM resource-pool. When used as input only one of resource-pool-name or resource-pool-id is specified. When used as output, both of them will be returned.

Fields

role-attributes-identifier

The attributes of a role: role name and id, inherited roles, capabilities and operations.

Fields

storage-service-resource

Identifies a storage service resource. When used as input, only one of storage-service-name or storage-service-id is specified. When used as output, both of them will be returned.

Fields

vfiler-resource

Details of a vfiler resource. When used as input only one of vfiler-name-or-uuid or vfiler-id is specified. When used as output, both of them will be returned.

Fields

vfiler-template-resource

Identifies a vfiler template resource. When used as input only one of vfiler-template-name or vfiler-template-id is specified. When used as output, both of them will be returned.

Fields

volume-name-or-id

A volume name or id. When used as input, only one of volume-name or volume-id is specified. When used as output, both of them will be returned. If volume-name is specified, then exactly one of host-identifier, vfiler-identifier, or aggregate-identifier must also be specified.

Fields

volume-resource

Details of a volume.

Fields

category-list-info

Contains the details of the categories

Fields

graph-info

Describes the meta data of a graph

Fields

graph-line-info

Describes the meta data of a line in a graph.

Fields

report-application

Specifies the application to which a report belongs. This typedef is an alias for the built-in ZAPI type string. Possible values:

Fields

report-info

Describes the name of the report.

Fields

report-list-info

Contains the details of the reports.

Fields

report-output-format

The type of output format. This typedef is an alias for the built-in ZAPI type string. Possible values:

Fields

report-output-info

Describes the contents of an output.

Fields

report-provenance

Possible values: "custom" returns only custom and modified_canned reports; "canned" returns only canned reports. If not specified, all reports are returned.

Fields

report-schedule-content-info

Describes the contents of a report schedule.

Fields

report-schedule-info

Detailed contents of a report schedule.

Fields

dataset-space-info

Information about a dataset consuming space from the aggregate.

Fields

resource-tag

Label for the resource. Can be a maximum of 255 characters.

Fields

resourcepool-defaults

The default values of the attributes defined by this ZAPI.

Fields

resourcepool-info

Information about a resource pool.

Fields

resourcepool-member-info

Information about one member of a resource pool.

Fields

backup-info

Information of a dataset backup.

Fields

snapshot

Information of one Snapshot copy. Either unique-id or name of the Snapshot copy should be specified.

Fields

snapshot-dependency

Application dependent on this Snapshot copy. Possible values: "snapmirror", "snapvault", "dump", "volume_clone", "lun_clone", "snaplock", "acs".

Fields

snapshot-info

Information on one particular Snapshot copy.

Fields

srm-file-type-info

Information of one SRM file type.

Fields

resourcepool-name-or-id

Resource Pool. Value can be a DFM object name (maximum 255 characters) or the DFM id [1..2^31-1].

Fields

storage-service-dataset-info

null

Fields

storage-service-info

null

Fields

storage-service-node-attributes

null

Fields

storage-service-node-info

null

Fields

storage-set-info

null

Fields

timezone-defaults

The default time zone settings.

Fields

timezone-info

Information about a time zone.

Fields

user-report-info

Report count of users.

Fields

ip-binding-info

vFiler IP Address to interface binding information.

Fields

protocol

Name of the protocol. Possible values: "nfs", "cifs", "iscsi".

Fields

vfiler-template-info

Information about a vFiler Template.

Fields

datacenter

Information about a Data Center.

Fields

datacenter-reference

Name and object identifier of the datacenter object.

Fields

datastore

Information about a Datastore.

Fields

datastore-reference

Name and object identifier of the datastore object.

Fields

hypervisor-info

Information about a Hypervisor.

Fields

hypervisor-reference

Information about a hypervisor.

Fields

virtual-center

Virtual Center information.

Fields

virtual-center-reference

Name and object identifier of the virtual center object.

Fields

virtual-disk

Information about Virtual Disk.

Fields

virtual-infrastructure-type

Type of virtual infrastructure. The possible values are:

Fields

virtual-machine

Information about a Virtual Machine.

Fields

virtual-machine-reference

Name and object identifier of the virtual machine object.

Fields

migration-incompatibility-reason

A reason the volume is not capable of migration.

Possible values are:

Fields

object-space-status

Space status of the object. This indicates the fullness of the object in terms of whether the percentage of used space with respect to total size of the object has reached the fullness thresholds. Possible values:

Fields

timestamp

Seconds since 1/1/1970 in UTC. Range: [0..2^31-1].

Fields
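
Because a timestamp is plain UTC epoch seconds, it can be rendered directly with Perl's built-in gmtime(). A sketch; the sample value is arbitrary:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A timestamp is seconds since 1/1/1970 UTC in [0 .. 2^31 - 1].
my $ts = 1356998400;    # arbitrary in-range sample: 2013-01-01 00:00:00 UTC
my ($sec, $min, $hour, $mday, $mon, $year) = gmtime($ts);
printf "%04d-%02d-%02d %02d:%02d:%02d UTC\n",
       $year + 1900, $mon + 1, $mday, $hour, $min, $sec;
```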

volume-dedupe-info

Volume deduplication information. Optional fields will not be returned if deduplication has never run on the volume.

Fields

volume-info

Information about a volume.

Fields

volume-qtree

Information on qtrees in the volume.

Fields

volume-size

Collected size information about a volume. Optional items will not be returned if DFM does not know the value.

Fields


COPYRIGHT

Copyright (c) 1994-2013 NetApp, Inc. All rights reserved.

The product described in this manual may be protected by one or more U.S.A. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013 (October 1988) and FAR 52.227-19 (June 1987).