Frequently asked questions: how do I...

test my code?

Although it's not enforced by the interface, plugin implementations should avoid interacting with the toolstack so that they can be easily tested in isolation. The OCaml and Python generated code includes a convenient command-line parser, so if you write:

import xapi.volume

class Implementation(xapi.volume.SR_skeleton):
    pass

if __name__ == "__main__":
    cmd = xapi.volume.SR_commandline(Implementation())
    cmd.attach()

You'll be able to run the command like this:

$ ./SR.attach 
usage: SR.attach [-h] [-j] dbg uri
SR.attach: error: too few arguments

$ ./SR.attach -h
usage: SR.attach [-h] [-j] dbg uri

[attach uri]: attaches the SR to the local host. Once an SR is attached then
volumes may be manipulated.

positional arguments:
  dbg         Debug context from the caller
  uri         The Storage Repository URI

optional arguments:
  -h, --help  show this help message and exit
  -j, --json  Read json from stdin, print json to stdout
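
Because the implementation is an ordinary Python class, it can also be unit-tested directly, without starting xapi or any other toolstack component. Below is a minimal sketch using the standard unittest module; the attach(dbg, uri) signature and the idea of returning the URI as the attached SR handle are assumptions made for illustration, so check the generated xapi.volume code for the real signatures and result types.

import unittest

import xapi.volume

class Implementation(xapi.volume.SR_skeleton):
    def attach(self, dbg, uri):
        # Hypothetical behaviour: treat the URI itself as the attached SR handle.
        return uri

class TestAttach(unittest.TestCase):
    def test_attach_returns_handle(self):
        sr = Implementation()
        # Call the implementation directly; no toolstack is involved.
        self.assertEqual(sr.attach("test-dbg", "file:///tmp/test-sr"),
                         "file:///tmp/test-sr")

if __name__ == "__main__":
    unittest.main()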

report dynamic properties like space consumption?

Dynamic properties like space consumption, bandwidth or latency should be exposed as "datasources". The SR.stat function should return a list of URIs pointing at these in "xenostats" format. The toolstack will hook up these datasources to the xcp-rrdd daemon, which will record their history. XenAPI clients can then use the RRD API to fetch the data, draw graphs, etc.
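
For example, a plugin could advertise its datasources from SR.stat. The sketch below is illustrative only: the exact shape of the result (field names such as "datasources" and "free_space") and the datasource URI are assumptions, so check the generated xapi.volume code and the xenostats documentation for the real definitions.

import xapi.volume

class Implementation(xapi.volume.SR_skeleton):
    def stat(self, dbg, sr):
        return {
            "sr": sr,
            "name": "example-sr",
            "description": "An example SR",
            "free_space": 100 << 30,   # placeholder values: a real plugin
            "total_space": 200 << 30,  # would query the storage backend
            # URIs which the toolstack passes to xcp-rrdd for archiving:
            "datasources": ["xenostats:///dev/shm/example-sr"],
            # ...plus any other fields required by the generated result type.
        }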

expose backend-specific functions?

The SMAPIv3 is intended to be a generic API. Before extending the SMAPIv3 itself, first ask the question: would this make sense for 3 completely different storage stacks (e.g. consider Ceph, LVM over iSCSI and gfs2)? If the concept is actually general then propose an SMAPIv3 update via a pull request. If the concept is actually backend-specific then consider adding a new XenAPI extension for it instead, and name the API appropriately (e.g. "LVHD.foo").

call xapi?

Nothing in the interface prevents you from making RPC calls to xapi or other toolstack components; however, doing so will make it more difficult to test your component in isolation.

In the past, a common reason to call xapi was to store data in the xapi database, for example in the "sm-config" fields. This was unreliable because the xapi database does not share fate with the data on the storage medium: the database record and the on-disk state can drift apart, for example when an SR is forgotten and re-attached, or when the data is restored from a backup or reverted to a snapshot while the database entry is not.

It is strongly recommended to store all storage-related state on the storage medium itself. This ensures that the metadata has a "shared fate" with the data: if the data is restored from backup or reverted to a snapshot, then so is the metadata.
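
For example, per-SR configuration can be kept in a small file that lives alongside the data. The sketch below assumes a directory-backed SR; the file name and layout are purely illustrative.

import json
import os

def save_sr_metadata(sr_path, metadata):
    # Write the metadata next to the data so that backups, snapshots and
    # restores of the SR carry the metadata with them ("shared fate").
    path = os.path.join(sr_path, "sr-metadata.json")
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(metadata, f)
        f.flush()
        os.fsync(f.fileno())
    os.rename(tmp, path)  # atomic replace on POSIX filesystems

def load_sr_metadata(sr_path):
    with open(os.path.join(sr_path, "sr-metadata.json")) as f:
        return json.load(f)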

tie my cluster to the xapi Pool?

Ideally a storage cluster would be managed separately from a xapi pool, with its own configuration and monitoring interfaces. The storage cluster could be very large (consider Ceph-style scale-out) while the xapi pool is designed to remain within a rack.

In the past, a common reason to tie a storage cluster to the xapi pool was to piggyback on the xapi notions of a single pool master, HA and inter-host authenticated RPC mechanisms to co-ordinate activities such as vhd coalescing. If it is still necessary to tie a storage cluster to a xapi pool then the storage implementation should launch its own "pool monitor" service which could use the xapi pool APIs to track host membership and master status, as sketched below. Note: this might require adding new capabilities to xapi's pool APIs, but they should not be part of the storage API itself.
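
A minimal sketch of such a pool monitor, using the XenAPI Python bindings to poll the pool master and host membership; the polling interval and the reconciliation step are placeholders, and a real service would also need credentials and error handling.

import time

import XenAPI

def monitor_pool(poll_interval=30):
    session = XenAPI.xapi_local()
    session.xenapi.login_with_password("", "", "1.0", "storage-pool-monitor")
    try:
        while True:
            pool = session.xenapi.pool.get_all()[0]
            master = session.xenapi.pool.get_master(pool)
            hosts = session.xenapi.host.get_all()
            # Hypothetical hook: reconcile the storage cluster's own view of
            # membership and mastership with what xapi reports.
            print("master=%s members=%d"
                  % (session.xenapi.host.get_hostname(master), len(hosts)))
            time.sleep(poll_interval)
    finally:
        session.xenapi.session.logout()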

Note: if a particular storage implementation requires a particular HA cluster stack to be running, this can be declared in the Plugin.Query call.
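
A minimal sketch of declaring this, assuming the generated plugin bindings live in xapi.plugin and that the query result includes a "required_cluster_stack" field; the other field names and values below are placeholders, so check the generated code for the real result type.

import xapi.plugin

class Implementation(xapi.plugin.Plugin_skeleton):
    def query(self, dbg):
        return {
            "plugin": "example",
            "name": "Example storage plugin",
            "description": "Demonstrates declaring a required cluster stack",
            "vendor": "Example",
            "copyright": "Example",
            "version": "1.0",
            "required_api_version": "5.0",
            "features": [],
            "configuration": {},
            # Declare that this plugin needs corosync to be running:
            "required_cluster_stack": ["corosync"],
        }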