When a virtual machine looks at its disk, it sees what appears to be a real physical disk. This is an illusion: in reality, the bytes of data written to the "virtual disk" probably reside in a file in a filesystem or in a logical volume on some physical storage system that the VM cannot see.
We call the real physical storage system a Storage Repository (SR).
We call the virtual disks within the SR volumes.
When a VM is installed, a volume will be created. Typically this volume will be deleted when the VM is uninstalled. The Xapi toolstack doesn't know how to manipulate volumes on your storage system directly; instead it delegates to "Volume plugins": implementation-specific plugins which know how to speak the storage-specific APIs. These volume plugins can be anything from simple scripts in domain 0 to sophisticated services running somewhere on the network.
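As a sketch of this delegation (the operation and plugin names here are hypothetical, not the real plugin interface), a trivial volume plugin in domain 0 could be a script that dispatches an operation name to a storage-specific handler:

```shell
#!/bin/sh
# Hypothetical volume-plugin sketch: dispatch an operation name to a
# storage-specific handler. A real plugin implements a richer API
# (create, destroy, snapshot, ...) and returns structured results.

volume_create() {
    name="$1"; size="$2"
    # A real plugin would call the storage system here (e.g. lvcreate,
    # dd, or a remote API); this sketch just reports what it would do.
    echo "would create volume '$name' of size $size"
}

volume_destroy() {
    echo "would destroy volume '$1'"
}

dispatch() {
    case "$1" in
        Volume.create)  volume_create "$2" "$3" ;;
        Volume.destroy) volume_destroy "$2" ;;
        *) echo "unknown operation: $1" >&2; return 1 ;;
    esac
}
```

Xapi (or, in this sketch, any caller) would then invoke `dispatch Volume.create vm-disk-1 64GiB` and leave the storage-specific details to the handler.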
Consider for example a system using Linux LVM, where individual LVs are mapped to VMs as virtual disks. The volume plugin for LVM could implement the Volume.create API by simply calling:

lvcreate -n name -L 64GiB -Z n vg
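That lvcreate call can be wrapped in a small helper. In this sketch the volume-group name `vg` is an assumed placeholder, and the helper echoes the command instead of executing it so it can run on a machine without LVM:

```shell
#!/bin/sh
# Sketch: build the lvcreate invocation for a Volume.create call.
# -n: logical volume name; -L: size; -Z n: skip zeroing the first block.
# "vg" is a hypothetical volume-group name; echo only, do not execute.
lvm_volume_create() {
    name="$1"; size="$2"; vg="${3:-vg}"
    echo lvcreate -n "$name" -L "$size" -Z n "$vg"
}
```

A real plugin would execute the command (and check its exit status) rather than echoing it.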
Consider another example where volumes are simple sparse files stored on an NFS share. The volume plugin could implement the Volume.create API by simply calling:
dd if=/dev/zero of=disk.name bs=1 count=0 seek=64G
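The dd idiom works because `seek` extends the output file to the requested length without writing any data, producing a sparse file: the filesystem records the size but allocates almost no blocks. A runnable sketch, scaled down to 64 MiB (64G being the real-world figure in the text):

```shell
#!/bin/sh
# Create a sparse 64 MiB file: write zero blocks, but seek the output
# pointer past the desired size so the file length is extended.
dd if=/dev/zero of=disk.img bs=1 count=0 seek=64M 2>/dev/null

# The apparent size is the full 64 MiB...
stat -c 'apparent bytes: %s' disk.img
# ...but the number of allocated blocks stays near zero until the
# guest actually writes data.
stat -c 'allocated 512B blocks: %b' disk.img
```

This is why thin provisioning falls out almost for free on file-based SRs: space is only consumed as the VM writes.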
VMs running on the Xen hypervisor use special shared-memory protocols to access their disks and network interfaces. There are several implementations of these protocols, including the in-kernel blkback driver and userspace implementations such as qemu.
With so many implementations to choose from, which one should we use for a given volume? This decision, and how to configure the implementation for maximum performance, is the job of the Datapath plugin.
Every volume has one or more URIs, which describe how to access the data within the volume; the URI scheme identifies the access method (for example, a local block device or a file on an NFS share).
The Xapi toolstack takes the list of URIs provided by the Volume plugin and creates a connection between the VM and the disk. Xapi chooses a "Datapath plugin" based on the URI scheme. The Datapath plugin returns Xen-specific connection details, including choice of backend (kernel blkback or userspace qemu) and caching options.
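The scheme-based selection can be sketched as follows; the plugin names here are hypothetical, standing in for whatever Datapath plugins are registered:

```shell
#!/bin/sh
# Hypothetical datapath selection: the part of the URI before "://"
# is the scheme, which selects the plugin that will connect the VM
# to the volume.
datapath_plugin_for() {
    uri="$1"
    scheme="${uri%%://*}"
    case "$scheme" in
        file) echo "loopdev-plugin" ;;  # hypothetical: local file/block device
        nfs)  echo "nfs-plugin"     ;;  # hypothetical: file on an NFS share
        *)    echo "unknown scheme: $scheme" >&2; return 1 ;;
    esac
}
```

The chosen plugin, not this lookup, is then responsible for the Xen-specific details: backend choice, caching options, and so on.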