# Cloud Hypervisor API
The Cloud Hypervisor API is made of 2 distinct interfaces:

1. **The external API.** This is the user-facing API. Users and operators can control and manage Cloud Hypervisor through various options including a REST API, a Command Line Interface (CLI) or a D-Bus based API, which is not compiled into Cloud Hypervisor by default.

1. **The internal API**, based on Rust's Multi-Producer, Single-Consumer (MPSC) module. This API is used internally by the Cloud Hypervisor threads to communicate with each other.
The goal of this document is to describe the Cloud Hypervisor API as a whole, and to outline how the internal and external APIs are architecturally related.
## External API

### REST API

The Cloud Hypervisor REST API triggers VM and VMM specific actions, and as such it is designed as a collection of RPC-style, static methods.

The API is OpenAPI 3.0 compliant. Please consult the Cloud Hypervisor OpenAPI Document for more details about the API payloads and responses.
The REST API, if enabled, is available as soon as the Cloud Hypervisor binary is started, through either a local UNIX socket given with the `--api-socket path=...` option, or a file descriptor passed with `--api-socket fd=...`:

```shell
$ ./target/debug/cloud-hypervisor --api-socket path=/tmp/cloud-hypervisor.sock
```
The Cloud Hypervisor API exposes the following actions through its endpoints:

#### Virtual Machine Monitor (VMM) Actions

| Action | Endpoint | Request Body | Response Body | Prerequisites |
|---|---|---|---|---|
| Check for the REST API availability | /vmm.ping | N/A | /schemas/VmmPingResponse | N/A |
| Shut the VMM down | /vmm.shutdown | N/A | N/A | The VMM is running |
#### Virtual Machine (VM) Actions

| Action | Endpoint | Request Body | Response Body | Prerequisites |
|---|---|---|---|---|
| Create the VM | /vm.create | /schemas/VmConfig | N/A | The VM is not created yet |
| Delete the VM | /vm.delete | N/A | N/A | N/A |
| Boot the VM | /vm.boot | N/A | N/A | The VM is created but not booted |
| Shut the VM down | /vm.shutdown | N/A | N/A | The VM is booted |
| Reboot the VM | /vm.reboot | N/A | N/A | The VM is booted |
| Trigger power button of the VM | /vm.power-button | N/A | N/A | The VM is booted |
| Pause the VM | /vm.pause | N/A | N/A | The VM is booted |
| Resume the VM | /vm.resume | N/A | N/A | The VM is paused |
| Take a snapshot of the VM | /vm.snapshot | /schemas/VmSnapshotConfig | N/A | The VM is paused |
| Perform a coredump of the VM* | /vm.coredump | /schemas/VmCoredumpData | N/A | The VM is paused |
| Restore the VM from a snapshot | /vm.restore | /schemas/RestoreConfig | N/A | The VM is created but not booted |
| Add/remove CPUs to/from the VM | /vm.resize | /schemas/VmResize | N/A | The VM is booted |
| Add/remove memory from the VM | /vm.resize | /schemas/VmResize | N/A | The VM is booted |
| Resize a disk attached to the VM | /vm.resize-disk | /schemas/VmResizeDisk | N/A | The VM is created |
| Add/remove memory from a zone | /vm.resize-zone | /schemas/VmResizeZone | N/A | The VM is booted |
| Dump the VM information | /vm.info | N/A | /schemas/VmInfo | The VM is created |
| Add VFIO PCI device to the VM | /vm.add-device | /schemas/VmAddDevice | /schemas/PciDeviceInfo | The VM is booted |
| Add disk device to the VM | /vm.add-disk | /schemas/DiskConfig | /schemas/PciDeviceInfo | The VM is booted |
| Add fs device to the VM | /vm.add-fs | /schemas/FsConfig | /schemas/PciDeviceInfo | The VM is booted |
| Add generic vhost-user device to the VM | /vm.add-generic-vhost-user | /schemas/GenericVhostUserConfig | /schemas/PciDeviceInfo | The VM is booted |
| Add pmem device to the VM | /vm.add-pmem | /schemas/PmemConfig | /schemas/PciDeviceInfo | The VM is booted |
| Add network device to the VM | /vm.add-net | /schemas/NetConfig | /schemas/PciDeviceInfo | The VM is booted |
| Add userspace PCI device to the VM | /vm.add-user-device | /schemas/VmAddUserDevice | /schemas/PciDeviceInfo | The VM is booted |
| Add vdpa device to the VM | /vm.add-vdpa | /schemas/VdpaConfig | /schemas/PciDeviceInfo | The VM is booted |
| Add vsock device to the VM | /vm.add-vsock | /schemas/VsockConfig | /schemas/PciDeviceInfo | The VM is booted |
| Remove device from the VM | /vm.remove-device | /schemas/VmRemoveDevice | N/A | The VM is booted |
| Dump the VM counters | /vm.counters | N/A | /schemas/VmCounters | The VM is booted |
| Inject an NMI | /vm.nmi | N/A | N/A | The VM is booted |
| Prepare to receive a migration | /vm.receive-migration | /schemas/ReceiveMigrationData | N/A | N/A |
| Start to send migration to target | /vm.send-migration | /schemas/SendMigrationData | N/A | The VM is booted and (shared mem or hugepages enabled) |
\* The `vm.coredump` action is available exclusively for the x86_64 architecture, and can be executed only when the `guest_debug` feature is enabled. Without this feature, the corresponding REST API or D-Bus API endpoints are not available.

#### REST API Examples

For the following set of examples, we assume Cloud Hypervisor is started with the REST API available at `/tmp/cloud-hypervisor.sock`:

```shell
$ ./target/debug/cloud-hypervisor --api-socket /tmp/cloud-hypervisor.sock
```
We want to create a virtual machine with the following characteristics:

* Four vCPUs
* Direct kernel boot from `/opt/clh/kernel/vmlinux-virtio-fs-virtio-iommu`, with the root filesystem on `/dev/vda1`
* A disk backed by the `/opt/clh/images/focal-server-cloudimg-amd64.raw` image
* An RNG device backed by `/dev/urandom`
* One virtio networking interface

```shell
#!/usr/bin/env bash
curl --unix-socket /tmp/cloud-hypervisor.sock -i \
     -X PUT 'http://localhost/api/v1/vm.create'  \
     -H 'Accept: application/json'               \
     -H 'Content-Type: application/json'         \
     -d '{
         "cpus":{"boot_vcpus": 4, "max_vcpus": 4},
         "payload":{"kernel":"/opt/clh/kernel/vmlinux-virtio-fs-virtio-iommu", "cmdline":"console=ttyS0 console=hvc0 root=/dev/vda1 rw"},
         "disks":[{"path":"/opt/clh/images/focal-server-cloudimg-amd64.raw"}],
         "rng":{"src":"/dev/urandom"},
         "net":[{"ip":"192.168.10.10", "mask":"255.255.255.0", "mac":"12:34:56:78:90:01"}]
     }'
```
Once the VM is created, we can boot it:

```shell
#!/usr/bin/env bash
curl --unix-socket /tmp/cloud-hypervisor.sock -i -X PUT 'http://localhost/api/v1/vm.boot'
```
We can fetch information about any VM as soon as it's created:

```shell
#!/usr/bin/env bash
curl --unix-socket /tmp/cloud-hypervisor.sock -i \
     -X GET 'http://localhost/api/v1/vm.info' \
     -H 'Accept: application/json'
```
We can reboot a VM that's already booted:

```shell
#!/usr/bin/env bash
curl --unix-socket /tmp/cloud-hypervisor.sock -i -X PUT 'http://localhost/api/v1/vm.reboot'
```
Once booted, we can shut a VM down from the REST API:

```shell
#!/usr/bin/env bash
curl --unix-socket /tmp/cloud-hypervisor.sock -i -X PUT 'http://localhost/api/v1/vm.shutdown'
```
### D-Bus API

Cloud Hypervisor offers a D-Bus API as an alternative to its REST API. This D-Bus API fully reflects the functionality of the REST API, exposing the same group of endpoints. It can be a drop-in replacement since it also consumes/produces JSON.
In addition, the D-Bus API also exposes events from event-monitor in the
form of a D-Bus signal to which users can subscribe. For more information,
see D-Bus API Interface.
This feature is not compiled into Cloud Hypervisor by default. Users who wish to use the D-Bus API must explicitly enable it with the `dbus_api` feature flag when compiling Cloud Hypervisor:

```shell
$ ./scripts/dev_cli.sh build --release --libc musl -- --features dbus_api
```
Once this feature is enabled, it can be configured with the following CLI options:

* `--dbus-service-name`: well-known name of the service
* `--dbus-object-path`: object path to serve the D-Bus interface
* `--dbus-system-bus`: use the system bus instead of a session bus
Example invocation:

```shell
$ ./cloud-hypervisor --dbus-service-name "org.cloudhypervisor.DBusApi1" \
    --dbus-object-path "/org/cloudhypervisor/DBusApi1"
```
This will start serving a service with the name `org.cloudhypervisor.DBusApi1`, which in turn can be used to control and manage Cloud Hypervisor.
Please refer to the REST API documentation for everything that is in common with the REST API. As previously mentioned, the D-Bus API can be used as a drop-in replacement for the REST API.
The D-Bus interface also exposes a signal, named `Event`, which is emitted whenever a new event is published from the `event-monitor` crate. Here is its definition in XML format:

```xml
<node>
  <interface name="org.cloudhypervisor.DBusApi1">
    <signal name="Event">
      <arg name="event" type="s"/>
    </signal>
  </interface>
</node>
```
### Command Line Interface (CLI)

The Cloud Hypervisor Command Line Interface (CLI) can only be used for launching the Cloud Hypervisor binary, i.e. it cannot be used for controlling the VMM or the launched VM once they're up and running.
If you want to inspect the VMM, or control the VM after launching Cloud Hypervisor from the CLI, you must use either the REST API or the D-Bus API.
From the CLI, one can configure the VM resources and devices to launch; run `cloud-hypervisor --help` for a complete list of CLI options. As soon as the `cloud-hypervisor` binary is launched, the REST API is available for controlling and managing the VM. The D-Bus API, on the contrary, does not start automatically and needs to be explicitly configured in order to run.

The REST API, D-Bus API and the CLI all rely on a common, internal API.
The CLI options are parsed by the `clap` crate and then translated into internal API commands.

The REST API is processed by an HTTP thread using Firecracker's `micro_http` crate. As with the CLI, the HTTP requests eventually get translated into internal API commands.

The D-Bus API is implemented using the `zbus` crate and runs in its own thread. Whenever it needs to call the internal API, the `blocking` crate is used to perform the call in zbus' async context.
As a summary, the REST API, the D-Bus API and the CLI are essentially frontends for the internal API:
```
                            +------------------+
                 REST API   |                  |
             +------------->+    micro_http    +--------+
             |              |                  |        |
             |              +------------------+        |
             |                                          |    +------------------------+
+--------+   |              +------------------+        |    |                        |
|        |   |  D-Bus API   |                  |        |    |   +----------------+   |
|  User  +---+------------->+       zbus       +--------+--->+   |  Internal API  |   |
|        |   |              |                  |        |    |   +----------------+   |
+--------+   |              +------------------+        |    |                        |
             |                                          |    +------------------------+
             |              +------------------+        |                          VMM
             |     CLI      |                  |        |
             +------------->+       clap       +--------+
                            |                  |
                            +------------------+
```
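The fan-in sketched above can be illustrated with plain `std::sync::mpsc` channels. This is a simplified sketch, not Cloud Hypervisor code: the frontend names and the `String` command type are illustrative stand-ins.

```rust
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    // One channel, many producers: each frontend thread holds a clone of
    // the Sender, while a single consumer owns the sole Receiver.
    let (api_sender, api_receiver) = channel::<String>();

    let handles: Vec<_> = ["rest-api", "dbus-api", "cli"]
        .into_iter()
        .map(|frontend| {
            let sender = api_sender.clone();
            thread::spawn(move || {
                sender.send(format!("command from {frontend}")).unwrap()
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    // Drop the original Sender so the Receiver sees the channel close
    // once all frontend clones are gone.
    drop(api_sender);

    let commands: Vec<String> = api_receiver.iter().collect();
    assert_eq!(commands.len(), 3);
    println!("received {} commands", commands.len());
}
```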
## Internal API

The Cloud Hypervisor internal API, as its name suggests, is used internally by the different Cloud Hypervisor threads (VMM, HTTP, D-Bus, control loop, etc.) to send commands and responses to each other.
It is based on Rust's Multi-Producer, Single-Consumer (MPSC) module, and the single consumer (a.k.a. the API receiver) is the Cloud Hypervisor control loop.
API producers are the HTTP thread handling the REST API, the D-Bus thread handling the D-Bus API and the main thread that initially parses the CLI.
The internal API is designed for controlling, managing and inspecting a Cloud Hypervisor VMM and its guest. It is a backend for handling external, user visible requests through the REST API, the D-Bus API or the CLI interfaces.
The API follows a command-response scheme that closely maps the REST API. Any command must be replied to with a response.
Commands are MPSC based messages and are received and processed by the VMM control loop.
In order for the VMM control loop to respond to any internal API command, it must be able to send a response back to the MPSC sender. For that purpose, all internal API command payloads carry the `Sender` end of an MPSC channel.

The sender of any internal API command is therefore responsible for:

1. Creating a response channel.
1. Passing the `Sender` end of the response channel as part of the command payload.
1. Waiting for the command's response on the `Receiver` end of the response channel.
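This command-response contract can be sketched with `std::sync::mpsc` alone. The `Request`/`Response` enums below are simplified stand-ins for the actual internal API types, not Cloud Hypervisor code:

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Simplified stand-ins for the internal API's request/response types.
enum Request {
    // Each command payload carries the Sender end of a response channel.
    Ping(Sender<Response>),
}

#[derive(Debug, PartialEq)]
enum Response {
    Pong,
}

fn main() {
    let (api_sender, api_receiver) = channel::<Request>();

    // The control loop: sole consumer of internal API commands.
    let control_loop = thread::spawn(move || {
        while let Ok(request) = api_receiver.recv() {
            match request {
                // Reply through the Sender carried in the command payload.
                Request::Ping(response_sender) => {
                    response_sender.send(Response::Pong).unwrap()
                }
            }
        }
    });

    // A producer (e.g. the HTTP thread): create a response channel, send
    // the command with the Sender end attached, then block on the reply.
    let (response_sender, response_receiver) = channel();
    api_sender.send(Request::Ping(response_sender)).unwrap();
    assert_eq!(response_receiver.recv().unwrap(), Response::Pong);

    // Closing the command channel lets the control loop exit.
    drop(api_sender);
    control_loop.join().unwrap();
}
```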
In order to further understand how the external and internal Cloud Hypervisor APIs work together, let's look at a complete VM creation flow, from the REST API call, to the reply the external user will receive:
```shell
#!/usr/bin/env bash
curl --unix-socket /tmp/cloud-hypervisor.sock -i \
     -X PUT 'http://localhost/api/v1/vm.create'  \
     -H 'Accept: application/json'               \
     -H 'Content-Type: application/json'         \
     -d '{
         "cpus":{"boot_vcpus": 4, "max_vcpus": 4},
         "payload":{"kernel":"/opt/clh/kernel/vmlinux-virtio-fs-virtio-iommu", "cmdline":"console=ttyS0 console=hvc0 root=/dev/vda1 rw"},
         "disks":[{"path":"/opt/clh/images/focal-server-cloudimg-amd64.raw"}],
         "rng":{"src":"/dev/urandom"},
         "net":[{"ip":"192.168.10.10", "mask":"255.255.255.0", "mac":"12:34:56:78:90:01"}]
     }'
```
The HTTP thread processes the request and deserializes the JSON body into a `VmConfig` structure. It then creates a `VmCreate` internal API command from the `VmConfig` structure and the response channel:

```rust
VmCreate(Arc<Mutex<VmConfig>>, Sender<ApiResponse>)
```
```rust
// Send the VM creation request.
api_sender
    .send(ApiRequest::VmCreate(config, response_sender))
    .map_err(ApiError::RequestSend)?;
api_evt.write(1).map_err(ApiError::EventFdWrite)?;

response_receiver.recv().map_err(ApiError::ResponseRecv)??;
```
```rust
// Read from the API receiver channel
let api_request = api_receiver.recv().map_err(Error::ApiRequestRecv)?;
```
The control loop matches on the `VmCreate` payload, and extracts both the `VmConfig` structure and the `Sender` from the command payload. It stores the `VmConfig` structure and replies back to the sender (the HTTP thread):
```rust
match api_request {
    ApiRequest::VmCreate(config, sender) => {
        // We only store the passed VM config.
        // The VM will be created when being asked to boot it.
        let response = if self.vm_config.is_none() {
            self.vm_config = Some(config);
            Ok(ApiResponsePayload::Empty)
        } else {
            Err(ApiError::VmAlreadyCreated)
        };

        sender.send(response).map_err(Error::ApiResponseSend)?;
    }
```
The `VmCreate` HTTP handler receives the control loop's reply on its response channel. Depending on the control loop internal API response, it generates the appropriate HTTP response:

```rust
// Call vm_create()
match vm_create(api_notifier, api_sender, Arc::new(Mutex::new(vm_config)))
    .map_err(HttpError::VmCreate)
{
    Ok(_) => Response::new(Version::Http11, StatusCode::NoContent),
    Err(e) => error_response(e, StatusCode::InternalServerError),
}
```