Project Stratos was our pathfinder project to establish VirtIO as the standard device interface across hypervisors, freeing a mobile, industrial or automotive platform to migrate between hypervisors and reuse its backend implementations. We helped upstream a number of VirtIO devices to the OASIS Specification, as well as front-end drivers for the Linux kernel. During the project we experimented with using Rust for backend implementations and eventually became maintainers of the Rust VMM vhost-device repository. The repository provides a number of vhost-user backends which we have run as backends for devices in guests under a variety of hypervisors (pure QEMU emulation, KVM, Xen and Gunyah).
We also spent time investigating the impact of virtualisation on networking, examining where in the stack delay and jitter are introduced and whether they can be mitigated by using Time Sensitive Networking (TSN).
While Project Stratos itself has come to an end, Linaro is still heavily invested in furthering VirtIO as a technology and will continue to work on it under the auspices of our other projects. We are keeping the project infrastructure up for reference, as well as providing a useful place to share patches for the various projects that want to enable VirtIO.
VirtIO Use Cases
When bringing up new processor and SoC platforms, being able to utilise VirtIO devices avoids the need to emulate existing (or invent toy) devices for basic networking and storage functionality. The same lightweight design also benefits emulation, as VirtIO's shared-memory virtqueues avoid costly MMIO emulation round-trips when setting up transactions.
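To illustrate why the transport is cheap, here is a minimal sketch of the split-virtqueue descriptor as defined in the OASIS VirtIO specification: once the shared rings are set up, the driver publishes buffers by writing 16-byte descriptors into guest memory and issuing a single notification, rather than trapping into the hypervisor for every register access.

```rust
// Sketch of the split-virtqueue descriptor from the OASIS VirtIO spec.
// The `VIRTQ_DESC_F_*` flag values below are taken from the spec; the
// `main` body is purely illustrative.
#[repr(C)]
struct VirtqDesc {
    addr: u64,  // guest-physical address of the buffer (little-endian)
    len: u32,   // length of the buffer in bytes
    flags: u16, // VIRTQ_DESC_F_* flags
    next: u16,  // index of the next descriptor when chaining
}

const VIRTQ_DESC_F_NEXT: u16 = 1;  // buffer continues via `next`
const VIRTQ_DESC_F_WRITE: u16 = 2; // device (not driver) writes this buffer

fn main() {
    // The spec fixes the descriptor at 16 bytes; one doorbell notification
    // can cover a whole chain of such descriptors.
    assert_eq!(std::mem::size_of::<VirtqDesc>(), 16);
    let desc = VirtqDesc {
        addr: 0x8000_0000,
        len: 4096,
        flags: VIRTQ_DESC_F_WRITE,
        next: 0,
    };
    let _ = VIRTQ_DESC_F_NEXT; // chaining flag shown for completeness
    println!("descriptor covers {} bytes at {:#x}", desc.len, desc.addr);
}
```

Because buffers are described in shared memory, the only per-transaction trap is the notification itself, which is what makes the transport attractive for both virtualisation and emulation.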
One of the challenges of working in the heterogeneous world of SoCs is dealing with vendor kernel trees to support SoC-specific hardware. A solution to this is to house your main workload in a generic guest image which uses VirtIO for its devices. The SoC-specific drivers can be kept in a separate driver domain, which directly controls the hardware and exposes it to the main image through VirtIO backends. This way the main workload can be upgraded independently of the backends, allowing it to progress to using newer kernel features.
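The split can be sketched with a hypothetical trait boundary: the generic guest image is written only against a VirtIO-shaped interface, while the SoC-specific code implements it in the driver domain. All names here (`BlockBackend`, `VendorEmmcBackend`, `run_workload`) are illustrative assumptions, not the vhost-device API.

```rust
// Illustrative sketch of the driver-domain split. The generic workload
// talks only to a VirtIO-shaped block interface; SoC-specific code lives
// behind it and can be replaced without touching the workload.
trait BlockBackend {
    fn read_sector(&self, lba: u64, buf: &mut [u8; 512]);
}

// SoC-specific implementation, kept in the driver domain and upgraded
// independently of the main guest image.
struct VendorEmmcBackend;

impl BlockBackend for VendorEmmcBackend {
    fn read_sector(&self, lba: u64, buf: &mut [u8; 512]) {
        // Stand-in for programming the vendor's eMMC controller.
        buf.fill(lba as u8);
    }
}

// The generic workload depends only on the trait, so swapping the SoC
// (and hence the backend) requires no change here.
fn run_workload(backend: &dyn BlockBackend) -> u8 {
    let mut buf = [0u8; 512];
    backend.read_sector(7, &mut buf);
    buf[0]
}

fn main() {
    assert_eq!(run_workload(&VendorEmmcBackend), 7);
    println!("workload ran against the driver-domain backend");
}
```

In the real system the trait boundary is the VirtIO device model itself (carried over vhost-user), which is what lets the same generic image run unmodified across different SoCs.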
Cloud Native Development
This extends the concept of an abstracted HAL to allow the testing and verification of your workload to be done in the cloud. For example, you may want to check that your collision avoidance system works when fed a library of data from the cloud, but not want to recertify everything once the workload is housed in your edge computing device. The use of a VirtIO abstraction allows the same software to be run in both cases.
rust-vmm is a collection of Rust crates useful for building Virtual Machine Monitors. It is used as a basis for projects such as Firecracker, CrosVM and Cloud Hypervisor. While initially KVM-focused, it strives to be a multi-platform project.
vhost-device binaries (we jointly maintain this and it hosts the source for our vhost-user daemons)
Xen Vhost Master (used to support the vhost-user daemons on Xen)
Blog Posts and Talks
VirtIO on Xen hypervisor, talk by EPAM @ Linaro Connect 2021
VirtIO HALs and other abstractions, talk by Alex @ GSTS 2021 (Day 2 @ 5:13:00 in the video feed)
Rust Based Virtio Backends for Hypervisor Agnostic Solutions, talk by Viresh and Alex @ KVM Forum 2022 (YouTube)
The Challenges of Abstracting VirtIO, blog post by Alex
Network Latency with TSN on Virtual Machine, blog post by Akashi-san
We no longer hold regular meetings for Stratos, but we will add any upcoming open meetings discussing VirtIO to the calendar for ease of navigation. The calendar is displayed in the UTC timezone with no DST offsets.
All of the work is covered by Jira cards, which you can view in the current sprint at the project home page and navigate from there: https://linaro.atlassian.net/jira/software/c/projects/STR/boards/145
Original Landing Page
Landing page before archiving the project
Linaro Ltd, 2020