2024-08 kernel prototype
This work is a stepping stone toward other forms of virtio-msg.
It leverages work done in prior projects to unblock the kernel work on the common virtio-msg transport layer.
The virtio-msg-bus level here is not intended for real implementations as it has no advantage over virtio-mmio.
This bus implementation relies on any response being available immediately after sending, which is only possible in a trap-and-emulate environment.
It also assumes that the driver-side kernel can place buffers anywhere it chooses and uses guest PAs in virtqueues.
These items will be addressed in later work.
From @Viresh Kumar Aug 28, 2024
I have a working setup now where we can test Virtio-msg between a backend running in Xen dom0 and a frontend running in the guest kernel in domU.
We had a test setup (from ORKO days), where we used to test Rust based virtio backends with Xen.
A brief history about that first: QEMU understands MMIO and can trap Virtio-MMIO accesses and talk to the backend based on that, but Xen doesn't. For this we created `xen-vhost-frontend`, which is the entity that understands the Virtio-MMIO protocol and runs on dom0. The call chain from the frontend driver (I2C for example) to the backend is:
Guest kernel I2C driver (domU) ->
Guest kernel virtio-mmio layer (domU) ->
Xen-vhost-frontend (traps guest memory access with Xen IOREQ) (dom0) ->
Rust based I2C backend (dom0)
I imagined that maybe we can just replace `Guest kernel virtio-mmio layer (domU)` with `Guest kernel virtio-msg layer (domU)` and still use memory-trapping to test the virtio-msg layer properly.
The call chain is like this now:
Guest kernel I2C driver (domU) -> Guest kernel virtio-msg layer (domU) ->
Guest kernel virtio-msg-mmio layer (domU) ->
Xen-vhost-frontend (traps guest memory access with Xen IOREQ) (dom0) ->
Rust based I2C backend (dom0)
My test setup (explained below) does exactly that and it works. Now we can just add another layer for Virtio-FFA (which will replace `Guest kernel virtio-msg-mmio layer (domU)` in the above call chain) and it should just work? 🙂
Steps to replicate setup:
[Mostly a copy/paste from the README in xen-vhost-frontend]
Note: These instructions assume:
you are using an x86_64 based build machine and are running Linux (either directly or in a VM).
you are building in Debian 12 (bookworm) directly or via a container
At the time this page was written, the current rustc version was 1.80.1.
Build distro setup
As root:
dpkg --add-architecture arm64
apt-get update -qq
# for xen (basic) and kernel
apt-get install -yqq build-essential git bison flex wget curl \
bc libssl-dev libncurses-dev python3 python3-setuptools iasl
# for Xen cross compile
apt-get install -yqq gcc-aarch64-linux-gnu uuid-dev:arm64 libzstd-dev \
libncurses-dev:arm64 libyajl-dev:arm64 zlib1g-dev:arm64 \
libfdt-dev:arm64 libpython3-dev:arm64
# more for qemu
apt-get install -yqq python3-pip python3-venv ninja-build libglib2.0-dev \
libpixman-1-dev libslirp-dev
As the build user (can be root or yourself etc):
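The exact commands for this part were lost in the copy/paste; a minimal sketch of a typical Rust cross-build setup, consistent with the ~/.cargo/config.toml reference further down this page, would be:
# install rustup and the aarch64 target
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup target add aarch64-unknown-linux-gnu
# point cargo at the cross linker for the aarch64 target
cat >> ~/.cargo/config.toml <<EOF
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF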
Key components
Xen
Branch: master
Commit: 35f3afc42910c7cc6d7cd7083eb0bbdc7b4da406
(You can use upstream Xen as well.)
Enable the following config options (top commit in my branch):
Build as:
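(The build command did not survive the copy/paste; a typical Xen cross build for arm64, consistent with the arm64 dev packages installed above, looks like the sketch below. Treat the configure arguments as assumptions, not the author's verified invocation.)
# cross-compile the hypervisor and tools for arm64 (sketch)
./configure --build=x86_64-linux-gnu --host=aarch64-linux-gnu
make -j$(nproc) dist XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-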
Building the xen-vhost-frontend binary
Branch: virtio-msg
Commit: de22910cf2d8ff088d7d560b73d93f9121c832cf
Build as:
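(The command itself is missing from this copy; given the generated-binary path below, it was presumably the usual cargo cross build:)
cargo build --release --target aarch64-unknown-linux-gnu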
Generated binary: target/aarch64-unknown-linux-gnu/release/xen-vhost-frontend
Building the vhost-device-i2c binary
Branch: main
Commit: 079d9024be604135ca2016e2bc63e55c013bea39
These are Rust-based, hypervisor-agnostic `vhost-user` backends maintained inside the rust-vmm project.
Build as:
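(Again the command is missing; since vhost-device is a cargo workspace, it was presumably something like the following, with -p selecting the i2c package:)
cargo build --release --target aarch64-unknown-linux-gnu -p vhost-device-i2c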
Generated binary: target/aarch64-unknown-linux-gnu/release/vhost-device-i2c
If you get a linking error saying “wrong file format”, it’s possible the correct linker is not detected. Check that the ~/.cargo/config.toml configuration described above in the build user setup section was done. (You can also specify the linker by prepending env RUSTFLAGS="-C linker=aarch64-linux-gnu-gcc" to the cargo command.)
Linux Kernel, guest and host
URL: git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/linux.git
Branch: virtio/msg-v1
Commit: 1e5e683a3d1aa8b584f279edd144b4b1d5aad45c
Build the kernel for aarch64 to get the host kernel image. Repeat the same build without the top commit to get the guest image; the only difference is that the top commit adds a buildroot path for the guest.
Build host kernel at the branch tip commit
Run make ARCH=arm64 defconfig to get a basic .config setup, then make -j$(nproc) ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- to build the host kernel. Disabling the graphics driver by overriding CONFIG_DRM=n in the .config file might make your build faster.
Copy the host kernel image somewhere else before building the guest image. It is located at arch/arm64/boot/Image.
Build guest kernel at the tip’s previous commit
Do a git checkout HEAD^ to go back to the previous commit. The build is the same, except that now we add a rootfs to the guest kernel so that we can run it in Xen with ease.
Run make ARCH=arm64 defconfig again to get a basic .config setup. Open the file in an editor, locate the line with the variable CONFIG_INITRAMFS_SOURCE, and change its value to the location of the rootfs cpio file. Repeat the build. Copy arch/arm64/boot/Image somewhere else for convenience; this file will need to go into the emulated host along with the vhost binaries we compiled.
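Instead of editing .config by hand, the kernel's own scripts/config helper can set the option; the rootfs path below is a placeholder:
./scripts/config --set-str CONFIG_INITRAMFS_SOURCE /path/to/rootfs.cpio
make -j$(nproc) ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-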
Custom QEMU for system model
Branch: master (at the time)
Commit: b7890a2c3d6949e8f462bb3630d5b48ecae8239f
This build of QEMU is necessary to use the I2C device with the argument -device ds1338,address=0x20, as described later in the document.
Build as:
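(The invocation was not preserved; a standard QEMU build for an aarch64 system model, using the slirp networking library installed earlier, would be:)
./configure --target-list=aarch64-softmmu --enable-slirp
make -j$(nproc)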
Testing
The following steps let one test the I2C vhost-device on Xen.
Putting the pieces together:
Now that we have built everything we need, we need to assemble the pieces. We will be using a Debian 12 arm64 root filesystem and adding our components to it.
There are many ways to add the content to the disk, but here we will use guestfish.
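As a sketch (the image and file names are placeholders for whatever you built above), guestfish can copy the pieces into the Debian disk image like this:
guestfish -a debian-12-arm64.img -i <<EOF
copy-in xen-vhost-frontend /root/
copy-in vhost-device-i2c /root/
copy-in Image /root/
EOF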
Run Xen via QEMU on an x86 machine:
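(The original command is not reproduced here. Assuming Xen and the Dom0 kernel are loaded via QEMU's guest-loader device, its shape would be roughly the following; the memory size, load address, and file names are placeholders:)
qemu-system-aarch64 \
    -machine virt,virtualization=on -cpu cortex-a57 -m 8G -smp 4 \
    -display none -serial mon:stdio \
    -device ds1338,address=0x20 \
    -kernel xen \
    -device guest-loader,addr=0x49000000,kernel=Image,bootargs="console=hvc0" \
    -drive file=debian-12-arm64.img,format=raw,if=virtio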
The ds1338 entry here is required to create a virtual I2C-based RTC device on Dom0.
This should get Dom0 up and running.
Setup I2C based RTC devices on Dom0
This is required to control the device on Dom0 from the guest instead of the host.
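The commands themselves were lost in the copy/paste; on a typical system this amounts to instantiating the device from userspace and then unbinding the host RTC driver so the backend can own the device (bus number and address per the description below):
# create the I2C device node on bus 0 at address 0x20
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
# unbind the host rtc-ds1307 driver (it also handles ds1338)
echo 0-0020 > /sys/bus/i2c/drivers/rtc-ds1307/unbind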
Let's run everything
First start the I2C backend in the background.
This tells the I2C backend to hook up to the /root/i2c.sock0 socket and wait for the master to start transacting.
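The command did not survive the copy/paste; given the socket name and device list described next, it was presumably along these lines (double-check the flags against vhost-device-i2c --help):
./vhost-device-i2c -s /root/i2c.sock -c 1 -l 90c0000.i2c:32 &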
The I2C controller used here on Dom0 is named 90c0000.i2c (which can be read from /sys/bus/i2c/devices/i2c-0/name), and the 32 here matches the device address set on the I2C bus in the previous commands (0x20).
Setup dom0 and Xen services:
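(The specific commands are missing here; on a typical Xen installation this is done with the xencommons init script:)
/etc/init.d/xencommons start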
Then start xen-vhost-frontend in the background, providing the path of the socket to the master side. By default this will create grant mappings for the memory regions (buffers mapped on the fly).
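(The exact invocation is missing from this copy; the option name below is an assumption, so check xen-vhost-frontend --help for the real one:)
./xen-vhost-frontend --socket-path /root/ &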
Now that all the preparations are done, let's start the guest. The guest kernel should have the Virtio-related config options enabled, along with the i2c-virtio driver.
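Assuming the configuration file shown at the end of this page, the guest is started with xl:
xl create -c domu.conf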
The guest should boot now. Once the guest is up, you can create the I2C based RTC device and use it.
The following will create /dev/rtc0 in the guest, which you can configure with the standard hwclock utility.
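A sketch of those guest-side steps (the bus number inside the guest may differ):
# instantiate the RTC on the virtio-i2c bus inside the guest
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
# read the time through the newly created RTC
hwclock -r -f /dev/rtc0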
Sample domu.conf
The device type here defines the device to be emulated for the guest. The type value is set to the DT `compatible` string of the device; for example, it is virtio,device22 for I2C.
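The sample file itself did not survive the copy/paste; a minimal sketch consistent with the description above (kernel path, memory size, and vcpu count are placeholders) would be:
kernel = "/root/Image"
memory = 512
vcpus = 2
name = "domu"
virtio = [ "type=virtio,device22,transport=mmio" ]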
Hopefully this should be enough to replicate the setup at your end.
Thanks.
--
viresh