
2024-08 kernel prototype


This work is a stepping stone toward other forms of virtio-msg.

It leverages work done in prior projects to unblock the kernel work on the common virtio-msg transport layer.

  • The virtio-msg-bus level here is not intended for real implementations, as it has no advantage over virtio-mmio.

  • This bus implementation relies on any response being available immediately after sending, which is only possible in a trap-and-emulate environment.

  • It also assumes that the driver-side kernel can put buffers anywhere it chooses and uses guest physical addresses (Guest PAs) in virtqueues.

These items will be addressed in later work.

From Viresh Kumar Aug 28, 2024

I now have a working setup where we can test virtio-msg between a backend running in Xen dom0 and a frontend running in the guest kernel in domU.

We had a test setup (from the ORKO days) where we used to test Rust-based virtio backends with Xen.

A brief history first: QEMU understands MMIO and can trap Virtio-MMIO accesses and talk to the backend based on that, but Xen can't. For this we created `xen-vhost-frontend`, the entity that understands the Virtio-MMIO protocol and runs on dom0. The call chain from the frontend driver (I2C, for example) to the backend is:

Guest kernel I2C driver (domU) ->
Guest kernel virtio-mmio layer (domU) ->
Xen-vhost-frontend (traps guest memory access with Xen IOREQ) (dom0) ->
Rust-based I2C backend (dom0)

I imagined that maybe we could just replace `Guest kernel virtio-mmio layer (domU)` with `Guest kernel virtio-msg layer (domU)` and still use memory trapping to test the virtio-msg layer properly.

The call chain is like this now:

Guest kernel I2C driver (domU) ->
Guest kernel virtio-msg layer (domU) ->
Guest kernel virtio-msg-mmio layer (domU) ->
Xen-vhost-frontend (traps guest memory access with Xen IOREQ) (dom0) ->
Rust-based I2C backend (dom0)

My test setup (explained below) does exactly that, and it works. Now we can just add another layer for Virtio-FFA (which will replace `Guest kernel virtio-msg-mmio layer (domU)` in the above call chain) and it should just work? πŸ™‚

Steps to replicate setup:

[Mostly a copy/paste from the README in xen-vhost-frontend]

Note: These instructions assume:

  • you are using an x86_64-based build machine running Linux (either directly or in a VM).

  • you are building on Debian 12 (bookworm), directly or via a container.

  • the current version of rustc at the time this page was written was 1.80.1.

Build distro setup

As root:

dpkg --add-architecture arm64
apt-get update -qq
apt-get install -yqq build-essential git bison flex wget curl \
    python3 python3-setuptools iasl
apt-get install -yqq gcc-aarch64-linux-gnu uuid-dev:arm64 libzstd-dev \
    libncurses-dev:arm64 libyajl-dev:arm64 zlib1g-dev:arm64 \
    libfdt-dev:arm64 libpython3-dev:arm64

As the build user (which can be root, yourself, etc.):

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
. ~/.cargo/env
rustup target add aarch64-unknown-linux-gnu
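
A quick sanity check that the toolchain is in place:

rustc --version                  # 1.80.1 at the time of writing
rustup target list --installed   # should include aarch64-unknown-linux-gnu
aarch64-linux-gnu-gcc --version  # cross-compiler from the apt packages above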

Key components

Xen

(You can use upstream Xen as well.)

Enable the following config options (the top commit in my branch does this):

diff --git a/xen/arch/arm/configs/arm64_defconfig b/xen/arch/arm/configs/arm64_defconfig
index e69de29bb2d1..38ca05a8b416 100644
--- a/xen/arch/arm/configs/arm64_defconfig
+++ b/xen/arch/arm/configs/arm64_defconfig
@@ -0,0 +1,2 @@
+CONFIG_IOREQ_SERVER=y
+CONFIG_EXPERT=y

Build as:

./configure --libdir=/usr/lib --build=x86_64-unknown-linux-gnu --host=aarch64-linux-gnu \
  --disable-docs --disable-golang --disable-ocamltools \
  --with-system-qemu=/root/qemu/build/i386-softmmu/qemu-system-i386

make -j9 debball CROSS_COMPILE=aarch64-linux-gnu- XEN_TARGET_ARCH=arm64
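
The debball target wraps the build into a Debian package, which can then be installed into the Dom0 root filesystem. A sketch, assuming the usual debball output location (the exact filename depends on the Xen version):

# On the arm64 Dom0 rootfs:
dpkg -i dist/xen-upstream-*.deb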

Building the xen-vhost-frontend binary

Build as:

cargo build --bin xen-vhost-frontend --release --all-features --target aarch64-unknown-linux-gnu

Generated binary: target/aarch64-unknown-linux-gnu/release/xen-vhost-frontend

If you get a linking error saying "wrong file format", it's possible the correct linker is not being detected; specify it by prepending env RUSTFLAGS="-C linker=aarch64-linux-gnu-gcc" to the cargo command, as shown below.
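
That is, the full command becomes:

env RUSTFLAGS="-C linker=aarch64-linux-gnu-gcc" \
    cargo build --bin xen-vhost-frontend --release --all-features --target aarch64-unknown-linux-gnu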

Building the vhost-device-i2c binary

These are Rust-based, hypervisor-agnostic `vhost-user` backends, maintained inside the rust-vmm project.

Build as:

cargo build --bin vhost-device-i2c --release --all-features --target aarch64-unknown-linux-gnu

Generated binary: target/aarch64-unknown-linux-gnu/release/vhost-device-i2c

As with xen-vhost-frontend, if you get a "wrong file format" linking error, prepend env RUSTFLAGS="-C linker=aarch64-linux-gnu-gcc" to the cargo command, or set the linker once as shown below.
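
Alternatively, rather than prepending RUSTFLAGS to every invocation, the cross linker can be set once in cargo's standard per-target configuration (this is plain cargo behaviour, not specific to these projects):

# Append the target-specific linker to cargo's config.
cat >> ~/.cargo/config.toml <<'EOF'
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF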

Linux Kernel, guest and host

  • URL: git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/linux.git

  • Branch: virtio/msg

Build the kernel for aarch64 and you will get the host kernel image.

Repeat the same build without the top commit to get the image for the guest.

The only difference is that the top commit adds a buildroot path for the guest.
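
A typical cross-build for both images would look like this (a sketch; adjust the job count and config options to your setup):

# Host (dom0) kernel, from the branch as-is:
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j9 Image
# Resulting image: arch/arm64/boot/Image
# For the guest image, drop the top commit first
# (e.g. git checkout HEAD~1) and repeat the two commands above.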

Custom QEMU for system model

This QEMU tree has the I2C support needed below.

Build as:

git clone https://github.com/vireshk/qemu
mkdir -p build/qemu
mkdir -p build/qemu-install
cd build/qemu
../../qemu/configure \
    --target-list="aarch64-softmmu" \
    --prefix="$(cd ../qemu-install; pwd)" \
    --enable-fdt --enable-slirp --enable-strip \
    --disable-docs \
    --disable-gtk --disable-opengl --disable-sdl \
    --disable-dbus-display --disable-virglrenderer \
    --disable-vte --disable-brlapi \
    --disable-alsa --disable-jack --disable-oss --disable-pa
make -j10
make install
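
With the --prefix above, make install places the binaries under build/qemu-install/bin; a quick check:

./build/qemu-install/bin/qemu-system-aarch64 --version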

Testing

The following steps let you test the I2C vhost-device on Xen.

Run Xen via QEMU on an x86 machine:

./build/qemu-install/bin/qemu-system-aarch64 -machine virt,virtualization=on -cpu cortex-a57 -serial mon:stdio \
  -device virtio-net-pci,netdev=net0 -netdev user,id=net0,hostfwd=tcp::8022-:22 \
  -drive file=/home/debian-bullseye-arm64.qcow2,index=0,id=hd0,if=none,format=qcow2 \
  -device virtio-scsi-pci -device scsi-hd,drive=hd0 \
  -display none -m 8192 -smp 8 -kernel /home/xen/xen \
  -append "dom0_mem=5G,max:5G dom0_max_vcpus=7 loglvl=all guest_loglvl=all" \
  -device guest-loader,addr=0x49000000,kernel=/home/Image,bootargs="root=/dev/sda1 console=hvc0 earlyprintk=xen" \
  -device ds1338,address=0x20

The ds1338 entry here is required to create a virtual I2C-based RTC device on Dom0.

This should get Dom0 up and running.
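
The QEMU command above also forwards guest SSH via hostfwd, so if you prefer a second terminal over the serial console, something like this should work (adjust the user to whatever exists in your disk image):

ssh -p 8022 root@localhost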

Set up the I2C-based RTC device on Dom0

This is required so the device on Dom0 can be controlled from the guest instead of the host: the first command instantiates the device, and the second unbinds its Dom0 driver.

echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
echo 0-0020 > /sys/bus/i2c/devices/0-0020/driver/unbind
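
To confirm the state of the bus, i2cdetect from i2c-tools can be used, assuming it is installed on Dom0:

# After the unbind, 0x20 should show as a plain "20";
# "UU" would mean a Dom0 kernel driver still owns it.
i2cdetect -y 0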

Let's run everything

First start the I2C backend.

vhost-device-i2c -s /root/i2c.sock -c 1 -l "90c0000.i2c:32"

This tells the I2C backend to hook up to the /root/i2c.sock0 socket (the instance index from -c is appended to the socket path) and wait for the master to start transacting.

The I2C controller used here on Dom0 is named 90c0000.i2c (this can be read from /sys/bus/i2c/devices/i2c-0/name), and 32 is the decimal form of the device address set in the previous commands (0x20).
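
Both values can be checked quickly:

cat /sys/bus/i2c/devices/i2c-0/name   # controller name, e.g. 90c0000.i2c
printf '%d\n' 0x20                    # 32, the decimal device address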

Set up dom0 and Xen services:

/etc/init.d/xencommons start

Then start xen-vhost-frontend, providing the path of the socket to the master side. By default it will create grant mappings for the memory regions (buffers are mapped on the fly).

xen-vhost-frontend --socket-path /root/

Now that all the preparations are done, let's start the guest.

The guest kernel should have the Virtio-related config options enabled, along with the i2c-virtio driver.

xl create -c domu.conf

The guest should boot now. Once the guest is up, you can create the I2C-based RTC device and use it.

The following will create /dev/rtc0 in the guest, which you can configure with the standard hwclock utility.

echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
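
For example, to read the clock and copy the system time into it (hwclock's -f flag selects the RTC device):

hwclock -f /dev/rtc0             # read the time from the virtio RTC
hwclock -f /dev/rtc0 --systohc   # set the RTC from the system time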

Sample domu.conf

kernel="/root/Image"
memory=512
vcpus=3
cmdline="console=hvc0 earlycon=xenboot"
name="domu"
virtio = [ "type=virtio,device22,transport=mmio,grant_usage=1" ]

The device type here defines the device to be emulated for the guest. The type value is the DT `compatible` string of the device.

For example, it is virtio,device22 for I2C: the suffix is the virtio device ID in hex, and 0x22 is 34, the I2C adapter ID.


Hopefully this is enough to replicate the setup at your end.

Thanks.

--

viresh
