2024-11 kernel prototype with FFA
This work is a stepping stone toward other forms of virtio-msg.
It leverages work done in prior projects to unblock the kernel work on the common virtio-msg transport layer.
This uses the work done previously in the 2024-08 kernel prototype to implement a real async transport. The previous prototype explains the virtio-msg-mmio setup in detail.
It assumes that the driver-side kernel can put buffers anywhere it chooses and uses guest PAs in virtqueues.
This setup can be used to test virtio-msg between a backend running in Xen dom0 and a frontend running in a guest kernel in domU.
We had a test setup (from the ORKO days) that we used to test Rust-based virtio backends with Xen. The same setup is used here to test the FFA transport, together with Bertrand’s Xen FFA changes.
Steps to replicate setup:
[Mostly a copy/paste from the README in xen-vhost-frontend]
Note: These instructions assume:
you are using an x86_64 based build machine and are running Linux (either directly or in a VM).
you are building on Debian 12 (bookworm), directly or via a container
The current version of rustc at the time this page was written was 1.80.1
Build distro setup
As root:
dpkg --add-architecture arm64
apt-get update -qq
# for xen (basic) and kernel
apt-get install -yqq build-essential git bison flex wget curl \
bc libssl-dev libncurses-dev python3 python3-setuptools iasl
# for Xen cross compile
apt-get install -yqq gcc-aarch64-linux-gnu uuid-dev:arm64 libzstd-dev \
libncurses-dev:arm64 libyajl-dev:arm64 zlib1g-dev:arm64 \
libfdt-dev:arm64 libpython3-dev:arm64
# more for qemu
apt-get install -yqq python3-pip python3-venv ninja-build libglib2.0-dev \
libpixman-1-dev libslirp-dev
As the build user (can be root or yourself etc):
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
. ~/.cargo/env
rustup target add aarch64-unknown-linux-gnu
echo -e '[target.aarch64-unknown-linux-gnu]\nlinker = "aarch64-linux-gnu-gcc"' >>~/.cargo/config.toml
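Re-running the setup appends a duplicate section to the file each time. A hedged sketch of an idempotent variant (same file and linker as above):

```shell
# Add the aarch64 cross-linker entry to ~/.cargo/config.toml only if
# it is not already present, so the setup is safe to re-run.
CARGO_CONFIG="${CARGO_CONFIG:-$HOME/.cargo/config.toml}"
mkdir -p "$(dirname "$CARGO_CONFIG")"
if ! grep -q '\[target\.aarch64-unknown-linux-gnu\]' "$CARGO_CONFIG" 2>/dev/null; then
    printf '[target.aarch64-unknown-linux-gnu]\nlinker = "aarch64-linux-gnu-gcc"\n' >> "$CARGO_CONFIG"
fi
```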
Key components
Xen
URL: https://gitlab.com/xen-project/people/bmarquis/xen-ffa-research/
Branch: ffa-virtio/vm-to-vm
Commit: 2770c186f551d85bef4a4df311aa91c9d5083cd1
Enable the following config options (the top commit in this branch does this). This Xen image can be used for both the virtio-msg FFA and MMIO tests:
diff --git a/xen/arch/arm/configs/arm64_defconfig b/xen/arch/arm/configs/arm64_defconfig
index e69de29bb2d1..19aec50c3337 100644
--- a/xen/arch/arm/configs/arm64_defconfig
+++ b/xen/arch/arm/configs/arm64_defconfig
@@ -0,0 +1,7 @@
+CONFIG_IOREQ_SERVER=y
+CONFIG_EXPERT=y
+CONFIG_TESTS=y
+CONFIG_TEE=y
+CONFIG_UNSUPPORTED=y
+CONFIG_FFA=y
+CONFIG_FFA_VM_TO_VM=y
Build as:
./configure --libdir=/usr/lib --build=x86_64-unknown-linux-gnu --host=aarch64-linux-gnu \
--disable-docs --disable-golang --disable-ocamltools \
--with-system-qemu=/root/qemu/build/i386-softmmu/qemu-system-i386
make -j9 debball CROSS_COMPILE=aarch64-linux-gnu- XEN_TARGET_ARCH=arm64
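As a sanity check after building, the seven options from the defconfig diff above can be verified in the generated Xen .config. A small sketch; the config path in the usage comment is an assumption, adjust for your tree:

```shell
# Verify that a Xen .config enables everything the FFA VM-to-VM setup
# needs (the seven options from the defconfig diff above).
check_xen_config() {
    config="$1"
    for opt in CONFIG_IOREQ_SERVER CONFIG_EXPERT CONFIG_TESTS CONFIG_TEE \
               CONFIG_UNSUPPORTED CONFIG_FFA CONFIG_FFA_VM_TO_VM; do
        grep -qx "${opt}=y" "$config" || { echo "missing: $opt"; return 1; }
    done
    echo "all FFA config options enabled"
}

# Usage (after the make above): check_xen_config xen/.config
```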
Building the xen-vhost-frontend binary
Branch: virtio-msg
Commit: c2ceaeea8f7fa32c37447cc87d2b133783da6d94
Build as:
cargo build --bin xen-vhost-frontend --release --all-features --target aarch64-unknown-linux-gnu
Generated binary: target/aarch64-unknown-linux-gnu/release/xen-vhost-frontend
(This binary can be used for both virtio-msg FFA and MMIO tests.)
Building the vhost-device-i2c binary
Branch: main
Commit: 079d9024be604135ca2016e2bc63e55c013bea39
These are Rust-based, hypervisor-agnostic `vhost-user` backends, maintained inside the rust-vmm project.
Build as:
cargo build --bin vhost-device-i2c --release --all-features --target aarch64-unknown-linux-gnu
Generated binary: target/aarch64-unknown-linux-gnu/release/vhost-device-i2c
If you get a linking error saying “wrong file format”, it’s possible the correct linker was not detected. Check that the ~/.cargo/config.toml configuration described above in the build-user setup section was done. (You can also specify the linker by prepending env RUSTFLAGS="-C linker=aarch64-linux-gnu-gcc" to the cargo command.)
Linux Kernel, guest and host
URL: git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/linux.git
Branch: virtio/msg-xen
Commit: 023d0cfab7c7c6f523608e9f19c64b6456576729
Build the kernel for aarch64 to get the guest kernel image. Repeat the same build without the top commit to get the image for the host. The only difference is that the host build removes the buildroot rootfs path.
Build guest kernel at the branch tip commit
Run make O=../barh64_guest ARCH=arm64 defconfig to get a basic .config. Open the file in an editor, locate the line with the variable CONFIG_INITRAMFS_SOURCE, and change its value to the location of the rootfs cpio file. Now run make -j$(nproc) O=../barh64_guest ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- to build the guest kernel. Disabling the graphics driver by setting CONFIG_DRM=n in the .config file may make your build faster.
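The CONFIG_INITRAMFS_SOURCE edit can also be scripted instead of done in an editor. A sketch, assuming defconfig wrote a CONFIG_INITRAMFS_SOURCE line into the .config:

```shell
# Helper: point CONFIG_INITRAMFS_SOURCE in a kernel .config at the
# buildroot cpio, replacing whatever value defconfig wrote.
set_initramfs_source() {
    kconfig="$1"; cpio="$2"
    sed -i "s|^CONFIG_INITRAMFS_SOURCE=.*|CONFIG_INITRAMFS_SOURCE=\"$cpio\"|" "$kconfig"
}

# Usage: set_initramfs_source ../barh64_guest/.config \
#            /path/to/buildroot/output/images/rootfs.cpio
```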
Building the guest rootfs with buildroot
We will provide a small busybox-based rootfs to the guest kernel as an initramfs image by using the buildroot project. First, acquire buildroot by either downloading a tarball from its website or by cloning it locally:
git clone https://gitlab.com/buildroot.org/buildroot.git
Follow the steps:
make menuconfig
Target Options → Target Architecture → select aarch64 (little endian)
Target Options → Target Architecture Variant → select cortex-a57
Filesystem images → select "cpio the root filesystem (for use as an initial RAM filesystem)"
We will leave every other value at its default, so select Exit, which will prompt you to save the .config file.
make
This will download all required tarballs and build the cpio image at output/images/rootfs.cpio.
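The menuconfig selections can also be captured non-interactively as a defconfig fragment. A sketch; the BR2_* symbol names are assumptions from typical buildroot releases, so verify them against your checkout:

```shell
# Write the three menuconfig selections as a buildroot defconfig
# fragment (BR2_* names may change between buildroot releases).
write_br_defconfig() {
    cat > "$1" <<'EOF'
BR2_aarch64=y
BR2_cortex_a57=y
BR2_TARGET_ROOTFS_CPIO=y
EOF
}

# Usage, from the buildroot checkout:
#   write_br_defconfig configs/virtio_msg_defconfig
#   make virtio_msg_defconfig && make
```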
Build host kernel at the tip’s previous commit
Do a git checkout HEAD^ to go back to the previous commit. The build is the same, except we now leave out the rootfs for the host kernel so that we can run it in Xen with ease.
Run make O=../barh64_host ARCH=arm64 defconfig to get a basic .config again, then repeat the build with make -j$(nproc) O=../barh64_host ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-.
DMA HAL details
The kernel has a DMA HAL implemented for the virtio-msg-ffa channel bus. The guest kernel maps a big-enough area at boot time, which is later used for all buffers (both those allocated with dma_alloc_coherent() and those from kmalloc(), via bounce buffering). This requires the guest DT to contain a reserved-memory area and a virtio_msg_ffa node with a memory-region, as shown below:
reserved-memory {
#address-cells = <2>;
#size-cells = <2>;
ranges;
vram: vram@81000000 {
compatible = "restricted-dma-pool";
reg = <0x00000000 0x81000000 0 0x00800000>;
};
};
virtio_msg_ffa@0 {
compatible = "virtio,msg-ffa";
memory-region = <&vram>;
};
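A quick sanity check of the carve-out in the node above: the reg entry describes an 8 MiB window at 0x81000000, and the base is naturally aligned to the window size:

```shell
# Reserved-memory window from the DT node above.
BASE=0x81000000
SIZE=0x00800000
echo "size: $((SIZE / 1024 / 1024)) MiB"
# A simple carve-out is easiest to reason about when the base is
# aligned to the region size.
[ $((BASE % SIZE)) -eq 0 ] && echo "base is size-aligned"
```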
WARNING: This is not tested with the Xen-based setup, as it doesn’t support memory sharing between VMs. It is documented here only to keep track of how it works.
It is also possible to do the mapping on the fly, with no reserved-mem region. For that, the following commit must be dropped: “virtio-msg: ffa: Add reserved mem support“. The virtio/msg-xen branch already reverts the reserved-mem commit.
Custom QEMU for system model
Branch: master (at the time)
Commit: ba2525269d6e4d95fea5f7c5a74fda5e402eb4a1
This build of QEMU is necessary to use the I2C device with the argument -device ds1338,address=0x20, as described later in the document. It also enables support for GICv3, which is required for Xen FFA support.
Build as:
git clone https://github.com/vireshk/qemu
mkdir -p build/qemu
mkdir -p build/qemu-install
cd build/qemu
../../qemu/configure \
--target-list="aarch64-softmmu" \
--prefix="$(cd ../qemu-install; pwd)" \
--enable-fdt --enable-slirp --enable-strip \
--disable-docs \
--disable-gtk --disable-opengl --disable-sdl \
--disable-dbus-display --disable-virglrenderer \
--disable-vte --disable-brlapi \
--disable-alsa --disable-jack --disable-oss --disable-pa
make -j10
make install
Testing
The following steps let one test the I2C vhost-device on Xen.
Putting the pieces together:
Now that we have built everything we need, we need to assemble the pieces. We will be using a Debian 12 arm64 root filesystem and adding our components to it.
There are many ways to add the content to the disk, but here we will use guestfish.
mkdir -p build/disk
(cd build/disk; wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-nocloud-arm64.qcow2)
cp build/disk/debian-12-nocloud-arm64.qcow2 build/disk.qcow2
MODULES_TAR=$(ls -1 build/linux-install/modules-*.tar.gz)
DEB=$(cd xen/dist; ls -1 xen-*.deb)
guestfish --rw -a build/disk.qcow2 <<EOF
run
mount /dev/sda1 /
tar-in $MODULES_TAR / compress:gzip
upload build/vhost-device-i2c /root/vhost-device-i2c
upload build/xen-vhost-frontend /root/xen-vhost-frontend
upload xen/dist/$DEB /root/$DEB
upload guestKernelImage /root/guestKernelImage
upload domu.conf /root/domu.conf
EOF
If guestfish is not available, as a last resort you can scp the files while QEMU is running with scp -P 8022 file root@localhost:/root/. To get an ssh login, use ssh -p 8022 root@localhost (capital P for scp, lowercase p for ssh). The Debian nocloud image used here has the default user root with no password. You will need to install an OpenSSH server (apt update && apt install -y openssh-server) and then enable it:
Edit /etc/ssh/sshd_config and un-comment the lines:
Port 22
AddressFamily any
ListenAddress 0.0.0.0
ListenAddress ::
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
PermitRootLogin yes
PasswordAuthentication yes
PermitEmptyPasswords yes
Restart the service:
service ssh restart
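The sshd_config hand-edit above can also be done with sed. A hedged sketch (re-check the resulting file afterwards; distro defaults for these directives vary):

```shell
# Uncomment/force the sshd_config directives needed for root login
# over the QEMU port forward. Operates on the file given as $1.
enable_sshd_options() {
    f="$1"
    # Uncomment the listen/hostkey directives as-is.
    sed -i -E 's/^#[[:space:]]*(Port 22|AddressFamily any|ListenAddress .*|HostKey .*)/\1/' "$f"
    # Force the login-policy directives to the values we need.
    sed -i -E 's/^#?[[:space:]]*PermitRootLogin .*/PermitRootLogin yes/' "$f"
    sed -i -E 's/^#?[[:space:]]*PasswordAuthentication .*/PasswordAuthentication yes/' "$f"
    sed -i -E 's/^#?[[:space:]]*PermitEmptyPasswords .*/PermitEmptyPasswords yes/' "$f"
}

# Usage: enable_sshd_options /etc/ssh/sshd_config && service ssh restart
```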
Run Xen via QEMU on the x86 machine:
Debian qcow2 source: https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-nocloud-arm64.qcow2
./build/qemu-install/bin/qemu-system-aarch64 -machine type=virt,virtualization=on,gic-version=3 -cpu cortex-a57 -serial mon:stdio \
-device virtio-net-pci,netdev=net0 -netdev user,id=net0,hostfwd=tcp::8022-:22 \
-drive file=./build/disk.qcow2,index=0,id=hd0,if=none,format=qcow2 \
-device virtio-scsi-pci -device scsi-hd,drive=hd0 \
-display none -m 8192 -smp 8 -kernel ./build/xen \
-append "dom0_mem=5G,max:5G dom0_max_vcpus=7 loglvl=all guest_loglvl=all" \
-device guest-loader,addr=0x49000000,kernel=./build/Image,bootargs="root=/dev/sda1 console=hvc0 earlyprintk=xen" \
-device ds1338,address=0x20
The ds1338 entry here is required to create a virtual I2C-based RTC device on Dom0.
This should get Dom0 up and running.
Setup I2C based RTC devices on Dom0
This is required to control the device on Dom0 from the guest instead of the host.
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
echo 0-0020 > /sys/bus/i2c/devices/0-0020/driver/unbind
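The two sysfs writes above can be wrapped in a small re-runnable helper. The sysfs root is parameterised only so the sketch can be dry-run outside Dom0:

```shell
# Instantiate a ds1338 at 0x20 on bus i2c-0, then unbind the in-kernel
# driver from 0-0020 so the vhost backend can own the device.
setup_rtc_i2c() {
    sysfs="${1:-/sys}"
    echo "ds1338 0x20" > "$sysfs/bus/i2c/devices/i2c-0/new_device"
    echo "0-0020" > "$sysfs/bus/i2c/devices/0-0020/driver/unbind"
}

# Usage on Dom0: setup_rtc_i2c
```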
Let’s run everything
First start the I2C backend in the background.
vhost-device-i2c -s /root/i2c.sock -c 1 -l "90c0000.i2c:32" &
This tells the I2C backend to hook up to the /root/i2c.sock0 socket and wait for the master to start transacting. The I2C controller used here on Dom0 is named 90c0000.i2c (it can be read from /sys/bus/i2c/devices/i2c-0/name), and the 32 matches the device address set on the I2C bus in the previous commands (0x20).
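The 32 in the -l argument is simply the decimal form of the I2C client address 0x20 used elsewhere in these steps:

```shell
# 0x20 (the sysfs new_device address) in decimal, as expected by -l.
printf '%d\n' 0x20
```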
Setup dom0 and Xen services:
/etc/init.d/xencommons start
Then start xen-vhost-frontend in the background, providing the path of the socket to the master side. By default this will create grant mappings for the memory regions (buffers mapped on the fly).
xen-vhost-frontend --socket-path /root/ &
Now that all the preparations are done, let’s start the guest.
The guest kernel should have the virtio-related config options enabled, along with the i2c-virtio driver.
xl create -c domu.conf
The guest should boot now. Once the guest is up, you can create the I2C-based RTC device and use it. The following will create /dev/rtc0 in the guest, which you can configure with the standard hwclock utility.
echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
Sample domu.conf
kernel="/root/Image"
memory=512
vcpus=3
command="console=hvc0 earlycon=xenboot"
name="domu"
virtio = [ "type=virtio,device22, transport=mmio, grant_usage=1" ]
tee="ffa"
gic_version="V3"
The device type here defines the device to be emulated for the guest. The type value is set to the DT `compatible` string of the device: for example, virtio,device22 for I2C or virtio,device13 for VSOCK.
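The NN in virtio,deviceNN appears to be the virtio device ID from the virtio specification, written in hex (an assumption worth verifying against your xen-vhost-frontend version): 0x22 is 34 (I2C adapter) and 0x13 is 19 (vsock).

```shell
# Virtio spec device IDs, written in hex in the compatible string
# (assumption: the Xen tooling uses hex IDs here).
printf 'device22 -> %d (virtio I2C adapter)\n' 0x22
printf 'device13 -> %d (virtio socket/vsock)\n' 0x13
```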
Hopefully this is enough to replicate the setup at your end.
Thanks.
--
viresh