Build and set up virtio-gpu with venus protocol on Xen
This document is a tentative summary of my ongoing work to evaluate virtio-gpu on Xen. Even if you follow these instructions, you may not be able to enable virtio-gpu with the venus protocol (GPU-accelerated Vulkan support) in a Xen guest VM.
Please note that the WIP code referenced below may be subject to change without notice.
Target/Environment
Hardware Platform | AVA platform
Xen               | modified v4.18
Host OS (Dom0)    | Ubuntu 23.10 + modified kernel (6.7)
Guest OS (DomU)   | Debian 12 + modified kernel (6.7)
Build Xen
For now, I temporarily use a Xen binary (from the xen-aosp branch?) that Leo built, along with the Xen tools that I compiled from xen-aosp's v4.18-rc2-xen-aosp branch[1] with the following patch.
From 0e107b8cec9ee26b8a4044561f6141cd666ceeec Mon Sep 17 00:00:00 2001
From: AKASHI Takahiro <takahiro.akashi@linaro.org>
Date: Wed, 13 Dec 2023 15:41:47 +0900
Subject: [PATCH] libxl: make use of device_model_args for xenpvh
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
---
tools/libs/light/libxl_dm.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index f0bceee6d804..b4772c811b2d 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1823,6 +1823,10 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
flexarray_append(dm_args, "-machine");
switch (b_info->type) {
case LIBXL_DOMAIN_TYPE_PVH:
+ flexarray_append(dm_args, "xenpvh");
+ for (i = 0; b_info->extra_pv && b_info->extra_pv[i] != NULL; i++)
+ flexarray_append(dm_args, b_info->extra_pv[i]);
+ break;
case LIBXL_DOMAIN_TYPE_PV:
flexarray_append(dm_args, "xenpv");
for (i = 0; b_info->extra_pv && b_info->extra_pv[i] != NULL; i++)
--
2.40.1
Build and install xen (xen-tools) as follows:
$ ./configure --disable-docs --disable-golang --disable-ocamltools \
      --enable-ioreq-server
  # optionally: --with-system-qemu=/home/akashi/.local/bin/qemu-system-aarch64
$ make debball
$ sudo dpkg -i dist/xen-upstream-4.18-rc.deb
NOTE: With the setup described below, dom0 still boots with qemu-system-i386, while a guest boots with qemu-system-aarch64.
Check /etc/default/xencommons and confirm that QEMU_XEN points to your locally built qemu ("/home/akashi/.local/bin/qemu-system-i386").
Modify /etc/init.d/xencommons, adding "LD_LIBRARY_PATH=/home/akashi/.local/lib/aarch64-linux-gnu" and replacing qemu-system-aarch64 with qemu-system-i386.
Modify /boot/grub/grub.cfg, adding "dom0_mem=xxxM" to the xen_hypervisor command line.
Enable daemons:
$ sudo systemctl enable xencommons
$ sudo systemctl enable xendomains
$ sudo systemctl enable xendriverdomain
($ sudo systemctl enable xen-watchdog)
[1] https://gitlab.com/Linaro/blueprints/automotive/xen-aosp/xen.git
Build Linux kernel (host)
Use my current repository[2].
T.B.D.
[2] https://github.com/t-akashi/linux.git branch: virtio-gpu/v67_rui2_digetx4
(As of today, I have been unable to push the branch above to GitHub.) Instead, use:
https://git.linaro.org/people/takahiro.akashi/linux-aarch64.git branch: virtio-gpu/v67_rui2_digetx4
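The actual kernel build steps are still T.B.D. above. As a placeholder, a typical arm64 build enabling the Xen guest and virtio-gpu options might look like the sketch below; the config symbols and cross-compiler prefix are my assumptions, not taken from the original.

```shell
# Assumed build sketch; the exact config for the modified 6.7 tree is T.B.D.
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
./scripts/config -e CONFIG_XEN -e CONFIG_DRM_VIRTIO_GPU \
    -e CONFIG_VIRTIO_PCI -e CONFIG_VIRTIO_MMIO
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- olddefconfig
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)" Image modules
```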
Build virglrenderer library (host)
Use the latest upstream master branch and apply the tweak below.
Then configure and build the library as follows:
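The configure/build commands are elided above. virglrenderer builds with meson; a plausible invocation enabling the venus (Vulkan) backend is sketched below. The option names follow upstream virglrenderer's meson options, and the install prefix is my assumption.

```shell
# Assumed meson build; -Dvenus=true enables the venus (Vulkan) backend
meson setup build --prefix="$HOME/.local" \
    -Dplatforms=egl -Dvenus=true
ninja -C build install
```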
Build mesa library (host, guest)
Use the latest upstream master branch.
On the host,
On the guest,
Build the library as follows:
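The mesa build commands are elided above. On the guest, the venus Vulkan driver is selected with -Dvulkan-drivers=virtio, with the virgl gallium driver for GL; on the host you would instead build the native Vulkan driver for your GPU. The sketch below covers the guest side; driver names and prefix are my assumptions.

```shell
# Guest: build the venus (virtio) Vulkan driver and the virgl GL driver
meson setup build --prefix="$HOME/.local" \
    -Dvulkan-drivers=virtio -Dgallium-drivers=virgl
ninja -C build install
```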
Build qemu (host)
Use my current repository[3].
Configure and build qemu as follows (assuming my own mesa/virglrenderer installed in my local dir):
[3] https://github.com/t-akashi/qemu.git branch: virtio-gpu/aosp_vv82_digetx_rui.2
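The qemu configure line is not shown above. Assuming mesa and virglrenderer were installed under $HOME/.local as in the previous sections, a plausible invocation is the following; the pkg-config path and target list are my assumptions.

```shell
# Point pkg-config at the locally installed virglrenderer/mesa first
export PKG_CONFIG_PATH="$HOME/.local/lib/aarch64-linux-gnu/pkgconfig:$PKG_CONFIG_PATH"
./configure --prefix="$HOME/.local" \
    --target-list=aarch64-softmmu,i386-softmmu \
    --enable-xen --enable-opengl --enable-virglrenderer
make -j"$(nproc)" && make install
```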
Create and start Xen guest VM
Use the following xen configuration file.
The "bdf" option in the "virtio" parameter must be adjusted for your platform/GPU card.
Then, create and start the guest VM:
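The configuration file itself is not included above. A hypothetical fragment showing where the "bdf" value goes is sketched below; every key/value pair here is illustrative, not taken from the original.

```
# guest.cfg (hypothetical fragment; adjust "bdf" for your GPU)
name   = "domu"
type   = "pvh"
virtio = [ "backend=0,type=virtio,device,transport=pci,bdf=0000:01:00.0" ]
```

The guest would then be started with something like `sudo xl create -c guest.cfg`.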