...
Code Block |
---|
dpkg --add-architecture arm64
apt-get update -qq
# for xen (basic) and kernel
apt-get install -yqq build-essential git bison flex wget curl \
  bc libssl-dev libncurses-dev python3 python3-setuptools iasl
# for Xen cross compile
apt-get install -yqq gcc-aarch64-linux-gnu uuid-dev:arm64 libzstd-dev \
  libncurses-dev:arm64 libyajl-dev:arm64 zlib1g-dev:arm64 \
  libfdt-dev:arm64 libpython3-dev:arm64
# more for qemu
apt-get install -yqq python3-pip python3-venv ninja-build libglib2.0-dev \
  libpixman-1-dev libslirp-dev |
As the build user (this can be root or a regular user):
Code Block |
---|
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
. ~/.cargo/env
rustup target add aarch64-unknown-linux-gnu
echo -e '[target.aarch64-unknown-linux-gnu]\nlinker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config.toml |
Key components
Xen
Branch:
master
Commit:
35f3afc42910c7cc6d7cd7083eb0bbdc7b4da406
(You can use upstream Xen as well.)
...
Branch:
virtio-msg
Commit: de22910cf2d8ff088d7d560b73d93f9121c832cf
Build as:
Code Block |
---|
cargo build --bin xen-vhost-frontend --release --all-features --target aarch64-unknown-linux-gnu |
Generated binary: target/aarch64-unknown-linux-gnu/release/xen-vhost-frontend
If you get a linking error saying “wrong file format”, it’s possible the correct linker is not detected; specify it by prepending env RUSTFLAGS="-C linker=aarch64-linux-gnu-gcc" to the cargo command.
Building the vhost-device-i2c binary
Branch: main
Commit:
079d9024be604135ca2016e2bc63e55c013bea39
These are Rust based hypervisor-agnostic `vhost-user` backends, maintained inside the rust-vmm project.
...
If you get a linking error saying “wrong file format”, it’s possible the correct linker is not detected. Check that the ~/.cargo/config.toml configuration described above in the build user setup section was done. (You can also specify the linker by prepending env RUSTFLAGS="-C linker=aarch64-linux-gnu-gcc" to the cargo command.)
Linux Kernel, guest and host
URL:
git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/linux.git
Branch:
virtio/msg-v1
Commit: 1e5e683a3d1aa8b584f279edd144b4b1d5aad45c
Build the kernel for aarch64 to get the host kernel image.
...
The only difference is that the guest build adds the buildroot initramfs path.
Build host kernel at the branch tip commit
Run make ARCH=arm64 defconfig to get a basic .config setup, then make -j$(nproc) ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- to build the host kernel. Disabling the graphics driver by overriding CONFIG_DRM=n in the .config file might make your build faster.
Copy the host kernel image somewhere else before building the guest image. It is located at arch/arm64/boot/Image.
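If the build fails right away, it may be worth confirming that the cross toolchain installed during the package setup is actually on PATH. A quick sanity check (package and tool names as used above):

```shell
# check that the aarch64 cross-compiler from gcc-aarch64-linux-gnu is available
if command -v aarch64-linux-gnu-gcc >/dev/null 2>&1; then
  aarch64-linux-gnu-gcc --version | head -n1
else
  echo "aarch64-linux-gnu-gcc not found; install gcc-aarch64-linux-gnu first"
fi
```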
Building the guest kernel with buildroot
We will provide a small busybox-based rootfs to the guest kernel as an initramfs image by using the buildroot project. First, acquire buildroot by either downloading a tarball from its website or by cloning it locally:
Code Block |
---|
git clone https://gitlab.com/buildroot.org/buildroot.git |
Follow the steps:
Code Block |
---|
make menuconfig |
Target Options → Target Architecture → select aarch64 (little endian)
Target Options → Target Architecture Variant → select cortex-a57
Filesystem images → select "cpio the root filesystem (for use as an initial RAM filesystem)"
We will leave every other value at its default, so select Exit, which will prompt you to save the .config file.
Code Block |
---|
make |
will download all required tarballs and build the .cpio image at output/images/rootfs.cpio.
Build guest kernel at the tip’s previous commit
Do a git checkout HEAD^
to go back to the previous commit. The build is the same, except now we will add a rootfs to the guest kernel so that we can run it in Xen with ease.
Run make ARCH=arm64 defconfig
to get a basic .config
setup again. Open the file in an editor, locate the line with the variable CONFIG_INITRAMFS_SOURCE
and change its value to the location of the rootfs
cpio
file. Repeat the build. Copy arch/arm64/boot/Image somewhere else for convenience. This file will need to go into the emulated host along with the vhost binaries we compiled.
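The .config edit above can also be scripted. A sketch, assuming the buildroot image sits at the path shown (adjust ROOTFS and run from the kernel tree):

```shell
# sketch: point CONFIG_INITRAMFS_SOURCE at the buildroot cpio image
# (both paths are examples; adjust to your tree)
CONFIG="${CONFIG:-.config}"
ROOTFS="${ROOTFS:-$HOME/buildroot/output/images/rootfs.cpio}"
if grep -q '^CONFIG_INITRAMFS_SOURCE=' "$CONFIG" 2>/dev/null; then
  # option already present (defconfig leaves it empty): replace its value
  sed -i "s|^CONFIG_INITRAMFS_SOURCE=.*|CONFIG_INITRAMFS_SOURCE=\"$ROOTFS\"|" "$CONFIG"
else
  # option missing entirely: append it
  echo "CONFIG_INITRAMFS_SOURCE=\"$ROOTFS\"" >> "$CONFIG"
fi
grep '^CONFIG_INITRAMFS_SOURCE' "$CONFIG"
```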
Custom QEMU for system model
Branch: master (at the time)
Commit:
b7890a2c3d6949e8f462bb3630d5b48ecae8239f
This build of QEMU is necessary to use the I2C device with the argument -device ds1338,address=0x20
as described later in the document.
Build as:
Code Block |
---|
git clone https://github.com/vireshk/qemu
mkdir -p build/qemu
mkdir -p build/qemu-install
cd build/qemu
../../qemu/configure \
--target-list="aarch64-softmmu" \
--prefix="$(cd ../qemu-install; pwd)" \
--enable-fdt --enable-slirp --enable-strip \
--disable-docs \
--disable-gtk --disable-opengl --disable-sdl \
--disable-dbus-display --disable-virglrenderer \
--disable-vte --disable-brlapi \
--disable-alsa --disable-jack --disable-oss --disable-pa
make -j10
make install |
Testing
The following steps let one test the I2C vhost-device on Xen.
Putting the pieces together:
Now that we have built everything we need, we can assemble the pieces. We will use a Debian 12 arm64 root filesystem and add our components to it.
There are many ways to add the content to the disk, but here we will use guestfish.
Code Block |
---|
mkdir -p build/disk
(cd build/disk; wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-nocloud-arm64.qcow2)
cp build/disk/debian-12-nocloud-arm64.qcow2 build/disk.qcow2
MODULES_TAR=$(ls -1 build/linux-install/modules-*.tar.gz)
DEB=$(cd xen/dist; ls -1 xen-*.deb)
guestfish --rw -a build/disk.qcow2 <<EOF
run
mount /dev/sda1 /
tar-in $MODULES_TAR / compress:gzip
upload build/vhost-device-i2c /root/vhost-device-i2c
upload build/xen-vhost-frontend /root/xen-vhost-frontend
upload xen/dist/$DEB /root/$DEB
upload guestKernelImage /root/guestKernelImage
upload domu.conf /root/domu.conf
EOF |
If guestfish is not available, as a last resort you can scp the files while QEMU is running with scp -P 8022 file root@localhost:/root/. To get an ssh login, use ssh -p 8022 root@localhost (capital P for scp, lowercase p for ssh). The Debian nocloud image used in the next step has the default user root with no password. You will need to install an OpenSSH server (apt update && apt install -y openssh-server) and then enable it by:
edit /etc/ssh/sshd_config and un-comment the lines:
Code Block |
---|
Port 22
AddressFamily any
ListenAddress 0.0.0.0
ListenAddress ::
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
PermitRootLogin yes
PasswordAuthentication yes
PermitEmptyPasswords yes |
service ssh restart
Run Xen via QEMU on an x86 machine:
Debian qcow2
source: https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-nocloud-arm64.qcow2
Code Block |
---|
./build/qemu-install/bin/qemu-system-aarch64 -machine virt,virtualization=on -cpu cortex-a57 -serial mon:stdio \
 -device virtio-net-pci,netdev=net0 -netdev user,id=net0,hostfwd=tcp::8022-:22 \
 -drive file=./build/disk.qcow2,index=0,id=hd0,if=none,format=qcow2 \
 -device virtio-scsi-pci -device scsi-hd,drive=hd0 \
 -display none -m 8192 -smp 8 -kernel ./xen/xen/xen \
 -append "dom0_mem=5G,max:5G dom0_max_vcpus=7 loglvl=all guest_loglvl=all" \
 -device guest-loader,addr=0x49000000,kernel=./build/Image,bootargs="root=/dev/sda1 console=hvc0 earlyprintk=xen" \
 -device ds1338,address=0x20 |
...
First start the I2C backend in the background.
Code Block |
---|
vhost-device-i2c -s /root/i2c.sock -c 1 -l "90c0000.i2c:32" & |
This tells the I2C backend to hook up to the /root/i2c.sock0 socket and wait for the master to start transacting.
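In the -l argument, 90c0000.i2c is the host I2C adapter name and 32 is the client address in decimal (0x20, matching the ds1338 device added on the QEMU command line). A quick sketch of how such an entry splits:

```shell
# split an I2C device-list entry of the form "<adapter>:<client-address>"
spec="90c0000.i2c:32"
adapter="${spec%%:*}"   # host I2C adapter name
addr="${spec##*:}"      # client address, decimal (32 == 0x20)
echo "$adapter $addr"
```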
...
Then start xen-vhost-frontend
in the background, providing the path of the socket to the master side. By default this will create grant-mappings for the memory regions (buffers are mapped on the fly).
Code Block |
---|
xen-vhost-frontend --socket-path /root/ & |
Now that all the preparations are done, let's start the guest.
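For reference, a minimal sketch of what the domu.conf uploaded earlier might contain; the kernel path matches the uploaded guest image, while the name, memory, vcpu, and console values are illustrative defaults, not taken from the original setup:

```
# /root/domu.conf — minimal xl guest config (illustrative values)
name   = "domu"
kernel = "/root/guestKernelImage"
memory = 512
vcpus  = 2
extra  = "console=hvc0"
```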
...