Setting up Debian on a virtual QEMU Aarch64 machine
Test images are great and all, but sometimes you just want a normal distro running in your guest. Here are some reasonably up-to-date runes that are correct for Debian Bookworm (although Buster worked much the same way).
In the old days installing distros on Arm machines involved all sorts of special installers and kernels. These days, thanks to firmware standardisation, you can simply plug in a virtual cdrom/usbkey and boot the normal installer.
Selecting the proper Debian image to download
Depending on your needs, you might want either a custom installation or a ready-to-use image that boots directly to a root shell with a standard Debian installation.
If you want the latter, instead of the installer image you can download the nocloud variety of the Debian cloud qcow2 files. (The name is confusing, but it means the image does not run cloud account setup tools like cloud-init, and you can log in as root without a password.) For example, the Debian Trixie release nocloud variant URL is https://cloud.debian.org/images/cloud/trixie/daily/latest/debian-13-nocloud-arm64-daily.qcow2
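The daily image URLs follow a fixed pattern, so you can compose one for whichever release you are after. A small sketch; the codename/version pairing is an assumption you should check against cloud.debian.org:

```shell
# Compose the daily nocloud image URL for a given Debian release.
release=trixie   # codename
version=13       # matching numeric release
url="https://cloud.debian.org/images/cloud/${release}/daily/latest/debian-${version}-nocloud-arm64-daily.qcow2"
echo "$url"      # then fetch it with: wget "$url"
```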
How do we boot the image?
The next step is to decide whether to boot with firmware or without (direct kernel boot).
With firmware: you can use the pre-built EDK2 blobs that QEMU comes with. Distros also package them separately; on Debian the package is qemu-efi-aarch64. QEMU will need two flash devices as command line arguments, one pflash device for the firmware blob and one pflash device for the EFI variable storage.

Without firmware: you will need to pass the kernel image and initrd directly on the command line. That means you might need to extract them from the qcow2 file. With the nbd kernel driver, you can mount it like this:

qemu-nbd --connect=/dev/nbd0 path/to/debian-nocloud.qcow2
mount /dev/nbd0p1 /tmp/somepoint
cp /tmp/somepoint/boot/vmlinuz-6.5.0-4-arm64 ./vmlinuz-6.5.0-4-arm64
cp /tmp/somepoint/boot/initrd.img-6.5.0-4-arm64 ./initrd.img-6.5.0-4-arm64

Then you can use those files as parameters to QEMU:
qemu-system-aarch64 \
-machine type=virt \
-cpu max \
-smp 8 \
-accel tcg \
-drive if=virtio,format=qcow2,file=./disk-deb13-nocloud-u1.qcow2 \
-device virtio-net-pci,netdev=unet \
-device virtio-scsi-pci \
-netdev user,id=unet,hostfwd=tcp::2222-:22 \
-serial mon:stdio \
-m 8192 \
-object memory-backend-memfd,id=mem,size=8G,share=on \
-kernel ./vmlinuz-6.5.0-4-arm64 \
-initrd ./initrd.img-6.5.0-4-arm64 \
-append "root=/dev/vda1 ro"
Block devices
The canonical block device for QEMU is the QCOW2 file, which is great for all sorts of things including snapshots and the ability to save and restore. However, you can expose a raw block device as well (a spare disk, an empty partition, or a virtual block device) and use that directly. In this case I’m using LVM on my host system, so I can create a block device just for my guest:
lvcreate -L 30G -n bookworm-arm64 zen-ssd2

Here I create a 30G disk called bookworm-arm64 on my spare-SSD volume group (zen-ssd2).
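If you don't have a spare volume group, a sparse raw file works too; you would then point the blockdev at it with file.driver=file instead of host_device. For example:

```shell
# Create a sparse 30G raw image; it takes no real disk space until written to.
truncate -s 30G bookworm-arm64.img
ls -lsh bookworm-arm64.img   # first column shows actual blocks allocated
```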
Firmware
Firmware comes in two parts, one on each flash device. The first contains the actual firmware and the second is an area where the firmware can store important details about the system such as boot order and potentially other secure variables.
QEMU from the source tree comes with some pre-built blobs you can use, and your distro will have packaged blobs too. Pick one or the other and stick with it, as the variable layout is likely to differ between builds.
You will need to ensure the blobs have been padded to 64MiB, otherwise the pflash device will complain. The firmware packages on Debian already come suitably padded.
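If you do pick up an unpadded blob, truncate can grow it in place. A sketch using a dummy 2MiB file standing in for the firmware:

```shell
# Fake a 2MiB "firmware" blob, then pad it to the 64MiB pflash expects.
dd if=/dev/zero of=flash.img bs=1M count=2 status=none
truncate -s 64M flash.img   # extends with zeros
stat -c %s flash.img        # prints 67108864
```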
➜ ls -lh /usr/share/AAVMF/AAVMF_VARS.fd
-rw-r--r-- 1 root root 64M Mar  5  2023 /usr/share/AAVMF/AAVMF_VARS.fd
➜ cp /usr/share/AAVMF/AAVMF_VARS.fd ~/images/qemu-arm64-efivars

Run the installer
Most of the command line is the same as actually running the machine, except for the last two lines which attach the cdrom with the netinstall ISO. When you boot you may need to enter the UEFI config to ensure it boots off the CDROM.
./qemu-system-aarch64 \
-machine type=virt,virtualization=on,pflash0=rom,pflash1=efivars \
-cpu cortex-a53 \
-smp 8 \
-accel tcg \
-device virtio-net-pci,netdev=unet \
-device virtio-scsi-pci \
-device scsi-hd,drive=hd \
-netdev user,id=unet,hostfwd=tcp::2222-:22 \
-blockdev driver=raw,node-name=hd,file.driver=host_device,file.filename=/dev/zen-ssd2/bookworm-arm64,discard=unmap \
-serial mon:stdio \
-blockdev node-name=rom,driver=file,filename=$(pwd)/pc-bios/edk2-aarch64-code.fd,read-only=true \
-blockdev node-name=efivars,driver=file,filename=$HOME/images/qemu-arm64-efivars \
-m 8192 \
-object memory-backend-memfd,id=mem,size=8G,share=on \
-display none \
-blockdev driver=raw,node-name=cdrom,file.driver=file,file.filename=/home/alex/Downloads/ISOs/debian-12.2.0-arm64-netinst.iso \
-device scsi-cd,drive=cdrom

A few quick notes:
Don’t format your guest with LVM; this will just make direct kernel booting a pain.
You’ll want to install the openssh server.
Install the GUI if you want, but you’re not likely to use it much.
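The openssh server is worth it because of the hostfwd rule in the -netdev line: host TCP port 2222 forwards to guest port 22. A small sketch showing how the option string decodes (the user name is whatever you created in the installer):

```shell
# hostfwd=tcp::2222-:22 means: host TCP port 2222 -> guest port 22
hostfwd="tcp::2222-:22"
hostport=${hostfwd##*::}   # strip up to "::"  -> "2222-:22"
hostport=${hostport%%-*}   # strip from "-"    -> "2222"
echo "ssh -p ${hostport} user@localhost"
```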
Booting the system with graphics
All the differences are at the end: this time we enable a GTK display with GL support, plus a USB keyboard and tablet for keyboard and mouse input.
./qemu-system-aarch64 \
-machine type=virt,virtualization=on,pflash0=rom,pflash1=efivars \
-cpu max,pauth-impdef=on \
-smp 8 \
-accel tcg \
-device virtio-net-pci,netdev=unet \
-device virtio-scsi-pci \
-device scsi-hd,drive=hd \
-netdev user,id=unet,hostfwd=tcp::2222-:22 \
-blockdev driver=raw,node-name=hd,file.driver=host_device,file.filename=/dev/zen-ssd2/bookworm-arm64,discard=unmap \
-serial mon:stdio \
-blockdev node-name=rom,driver=file,filename=$(pwd)/pc-bios/edk2-aarch64-code.fd,read-only=true \
-blockdev node-name=efivars,driver=file,filename=$HOME/images/qemu-arm64-efivars \
-m 8192 \
-object memory-backend-memfd,id=mem,size=8G,share=on \
-device virtio-gpu-pci \
-device qemu-xhci -device usb-kbd -device usb-tablet \
-display gtk,gl=on

Booting a custom kernel
One of the main reasons to have a test system is to test kernels. This is done by passing the -kernel and -initrd options. If no UEFI ROM is supplied you will boot the kernel directly; otherwise the UEFI firmware will pick up the kernel via the fw_cfg interface and boot it instead of loading grub.
./qemu-system-aarch64 \
-machine type=virt,virtualization=on,pflash0=rom,pflash1=efivars \
-cpu max,pauth-impdef=on \
-smp 8 \
-accel tcg \
-device virtio-net-pci,netdev=unet \
-device virtio-scsi-pci \
-device scsi-hd,drive=hd \
-netdev user,id=unet,hostfwd=tcp::2222-:22 \
-blockdev driver=raw,node-name=hd,file.driver=host_device,file.filename=/dev/zen-ssd2/bookworm-arm64,discard=unmap \
-serial mon:stdio \
-blockdev node-name=rom,driver=file,filename=$(pwd)/pc-bios/edk2-aarch64-code.fd,read-only=true \
-blockdev node-name=efivars,driver=file,filename=$HOME/images/qemu-arm64-efivars \
-m 8192 \
-object memory-backend-memfd,id=mem,size=8G,share=on \
-display none \
-kernel /home/alex/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image \
-append "root=/dev/sda2"

Installing Xen
The best way is to cross-build the version of Xen you are interested in on your host and then:
make debball CROSS_COMPILE=aarch64-linux-gnu- XEN_TARGET_ARCH=arm64

You can then copy the resulting package to your emulated system and install it. Make sure you haven’t installed any of the distro Xen bits. You may need to run update-grub manually to add the Xen entries.
While you probably don’t want to touch most of the tooling, you may well want to build your own QEMU so you can have the latest Xen-enabling bits. We strip down the config, as building inside QEMU will be slower than native:
git clone https://gitlab.com/qemu-project/qemu.git qemu.git
cd qemu.git
mkdir -p builds/xen
cd builds/xen
../../configure --disable-docs --disable-tools --disable-user --disable-tcg --disable-kvm
ninja

And finally tweak /etc/default/xencommons to point at it:
# qemu path
QEMU_XEN=/root/lsrc/qemu.git/builds/xen/qemu-system-i386

Then you can systemctl restart xencommons.service or reboot, and you should be able to list Xen domains:
18:43:39 [root@debian-arm64:~/l/q/b/xen] + systemctl restart xencommons.service
18:43:45 [root@debian-arm64:~/l/q/b/xen] + systemctl status xencommons.service
● xencommons.service - LSB: Start/stop xenstored and xenconsoled
Loaded: loaded (/etc/init.d/xencommons; generated)
Active: active (running) since Mon 2023-12-18 18:43:45 GMT; 6s ago
Docs: man:systemd-sysv-generator(8)
Process: 15117 ExecStart=/etc/init.d/xencommons start (code=exited, status=0/SUCCESS)
Tasks: 9 (limit: 4659)
Memory: 33.0M
CPU: 501ms
CGroup: /system.slice/xencommons.service
├─ 1135 /usr/local/sbin/xenstored --pid-file /var/run/xen/xenstored.pid
├─ 1141 /usr/local/sbin/xenconsoled --pid-file=/var/run/xen/xenconsoled.pid
├─15140 /usr/local/sbin/xenconsoled --pid-file=/var/run/xen/xenconsoled.pid
└─15146 /root/lsrc/qemu.git/builds/xen/qemu-system-i386 -xen-domid 0 -xen-attach -name dom0 -nographic -M xenpv -daemonize -monitor /dev/null -serial /dev/null>
Dec 18 18:43:44 debian-arm64 systemd[1]: Starting xencommons.service - LSB: Start/stop xenstored and xenconsoled...
Dec 18 18:43:45 debian-arm64 xencommons[15117]: Setting domain 0 name, domid and JSON config...
Dec 18 18:43:45 debian-arm64 xencommons[15137]: Dom0 is already set up
Dec 18 18:43:45 debian-arm64 xencommons[15117]: Starting xenconsoled...
Dec 18 18:43:45 debian-arm64 xencommons[15117]: Starting QEMU as disk backend for dom0
Dec 18 18:43:45 debian-arm64 systemd[1]: Started xencommons.service - LSB: Start/stop xenstored and xenconsoled.
18:44:02 [root@debian-arm64:~/l/q/b/xen] 1 + xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  4096     8     r-----    6753.8

Booting Xen Directly
Once you have the user space tooling installed you can boot the hypervisor directly and manually load the dom0 kernel. Note you’ll want to skip the UEFI BIOS for this; we also downgrade the CPU, as Xen doesn’t support SVE+ out of the box.
./qemu-system-aarch64 \
-machine type=virt,virtualization=on \
-cpu cortex-a57 \
-smp 8 \
-accel tcg \
-device virtio-net-pci,netdev=unet \
-device virtio-scsi-pci \
-device scsi-hd,drive=hd \
-netdev user,id=unet,hostfwd=tcp::2222-:22 \
-blockdev driver=raw,node-name=hd,file.driver=host_device,file.filename=/dev/zen-ssd2/bookworm-arm64,discard=unmap \
-serial mon:stdio \
-m 8192 \
-object memory-backend-memfd,id=mem,size=8G,share=on \
-display none \
-kernel $HOME/lsrc/xen/xen.git/xen/xen.efi \
-append "dom0_mem=4G,max:4G loglvl=all guest_loglvl=all" \
-device guest-loader,addr=0x49000000,kernel=$HOME/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image,bootargs="console=hvc0 earlyprintk=xen root=/dev/sda2"
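From here you can define a domU of your own. A minimal sketch of writing an xl config from the shell (every name and path is a placeholder; check the xl.cfg(5) man page for the real option list):

```shell
# Sketch: write a minimal domU config. All names/paths are placeholders.
cat > guest0.cfg <<'EOF'
name = "guest0"
kernel = "/root/Image"
memory = 1024
vcpus = 2
extra = "console=hvc0"
EOF
# then boot it with: xl create -c guest0.cfg
```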