Building an RME stack for QEMU


The whole software stack for CCA is in development, meaning instructions will change frequently and repositories are temporary.

With the OP-TEE build environment

This method requires at least the following tools and libraries; the manual build described below also requires most of them. A sample installation command is given after the list.

  • repo

  • python3-pyelftools, python3-venv

  • acpica-tools

  • openssl (debian libssl-dev)

  • libglib2.0-dev, libpixman-1-dev

  • dtc (debian device-tree-compiler)

  • flex, bison

  • make, cmake, ninja (debian ninja-build), curl, rsync
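
On Debian or Ubuntu, for instance, these roughly correspond to the following packages (a sketch; package names may differ on other distributions):

sudo apt install repo python3-pyelftools python3-venv acpica-tools \
    libssl-dev libglib2.0-dev libpixman-1-dev device-tree-compiler \
    flex bison make cmake ninja-build curl rsync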

The easiest way to build and run a complete stack is through OP-TEE. The following commands will download all components and build them, in about thirty minutes on a fast machine.

mkdir v1.0-eac5
cd v1.0-eac5
repo init -u https://git.codelinaro.org/linaro/dcap/op-tee/manifest.git -b v1.0-eac5 -m qemu_v8_cca.xml
repo sync -j8 --no-clone-bundle
cd build
make -j8 CCA_SUPPORT=y toolchains
make -j8 CCA_SUPPORT=y

Note: if the build fails, retry without -j; this will point out missing dependencies and works around a possible issue with the edk2 build.

Images can be found under v1.0-eac5/out/ and v1.0-eac5/out-br/. The following command launches the system emulation QEMU with the RME feature enabled, running TF-A, RMM and the Linux host.

make CCA_SUPPORT=y run-only

This should launch four new terminals: Firmware, Host, Secure and Realm. Output from the boot process appears first in the Firmware terminal, then in the Host terminal. The build environment automatically makes the v1.0-eac5 directory available to the host VM via 9p.
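
For example, once the host has booted, the shared directory can be mounted from the Host terminal with the same command used later in this page:

mount -t 9p shr0 /mnt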

Read on for the details of the software stack, or skip to the following section to boot a Realm guest.

Manual build

The following sections detail how to build and run all components of the CCA software stack. Two QEMU binaries are built. The system emulation QEMU implements a complete machine, emulating Armv9 CPUs with FEAT_RME and four security states: Root, Secure, Non-secure and Realm. The VMM (Virtual Machine Manager) QEMU is cross-built by buildroot and launches the Realm guest from Non-secure EL0.

       |    REALM     |  NON-SECURE  |
-------+--------------+--------------+
  EL0  | Guest Rootfs |  Host Rootfs |
       |              |  QEMU VMM    |
-------+--------------+--------------+
  EL1  |        EDK2  |              |
       | Linux Guest  |              |
       |              |  EDK2        |
-------+--------------+  Linux Host  |
  EL2  |      TF-RMM  |    (KVM)     |
       |              |              |
-------+--------------+--------------+
 (ROOT)|                             |
  EL3  |            TF-A             |
-------+-----------------------------+
  HW   |            QEMU             |
-------+-----------------------------+

TF-RMM

The Realm Management Monitor (RMM) connects KVM and the Realm guest.

The RMM is loaded into NS DRAM (there isn't enough space in Secure RAM). TF-A carves out 24MB of memory for it (0x40100000-0x418fffff on the virt platform) and advertises the region to other software using a device-tree reserved memory node.
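
The advertised node looks roughly like this (an illustrative sketch reconstructed from the addresses above; the exact node name and properties are up to TF-A):

reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    rmm@40100000 {
        reg = <0x0 0x40100000 0x0 0x1800000>;  /* 24MB */
        no-map;
    };
};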

Status: QEMU support has been merged. Additional patches are needed until QEMU supports a couple of features that are mandatory for RME (PMUv3p7 and ECV).

Repo: extra patches are at https://git.codelinaro.org/linaro/dcap/rmm, branch rmm-v1.0-eac5.
The official repo is https://git.trustedfirmware.org/TF-RMM/tf-rmm.git/

Build:

git submodule update --init --recursive
export CROSS_COMPILE=aarch64-none-elf-
cmake -DCMAKE_BUILD_TYPE=Debug -DRMM_CONFIG=qemu_virt_defcfg -B build-qemu
cmake --build build-qemu

Host EDK2

Edk2 is the firmware used in the Non-secure world. It works out of the box. However, at the moment we rely on edk2 not allocating memory from the DRAM area reserved for the RMM, which is fragile. Future work will add support for the reserved memory node provided by TF-A in the device-tree.

Repo: https://github.com/tianocore/edk2.git or the same repo and branch as Guest edk2 below.

Build:

git submodule update --init --recursive
source edksetup.sh
make -j -C BaseTools
export GCC5_AARCH64_PREFIX=aarch64-linux-gnu-
build -b RELEASE -a AARCH64 -t GCC5 -p ArmVirtPkg/ArmVirtQemuKernel.dsc

TF-A

TF-A loads the RMM as well as the Non-secure firmware, and bridges RMM and KVM. It also owns the Granule Protection Table (GPT).

Status: QEMU support is currently under review.

Repo: currently at https://git.codelinaro.org/linaro/dcap/tf-a/trusted-firmware-a, branch v1.0-eac5.
The official repo is https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/

Build:

# Embed the RMM image and edk2 into the Firmware Image Package (FIP)
make -j CROSS_COMPILE=aarch64-linux-gnu- PLAT=qemu ENABLE_RME=1 DEBUG=1 LOG_LEVEL=40 \
    QEMU_USE_GIC_DRIVER=QEMU_GICV3 RMM=../rmm/build-qemu/Debug/rmm.img \
    BL33=../edk2/Build/ArmVirtQemuKernel-AARCH64/RELEASE_GCC5/FV/QEMU_EFI.fd all fip
# Pack bl1 and the FIP into flash.bin (the FIP sits at offset 256KB: seek=64 with bs=4096)
dd if=build/qemu/debug/bl1.bin of=flash.bin
dd if=build/qemu/debug/fip.bin of=flash.bin seek=64 bs=4096

Host and guest Linux

Both host and guest need extra patches.

Status: https://lore.kernel.org/linux-arm-kernel/20231002124311.204614-1-suzuki.poulose@arm.com/

Repo: https://gitlab.arm.com/linux-arm/linux-cca, branch cca-full/rmm-v1.0/rfc-v2

Build:

make CROSS_COMPILE=aarch64-linux-gnu- ARCH=arm64 defconfig
make CROSS_COMPILE=aarch64-linux-gnu- ARCH=arm64 -j8

Guest edk2

The QEMU VMM can either launch the guest kernel itself, or launch edk2, which then loads the kernel or an intermediate bootloader. The latter method is generally used to boot a Linux distribution. Edk2 needs modifications in order to run as a Realm guest.

Status: in development. Only the ArmVirtQemu firmware supports booting in a Realm at the moment, not ArmVirtQemuKernel.

Repo: https://git.codelinaro.org/linaro/dcap/edk2 branch rmm-v1.0-eac5

Build:

git submodule update --init --recursive
source edksetup.sh
make -j -C BaseTools
export GCC5_AARCH64_PREFIX=aarch64-linux-gnu-
build -b DEBUG -a AARCH64 -t GCC5 -p ArmVirtPkg/ArmVirtQemu.dsc

Note that the DEBUG build is very verbose (even with a few patches that remove repetitive messages), which is extremely slow in a nested environment with an emulated UART. Change -b DEBUG to -b RELEASE to speed up the guest boot.
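
That is:

build -b RELEASE -a AARCH64 -t GCC5 -p ArmVirtPkg/ArmVirtQemu.dsc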

QEMU VMM

Both kvmtool and QEMU can be used to launch Realm guests. For details about kvmtool, see the cover letter for the Linux support above.

Status: in development

Repo: for now https://git.codelinaro.org/linaro/dcap/qemu branch cca/rmm-v1.0/rfc-v2

Build:

# Although it is buildroot that builds the VMM from this source directory,
# the following is needed to first download all the submodules
./configure --target-list=aarch64-softmmu

Root filesystem

Buildroot provides a convenient way to build lightweight root filesystems. It can also embed the VMM into the rootfs if you specify the path to kvmtool or QEMU source in a local.mk file in the build directory.

Repo: https://gitlab.com/buildroot.org/buildroot.git
Use the master branch to have up-to-date recipes for building QEMU.

Create local.mk (at the root of the source directory, or in the build directory when building out of tree):

QEMU_OVERRIDE_SRCDIR = path/to/qemu/ # Sources of the QEMU VMM
KVMTOOL_OVERRIDE_SRCDIR = path/to/kvmtool/  # if you want to use kvmtool as VMM

Note that after modifying the QEMU VMM sources, the package needs to be rebuilt explicitly through buildroot with make qemu-rebuild.
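
A typical edit/rebuild cycle therefore looks like this (run from the buildroot build directory):

make qemu-rebuild   # rebuild only the qemu package from the override sources
make                # regenerate the rootfs images with the new binary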

Build:

make qemu_aarch64_virt_defconfig
make menuconfig
  # While in menuconfig, enable/disable the following options:
  BR2_LINUX_KERNEL=n
  BR2_PACKAGE_KVMTOOL=y
  BR2_PACKAGE_QEMU=y
  BR2_PACKAGE_QEMU_SYSTEM=y
  BR2_PACKAGE_QEMU_BLOBS=n
  BR2_PACKAGE_QEMU_SLIRP=y
  BR2_PACKAGE_QEMU_CHOOSE_TARGETS=y
  BR2_PACKAGE_QEMU_TARGET_AARCH64=y
  BR2_TARGET_ROOTFS_EXT2_SIZE=256M
  
  # Generate an initrd for the guest
  BR2_TARGET_ROOTFS_CPIO=y
make

This creates the rootfs images in buildroot’s output/images/ when building in-tree, or images/ when building out of tree.

Guest disk image for edk2

To create a guest disk image that more closely resembles a Linux distribution, containing the grub2 bootloader and the kernel, have a look at buildroot’s configs/aarch64_efi_defconfig, which enables a few options to generate a disk with an EFI partition:

  BR2_PACKAGE_HOST_GENIMAGE=y
  BR2_PACKAGE_HOST_DOSFSTOOLS=y
  BR2_PACKAGE_HOST_MTOOLS=y
  BR2_TARGET_GRUB2=y
  BR2_TARGET_GRUB2_ARM64_EFI=y
  BR2_ROOTFS_POST_IMAGE_SCRIPT="board/aarch64-efi/post-image.sh support/scripts/genimage.sh"
  BR2_ROOTFS_POST_SCRIPT_ARGS="-c board/aarch64-efi/genimage-efi.cfg"

# Copy the guest kernel Image into buildroot's build directory, where it will be
# picked up by genimage.
mkdir buildroot/output/images/
cp linux/arch/arm64/boot/Image buildroot/output/images/Image

make

With these, after generating the root filesystem, buildroot packs it into another disk image, images/disk.img, along with an EFI FAT partition that contains grub and the kernel Image (the layout is defined by board/aarch64-efi/genimage-efi.cfg).

QEMU system emulation

Repo: https://gitlab.com/qemu-project/qemu.git or the same repository as the VMM.

Build: do not build in the same source directory as the VMM! Since buildroot copies the whole content of that source directory, binary files will conflict (the VMM is cross-built while the system emulation QEMU is native). If you want to use the same source directory, use a separate build directory, as shown here:

mkdir -p ../build/qemu/ # outside of the source directory
cd ../build/qemu/
../../qemu/configure --target-list=aarch64-softmmu
make -j

Running the system emulation

QEMU will connect to four TCP ports for the different consoles. Create the servers manually with socat -,rawer TCP-LISTEN:5432x (x = 0, 1, 2, 3) or use the script given at the end.

# -nodefaults and the serial/chardev parameters below provide separate consoles
# for the Firmware (port 54320), Secure payload (54321), host (54322) and
# guest (54323). The virtio-9p device shares the current directory with the
# host, providing the files needed to launch the guest.
qemu-system-aarch64 -M virt,virtualization=on,secure=on,gic-version=3 \
        -M acpi=off -cpu max,x-rme=on -m 8G -smp 8 \
        -nographic \
        -bios tf-a/flash.bin \
        -kernel linux/arch/arm64/boot/Image \
        -drive format=raw,if=none,file=buildroot/output/images/rootfs.ext4,id=hd0 \
        -device virtio-blk-pci,drive=hd0 \
        -nodefaults \
        -serial tcp:localhost:54320 \
        -serial tcp:localhost:54321 \
        -chardev socket,mux=on,id=hvc0,port=54322,host=localhost \
        -device virtio-serial-device \
        -device virtconsole,chardev=hvc0 \
        -chardev socket,mux=on,id=hvc1,port=54323,host=localhost \
        -device virtio-serial-device \
        -device virtconsole,chardev=hvc1 \
        -append "root=/dev/vda console=hvc0" \
        -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
        -device virtio-9p-device,fsdev=shr0,mount_tag=shr0 \
        -fsdev local,security_model=none,path=.,id=shr0

Crucially, the x-rme=on parameter enables the (experimental) FEAT_RME.

In the host kernel log, verify that KVM communicates with the RMM and is ready to launch Realm guests:

[    0.893261] kvm [1]: Using prototype RMM support (version 66.0)

Note: The base system (started above) is currently set to 8GB of RAM, which we believe is enough memory to demonstrate how CCA works in a simulated environment. Modifications to the trusted firmware and RMM elements are needed if a different value is selected.

Launching a Realm guest

Once at the host command line prompt, simply log in as root.

Mount the shared directory with:

mount -t 9p shr0 /mnt

Launching a Realm guest using QEMU

The following script uses the QEMU VMM to launch a Realm guest with KVM.

#!/bin/sh

USE_VIRTCONSOLE=true
USE_EDK2=false
USE_INITRD=true
DIRECT_KERNEL_BOOT=true
USE_OPTEE_BUILD=true
VM_MEMORY=512M

if $USE_OPTEE_BUILD; then
    KERNEL=/mnt/out/bin/Image
    INITRD=/mnt/out-br/images/rootfs.cpio
    EDK2=TODO
    DISK=TODO
else
    # Manual method:
    KERNEL=/mnt/linux/arch/arm64/boot/Image
    INITRD=/mnt/buildroot/output/images/rootfs.cpio
    EDK2=/mnt/edk2/Build/ArmVirtQemu-AARCH64/DEBUG_GCC5/FV/QEMU_EFI.fd
    DISK=/mnt/buildroot/output/images/disk.img
fi

add_qemu_arg () {
    QEMU_ARGS="$QEMU_ARGS $*"
}
add_kernel_arg () {
    KERNEL_ARGS="$KERNEL_ARGS $*"
}

add_qemu_arg -M virt,acpi=off,gic-version=3 -cpu host -enable-kvm
add_qemu_arg -smp 2 -m $VM_MEMORY -overcommit mem-lock=on
add_qemu_arg -M confidential-guest-support=rme0
add_qemu_arg -object rme-guest,id=rme0,measurement-algo=sha512,num-pmu-counters=6,sve-vector-length=256
add_qemu_arg -device virtio-net-pci,netdev=net0,romfile=""
add_qemu_arg -netdev user,id=net0

if $USE_VIRTCONSOLE; then
    add_kernel_arg console=hvc0
    add_qemu_arg -nodefaults
    add_qemu_arg -chardev stdio,mux=on,id=hvc0,signal=off
    add_qemu_arg -device virtio-serial-pci -device virtconsole,chardev=hvc0
else
    add_kernel_arg console=ttyAMA0 earlycon
    add_qemu_arg -nographic
fi

if $USE_EDK2; then
    add_qemu_arg -bios $EDK2
fi

if $DIRECT_KERNEL_BOOT; then
    add_qemu_arg -kernel $KERNEL
else
    $USE_INITRD && echo "Initrd requires direct kernel boot" && exit 1
fi

if $USE_INITRD; then
    add_qemu_arg -initrd $INITRD
else
    add_qemu_arg -device virtio-blk-pci,drive=rootfs0
    add_qemu_arg -drive format=raw,if=none,file="$DISK",id=rootfs0
    add_kernel_arg root=/dev/vda2
fi

$USE_EDK2 && $USE_VIRTCONSOLE && ! $USE_INITRD && \
    echo "Don't forget to add console=hvc0 to grub.cfg"

if $DIRECT_KERNEL_BOOT; then
    set -x
    qemu-system-aarch64 $QEMU_ARGS  \
        -append "$KERNEL_ARGS"      \
            </dev/hvc1 >/dev/hvc1
else
    set -x
    qemu-system-aarch64 $QEMU_ARGS  \
            </dev/hvc1 >/dev/hvc1
fi

The -M confidential-guest-support=rme0 and -object rme-guest,id=rme0,measurement-algo=sha512,num-pmu-counters=6,sve-vector-length=256 parameters declare this as a Realm VM and configure its parameters. Do note that the syntax will change as we aim to reuse existing QEMU parameters (notably SVE and PMU).

Save this as an executable script in the shared folder and, in the host, launch it with:

/mnt/realm.sh

You should see RMM logs in the Firmware terminal:

# RMI (Realm Management Interface) is the protocol that the host uses to
# communicate with the RMM
SMC_RMM_REC_CREATE            45659000 456ad000 446b1000 > RMI_SUCCESS
SMC_RMM_REALM_ACTIVATE        45659000 > RMI_SUCCESS

# RSI (Realm Service Interface) is the protocol that the guest uses to
# communicate with the RMM
SMC_RSI_ABI_VERSION           > d0000
SMC_RSI_REALM_CONFIG          41afe000 > RSI_SUCCESS
SMC_RSI_IPA_STATE_SET         40000000 60000000 1 0 > RSI_SUCCESS 60000000

A few minutes later, the guest kernel starts booting in the Realm terminal.

Launching a Realm guest using kvmtool

lkvm run --realm -c 2 -m 2G \
    -k /mnt/out/bin/Image \
    -d /mnt/out-br/images/rootfs.ext4 \
    -p "console=hvc0 root=/dev/vda" \
    </dev/hvc1 >/dev/hvc1

Running edk2 as a guest

Enable USE_EDK2 to boot the Realm guest with the edk2 firmware. It can either load the kernel and initrd through the FwCfg device provided by QEMU (DIRECT_KERNEL_BOOT=true), or launch a bootloader from a disk image (see grub2 above). When booting the kernel directly, edk2 measures the kernel, initrd and parameters provided on the QEMU command line and adds them to the Realm Extended Measurement via RSI calls, so that they can be attested later.

Notes:

  • Disable USE_VIRTCONSOLE in order to see all boot logs. This enables the emulated PL011 serial and is much slower. Although edk2 does support virtio-console, it doesn’t display the debug output there (but you’ll still see RMM logs showing progress during boot).

  • When booting via grub2, the kernel parameters are stored in grub.cfg, which is copied from board/aarch64-efi/grub.cfg by the buildroot script board/aarch64-efi/post-image.sh. By default the kernel parameters do not define a console, so Linux will determine the boot console from the device tree’s /chosen/stdout-path property, which QEMU initializes to the default serial console. So if you want to boot with virtconsole, add console=hvc0 to board/aarch64-efi/grub.cfg before building with buildroot (see the sketch below).
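
For instance, something of this shape appends the parameter to the kernel line (a hypothetical one-liner; adjust the pattern to the actual contents of grub.cfg):

sed -i '/linux /s/$/ console=hvc0/' board/aarch64-efi/grub.cfg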

Tips

Automate some things in the host boot

You can add files to the buildroot images by providing an overlay directory. The BR2_ROOTFS_OVERLAY option points to the directory that will be added into the image (see the option example after the tree below). For example, I use:

├── etc
│   ├── init.d
│   │   └── S50-shr
│   └── inittab
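
The option itself is set in menuconfig or in the defconfig, e.g. (board/rme/overlay being a hypothetical path to the tree above):

BR2_ROOTFS_OVERLAY="board/rme/overlay"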

S50-shr is an initscript that mounts the shared directory:

#!/bin/sh

case $1 in
    start)
        mkdir -p /mnt/shr0/
        mount -t 9p shr0 /mnt/shr0/
        ;;
    stop)
        umount /mnt/shr0/
        rmdir /mnt/shr0/
        ;;
esac

inittab is buildroot’s package/busybox/inittab, modified to automatically log in as root (the respawn line). It could also mount the 9p filesystem; see the example after the listing.

# /etc/inittab
#
# Copyright (C) 2001 Erik Andersen <andersen@codepoet.org>
#
# Note: BusyBox init doesn't support runlevels.  The runlevels field is
# completely ignored by BusyBox init. If you want runlevels, use
# sysvinit.
#
# Format for each entry: <id>:<runlevels>:<action>:<process>
#
# id        == tty to run on, or empty for /dev/console
# runlevels == ignored
# action    == one of sysinit, respawn, askfirst, wait, and once
# process   == program to run

# Startup the system
::sysinit:/bin/mount -t proc proc /proc
::sysinit:/bin/mount -o remount,rw /
::sysinit:/bin/mkdir -p /dev/pts /dev/shm
::sysinit:/bin/mount -a
::sysinit:/bin/mount -t debugfs debugfs /sys/kernel/debug
::sysinit:/sbin/swapon -a
null::sysinit:/bin/ln -sf /proc/self/fd /dev/fd
null::sysinit:/bin/ln -sf /proc/self/fd/0 /dev/stdin
null::sysinit:/bin/ln -sf /proc/self/fd/1 /dev/stdout
null::sysinit:/bin/ln -sf /proc/self/fd/2 /dev/stderr
::sysinit:/bin/hostname -F /etc/hostname
# now run any rc scripts
::sysinit:/etc/init.d/rcS

# Put a getty on the serial port
#console::respawn:/sbin/getty -L  console 0 vt100 # GENERIC_SERIAL

::respawn:-/bin/sh

# Stuff to do for the 3-finger salute
#::ctrlaltdel:/sbin/reboot

# Stuff to do before rebooting
::shutdown:/etc/init.d/rcK
::shutdown:/sbin/swapoff -a
::shutdown:/bin/umount -a -r
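
To mount the share from inittab rather than from an initscript, sysinit lines like these would do (a sketch; same mount command as in S50-shr above):

null::sysinit:/bin/mkdir -p /mnt/shr0
null::sysinit:/bin/mount -t 9p shr0 /mnt/shr0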

Spawn four consoles with servers listening for the QEMU system emulation

This uses OP-TEE’s soc_term.py:

#!/bin/bash

SOC_TERM=path/to/optee/soc_term.py

xterm -title "Firmware" -e bash -c "$SOC_TERM 54320" &
xterm -title "Secure" -e bash -c "$SOC_TERM 54321" &
xterm -title "host" -e bash -c "$SOC_TERM 54322" &
xterm -title "Realm" -e bash -c "$SOC_TERM 54323" &

while ! nc -z 127.0.0.1 54320 || ! nc -z 127.0.0.1 54321 || ! nc -z 127.0.0.1 54322 || ! nc -z 127.0.0.1 54323; do sleep 1; done