Building an RME stack for QEMU
The whole software stack for CCA is in development, meaning instructions will change frequently and repositories are temporary. The instructions to compile the stack, both manually and with the OP-TEE build environment, were written on an Ubuntu 22.04 LTS based system.
Note: Even though this software stack uses the OP-TEE build environment, it does not build or run OP-TEE itself.
- 1 Release Notes
- 1.1 With the OP-TEE build environment
- 1.1.1 Virt machine:
- 1.1.2 SBSA machine:
- 1.2 Manual build
- 1.2.1 Host and guest Linux
- 1.2.2 Guest edk2
- 1.2.3 QEMU and kvmtool VMM
- 1.2.4 Cloud-hypervisor
- 1.2.5 Root filesystem
- 1.2.5.1 Guest disk image for edk2
- 1.2.5.2 Build the Ubuntu Rootfs
- 1.2.6 QEMU system emulation
- 1.2.7 Running the system emulation
- 1.2.7.1 QEMU-virt startup script:
- 1.2.7.2 QEMU-sbsa startup script:
- 1.3 Launching a Realm guest
- 1.4 Attestation Proof of Concept
- 1.4.1 cca-workload-attestation
- 1.4.2 keybroker-demo
- 1.4.3 realm-measurements
- 1.5 Tips
Release Notes
The latest build is “cca/v7”
The name of the branch matches the Linux kernel mailing list’s CCA patchset the reference stack is built upon. Individual projects listed in the manifest may or may not have the same naming convention, or may reference older patchsets. This is normal and expected.
There is a lot of churn in the CCA reference stack. As such we encourage users to upgrade to the newest revision when available.
With the OP-TEE build environment
This method requires at least the following tools and libraries. The manual build described below also requires most of them.
python3-pyelftools, python3-venv
acpica-tools
openssl (debian libssl-dev)
libglib2.0-dev, libpixman-1-dev
dtc (debian device-tree-compiler)
flex, bison
make, cmake, ninja (debian ninja-build), curl, rsync
The easiest way to build and run a complete stack is through OP-TEE. We support two system emulation QEMU machines, Virt and SBSA. The amount of system RAM supported by QEMU-virt is capped at 8GB and cannot be increased. QEMU-sbsa also uses 8GB by default but can be configured with anywhere between 2GB and 1TB.
The following commands will download all components and build them, in about thirty minutes on a fast machine. To avoid dealing with dependencies you can also build the latest stack in a container environment and run it in a tmux session with this set of scripts.
Virt machine:
mkdir cca
cd cca
repo init -u https://git.codelinaro.org/linaro/dcap/op-tee-4.2.0/manifest.git -b cca/v7 -m qemu_v8_cca.xml
repo sync -j8 --no-clone-bundle
cd build
make -j8 toolchains
make -j8
SBSA machine:
mkdir cca
cd cca
repo init -u https://git.codelinaro.org/linaro/dcap/op-tee-4.2.0/manifest.git -b cca/v7 -m sbsa_cca.xml
repo sync -j8 --no-clone-bundle
cd build
make -j8 toolchains
make -j8
Note:
If the build fails, try without -j. It will point out missing dependencies.
Add CLOUDHV=y to build cloud-hypervisor. This requires a Rust toolchain >= 1.77.
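For example, a sketch assuming the variable is passed on the make command line like the other build options:
make -j8 CLOUDHV=y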
Images can be found under cca/out/ and cca/out-br/. The following command launches system emulation QEMU with the RME feature enabled, running TF-A, RMM and the Linux host.
make run-only
This should launch 4 new terminals: Firmware, Host, Secure and Realm. Output from the boot process will start flowing in the Firmware terminal, followed by the Host terminal. The build environment automatically makes the cca directory available to the host VM via 9p.
Read on for the details of the software stack, or skip to the following section to boot a Realm guest.
Manual build
The following sections detail how to build and run all components of the CCA software stack. Two QEMU binaries are built. The system emulation QEMU implements a complete machine, emulating Armv9 CPUs with FEAT_RME and four security states: Root, Secure, Non-secure and Realm. The VMM (Virtual Machine Manager) QEMU is cross-built by buildroot, and launches the realm guest from Non-secure EL0.
       |    REALM     |  NON-SECURE  |
-------+--------------+--------------+
  EL0  | Guest Rootfs | Host Rootfs  |
       |              | QEMU VMM     |
-------+--------------+--------------+
  EL1  | EDK2         |              |
       | Linux Guest  |              |
       |              | EDK2         |
-------+--------------+ Linux Host   |
  EL2  | TF-RMM       | (KVM)        |
       |              |              |
-------+--------------+--------------+
(ROOT) |                             |
  EL3  | TF-A                        |
-------+-----------------------------+
  HW   | QEMU                        |
-------+-----------------------------+
Instructions to build the TF-RMM, TF-A and host EDK2 differ based on the QEMU machine selected for system emulation. All other components of the stack are common to both machines.
Manual build instructions for TF-RMM, TF-A and host EDK2 on QEMU-virt
Manual build instructions for TF-RMM, TF-A and host EDK2 on QEMU-sbsa
Host and guest Linux
Both host and guest need extra patches.
Status: Guest support is in Linux v6.13. Host support is still on the mailing list.
Repo: https://gitlab.arm.com/linux-arm/linux-cca cca-host/v7
Build:
make CROSS_COMPILE=aarch64-linux-gnu- ARCH=arm64 defconfig
# Enable the configfs-tsm driver that provides the attestation interface
scripts/config -e VIRT_DRIVERS -e ARM_CCA_GUEST -e CONFIG_HZ_100 \
-d CONFIG_HZ_250 -e CONFIG_MACVLAN -e CONFIG_MACVTAP
make CROSS_COMPILE=aarch64-linux-gnu- ARCH=arm64 -j8 Image
Guest edk2
The QEMU VMM can either launch the guest kernel itself, or launch edk2, which then launches the kernel or an intermediate bootloader. The latter method is generally used to boot a Linux distribution. Edk2 needs modifications in order to run as a Realm guest.
Status: in development. Only the ArmVirtQemu firmware supports booting in a Realm at the moment, not ArmVirtQemuKernel.
Repo: https://git.codelinaro.org/linaro/dcap/edk2 branch cca/latest
Build:
git submodule update --init --recursive
source edksetup.sh
make -j -C BaseTools
export GCC5_AARCH64_PREFIX=aarch64-linux-gnu-
build -b RELEASE -a AARCH64 -t GCC5 -p ArmVirtPkg/ArmVirtQemu.dsc
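The resulting firmware volume typically ends up under the Build directory, for example (RELEASE_GCC5 becomes DEBUG_GCC5 for a debug build):
Build/ArmVirtQemu-AARCH64/RELEASE_GCC5/FV/QEMU_EFI.fd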
Note that the RELEASE build is a lot faster than the DEBUG build, but doesn't provide a lot of information. If the boot doesn't work, try building with -b DEBUG.
Direct kernel boot may fail while the EFI stub loads the initrd from edk2, because the memory is already fragmented at that point and edk2 may not find a contiguous region large enough to transfer the initrd to the kernel. If initrd loading fails, try increasing guest memory.
QEMU and kvmtool VMM
Both kvmtool and QEMU can be used to launch Realm guests.
Status: in development
Repo: https://git.codelinaro.org/linaro/dcap/qemu branch cca/latest
https://git.codelinaro.org/linaro/dcap/kvmtool branch cca/log
Build:
The buildroot recipe below already includes kvmtool and QEMU for CCA, so there is no need to download them separately. For development you can use your own source directory: add a local.mk file at the root of the buildroot build directory, containing:
QEMU_CCA_OVERRIDE_SRCDIR = path/to/qemu
KVMTOOL_CCA_OVERRIDE_SRCDIR = path/to/kvmtool
You will need to run make qemu-cca-rebuild in the buildroot build directory after making changes to the QEMU source.
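Assuming buildroot's usual per-package targets, the same applies to kvmtool:
make kvmtool-cca-rebuild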
Cloud-hypervisor
Status: in development.
Repo: https://github.com/jpbrucker/cloud-hypervisor branch cca/latest
Build:
# Install the aarch64 target if necessary
rustup target add aarch64-unknown-linux-gnu
export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
cargo build --target=aarch64-unknown-linux-gnu --features=arm_rme
Then copy target/aarch64-unknown-linux-gnu/debug/cloud-hypervisor into the root filesystem or the shared folder.
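For example, assuming the cca directory is the one shared with the host VM over 9p:
cp target/aarch64-unknown-linux-gnu/debug/cloud-hypervisor path/to/cca/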
Root filesystem
Buildroot provides a convenient way to build lightweight root filesystems.
Repo: https://gitlab.com/buildroot.org/buildroot.git
Use the master branch to have up-to-date recipes for building QEMU. https://git.codelinaro.org/linaro/dcap/buildroot-external-cca.git contains additional packages used for demonstrating attestation and useful scripts. It also provides qemu-cca and kvmtool-cca packages that replace buildroot's qemu and kvmtool, in order to use the latest work-in-progress VMM support for CCA.
Build:
git clone https://git.codelinaro.org/linaro/dcap/buildroot-external-cca.git
git clone https://gitlab.com/buildroot.org/buildroot.git
cd buildroot
make BR2_EXTERNAL=path/to/buildroot-external-cca/ cca_defconfig
make -j16
cp output/images/rootfs.ext4 ../images/
cp output/images/rootfs.cpio ../images/
This creates the rootfs images in buildroot's output/images/ when building in-tree, or images/ when building out of tree.
Guest disk image for edk2
To create a guest disk image that more closely resembles a Linux distribution, containing the grub2 bootloader and the kernel, have a look at buildroot's configs/aarch64_efi_defconfig, which enables a few options to generate a disk with an EFI partition:
BR2_PACKAGE_HOST_GENIMAGE=y
BR2_PACKAGE_HOST_DOSFSTOOLS=y
BR2_PACKAGE_HOST_MTOOLS=y
BR2_TARGET_GRUB2=y
BR2_TARGET_GRUB2_ARM64_EFI=y
BR2_ROOTFS_POST_IMAGE_SCRIPT="board/aarch64-efi/post-image.sh support/scripts/genimage.sh"
BR2_ROOTFS_POST_SCRIPT_ARGS="-c board/aarch64-efi/genimage-efi.cfg"
# Copy the guest kernel Image into buildroot's build directory, where it will be
# picked up by genimage.
mkdir buildroot/output/images/
cp linux/arch/arm64/boot/Image buildroot/output/images/Image
make aarch64_efi_defconfig
make
With these, after generating the root filesystem, buildroot packs it into another disk image under output/images/disk.img, along with an EFI FAT partition that contains grub and the kernel Image (the layout is defined by board/aarch64-efi/genimage-efi.cfg). The file output/images/disk.img should be copied to a location accessible by the RME-enabled base system. See variable RUN_DISK in file /usr/share/cca-realm-measurements/gen-run-vmm.cfg on the base root filesystem.
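For example (the paths are illustrative): copy output/images/disk.img into the directory shared with the host VM, then point RUN_DISK at it from a local gen-run-vmm.cfg on the host:
RUN_DISK=/mnt/shr0/disk.img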
Build the Ubuntu Rootfs
Below is a script that automatically builds an Ubuntu 22.04 rootfs. It consists of several parts:
Generate a 4GB image, format it as ext4 and mount it.
Download the ubuntu-base filesystem and extract its files into the image.
Configure the essential files Ubuntu needs for installing packages.
Set up the init process for the Ubuntu rootfs boot.
Install essential packages via chroot, to avoid failures bringing up /dev/hvc0 when the Realm boots.
Unmount.
NOTE: Please copy the content below into a file named ubuntu_fs.sh and run it with sudo, as mount and chroot need root permissions.
sudo ./ubuntu_fs.sh ubuntu22.img
The above command generates an Ubuntu 22.04 rootfs named ubuntu22.img. It is easy to launch: just point the disk image at ubuntu22.img. It can serve either as the host rootfs, or as the Realm's rootfs for daily development (see the example -drive options after the script). Tweak the script below to suit your use case.
#!/bin/bash
img_name="$1"
directory="ubuntu_fs"
realm_user="realm"
realm_password="realm"
if [[ $EUID -ne 0 ]]; then
echo "Need to run with sudo"
exit 1
fi
# Check if the img file name parameter is provided
if [ -z "$img_name" ]; then
echo "Please provide the generated img file name as the first parameter!"
exit 1
fi
#Generate a 4GB img file
echo "Generating 4GB img file $img_name ..."
dd if=/dev/zero of="$img_name" bs=1G count=4 || exit 1
# Format the img file as ext4 file system
echo "Formatting the img file as ext4 file system..."
mkfs.ext4 -F "$img_name" || exit 1
# Check if the directory exists
if [ -d "$directory" ]; then
echo "Directory $directory exists. Deleting its contents..."
rm -rf "$directory"/*
else
echo "Directory $directory does not exist. Creating the directory..."
mkdir "$directory"
fi
# Mount the img file to the ubuntu_fs directory
echo "Mounting the img file to directory $directory..."
mount -o loop "$img_name" "$directory" || exit 1
# Download the ubuntu-base archive
archive_url="https://cdimage.ubuntu.com/ubuntu-base/releases/22.04.4/release/ubuntu-base-22.04.4-base-arm64.tar.gz"
archive_file="ubuntu-base-22.04.4-base-arm64.tar.gz"
echo "Downloading file $archive_file ..."
wget "$archive_url" -P "$directory"
# Extract the file
echo "Extracting file $archive_file to directory $directory..."
tar -xf "$directory/$archive_file" -C "$directory"
# Remove the downloaded archive file
echo "Removing downloaded archive file $archive_file ..."
rm "$directory/$archive_file"
# Write nameserver to resolv.conf file
echo "Writing nameserver to $directory/etc/resolv.conf file..."
echo "nameserver 8.8.8.8" > "$directory/etc/resolv.conf"
cat > "$directory/etc/apt/sources.list" << EOF
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy main restricted
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-updates main restricted
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy universe
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-updates universe
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy multiverse
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-updates multiverse
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-backports main restricted universe multiverse
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-security main restricted
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-security universe
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-security multiverse
EOF
# Switch to chroot environment and execute apt command
echo "Switching to chroot environment and executing apt command..."
mount -t proc /proc $directory/proc
mount -t sysfs /sys $directory/sys
mount -o bind /dev $directory/dev
mount -o bind /dev/pts $directory/dev/pts
chroot "$directory" /bin/bash -c "apt update -y" || exit 1
# Create a new user with sudo privileges
echo "Creating user $realm_user with sudo privileges..."
chroot "$directory" /bin/bash -c "useradd -m -s /bin/bash -G sudo $realm_user" || exit 1
# Set the password for the new user
echo "Setting password for user $realm_user..."
echo "$realm_user:$realm_password" | chroot "$directory" /bin/bash -c "chpasswd" || exit 1
chroot "$directory" /bin/bash -c "chmod 1777 /tmp" || exit 1
echo "Generate the init file"
cat > "$directory/init" << EOF
#!/bin/sh
[ -d /dev ] || mkdir -m 0755 /dev
[ -d /root ] || mkdir -m 0700 /root
[ -d /sys ] || mkdir /sys
[ -d /proc ] || mkdir /proc
[ -d /tmp ] || mkdir /tmp
mkdir -p /var/lock
mount -t sysfs -o nodev,noexec,nosuid sysfs /sys
mount -t proc -o nodev,noexec,nosuid proc /proc
# Some things don't work properly without /etc/mtab.
ln -sf /proc/mounts /etc/mtab
grep -q '\<quiet\>' /proc/cmdline || echo "Loading, please wait..."
# Note that this only becomes /dev on the real filesystem if udev's scripts
# are used; which they will be, but it's worth pointing out
if ! mount -t devtmpfs -o mode=0755 udev /dev; then
echo "W: devtmpfs not available, falling back to tmpfs for /dev"
mount -t tmpfs -o mode=0755 udev /dev
[ -e /dev/console ] || mknod -m 0600 /dev/console c 5 1
[ -e /dev/null ] || mknod /dev/null c 1 3
fi
mkdir /dev/pts
mount -t devpts -o noexec,nosuid,gid=5,mode=0620 devpts /dev/pts || true
mount -t tmpfs -o "noexec,nosuid,size=10%,mode=0755" tmpfs /run
mkdir /run/initramfs
# compatibility symlink for the pre-oneiric locations
ln -s /run/initramfs /dev/.initramfs
# Set modprobe env
export MODPROBE_OPTIONS="-qb"
# mdadm needs hostname to be set. This has to be done before the udev rules are called!
if [ -f "/etc/hostname" ]; then
/bin/hostname -b -F /etc/hostname 2>&1 1>/dev/null
fi
exec /sbin/init
EOF
chmod +x $directory/init || exit 1
chroot "$directory" /bin/bash -c "apt install systemd iptables -y" || exit 1
chroot "$directory" /bin/bash -c "ln -s /lib/systemd/systemd /sbin/init" || exit 1
echo "Install other essential components, in case of booting blocking at /dev/hvc0 failed to bring up"
chroot "$directory" /bin/bash -c "apt install vim bash-completion net-tools iputils-ping ifupdown ethtool ssh rsync udev htop rsyslog curl openssh-server apt-utils dialog nfs-common psmisc language-pack-en-base sudo kmod apt-transport-https -y" || exit 1
# Unmount the mounted directory
echo "Unmounting the mounted directory $directory ..."
umount $directory/proc
umount $directory/sys
umount $directory/dev/pts
umount $directory/dev
umount "$directory"
echo "Operation completed!"
QEMU system emulation
Repo: https://gitlab.com/qemu-project/qemu.git or the same repository as the VMM.
Build: do not build in the same source directory as the VMM! Since buildroot copies the whole content of that source directory, binary files will conflict (the VMM is cross-built while the system emulation QEMU is native).
git clone https://gitlab.com/qemu-project/qemu.git
cd qemu
./configure --target-list=aarch64-softmmu --enable-slirp --disable-docs
make -j16
If you want to use the same source directory as the target QEMU (the QEMU integrated into buildroot), use a separate build directory as described here:
mkdir -p ../build/qemu/ # outside of the source directory
cd ../build/qemu/
../../qemu/configure --target-list=aarch64-softmmu --enable-slirp --disable-docs
make -j16
Running the system emulation
QEMU will connect to four TCP ports for the different consoles. Create the servers manually with socat -,rawer TCP-LISTEN:5432x (x = 0, 1, 2, 3), or use the script given at the end, in section "Spawn four consoles with servers listening for QEMU system emulation".
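For example, run one listener per console, each in its own terminal:
socat -,rawer TCP-LISTEN:54320
socat -,rawer TCP-LISTEN:54321
socat -,rawer TCP-LISTEN:54322
socat -,rawer TCP-LISTEN:54323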
QEMU-virt startup script:
# The -nodefaults, -serial and -chardev parameters below allow using separate consoles
# for Firmware (port 54320), Secure payload (54321), host (54322) and guest (54323).
# The 9p device shares the current directory with the host, providing the files
# needed to launch the guest.
qemu-system-aarch64 -M virt,virtualization=on,secure=on,gic-version=3 \
    -M acpi=off -cpu max,x-rme=on -m 8G -smp 8 \
    -nographic \
    -bios trusted-firmware-a/flash.bin \
    -kernel linux-cca/arch/arm64/boot/Image \
    -drive format=raw,if=none,file=buildroot/output/images/rootfs.ext4,id=hd0 \
    -device virtio-blk-pci,drive=hd0 \
    -nodefaults \
    -serial tcp:localhost:54320 \
    -serial tcp:localhost:54321 \
    -chardev socket,mux=on,id=hvc0,port=54322,host=localhost \
    -device virtio-serial-device \
    -device virtconsole,chardev=hvc0 \
    -chardev socket,mux=on,id=hvc1,port=54323,host=localhost \
    -device virtio-serial-device \
    -device virtconsole,chardev=hvc1 \
    -append "root=/dev/vda console=hvc0" \
    -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
    -device virtio-9p-device,fsdev=shr0,mount_tag=shr0 \
    -fsdev local,security_model=none,path=.,id=shr0
QEMU-sbsa startup script:
qemu-system-aarch64 \
-machine sbsa-ref -m 8G \
-cpu max,x-rme=on,sme=off,pauth-impdef=on \
-drive file=images/SBSA_FLASH0.fd,format=raw,if=pflash \
-drive file=images/SBSA_FLASH1.fd,format=raw,if=pflash \
-drive file=fat:rw:images/disks/virtual,format=raw \
-drive format=raw,if=none,file=buildroot/output/images/rootfs.ext4,id=hd0 \
-device virtio-blk-pci,drive=hd0 \
-serial tcp:localhost:54320 \
-serial tcp:localhost:54321 \
-chardev socket,mux=on,id=hvc0,port=54322,host=localhost \
-device virtio-serial-pci \
-device virtconsole,chardev=hvc0 \
-chardev socket,mux=on,id=hvc1,port=54323,host=localhost \
-device virtio-serial-pci \
-device virtconsole,chardev=hvc1 \
-device virtio-9p-pci,fsdev=shr0,mount_tag=shr0 \
-fsdev local,security_model=none,path=../../,id=shr0
Crucially, the x-rme=on parameter enables the (experimental) FEAT_RME.
In the host kernel log, verify that KVM communicates with the RMM and is ready to launch Realm guests:
[ 0.893261] kvm [1]: Using prototype RMM support (version 66.0)
Note: The virt platform currently has at most 8GB of RAM, which we believe is enough memory to demonstrate how CCA works in a simulated environment. Modifications to the trusted firmware and RMM elements are needed if a different value is selected.
Launching a Realm guest
Once at the host command-line prompt, simply log in as root.
Using buildroot-external-cca, the shared directory should be mounted automatically. Otherwise mount it with:
mount -t 9p shr0 /mnt
Launching a Realm guest using QEMU
When using buildroot-external-cca, the filesystem should contain a script provided by cca-realm-measurements. Launch the VM with:
gen-run-vmm.sh --tap --extcon
Using a tap network puts the guest on the same network as the host, and allows running the attestation demo below.
Or manually (with user networking rather than tap):
qemu-system-aarch64 \
-M confidential-guest-support=rme0 \
-object rme-guest,id=rme0,measurement-algorithm=sha512 \
-nodefaults \
-chardev stdio,mux=on,id=virtiocon0,signal=off \
-device virtio-serial-pci \
-device virtconsole,chardev=virtiocon0 \
-mon chardev=virtiocon0,mode=readline \
-kernel /mnt/out/bin/Image \
-initrd /mnt/out-br/images/rootfs.cpio \
-device virtio-net-pci,netdev=net0,romfile= \
-netdev user,id=net0 \
-cpu host -M virt -enable-kvm -M gic-version=3,its=on \
-smp 2 -m 512M -nographic \
-append console=hvc0 < /dev/hvc1 >/dev/hvc1
The -M confidential-guest-support=rme0 and -object rme-guest,id=rme0 parameters declare this as a Realm VM.
You should see RMM logs in the Firmware terminal:
# RMI (Realm Management Interface) is the protocol that host uses to
# communicate with the RMM
SMC_RMM_REC_CREATE 45659000 456ad000 446b1000 > RMI_SUCCESS
SMC_RMM_REALM_ACTIVATE 45659000 > RMI_SUCCESS
# RSI (Realm Service Interface) is the protocol that the guest uses to
# communicate with the RMM
SMC_RSI_ABI_VERSION > d0000
SMC_RSI_REALM_CONFIG 41afe000 > RSI_SUCCESS
SMC_RSI_IPA_STATE_SET 40000000 60000000 1 0 > RSI_SUCCESS 60000000
Followed a few minutes later by the guest kernel starting in the Realm terminal.
Launching a Realm guest using Kvmtool
gen-run-vmm.sh --kvmtool --tap --extcon
or manually:
lkvm run --realm -c 2 -m 2G -k /mnt/out/bin/Image -d /mnt/out-br/images/rootfs.ext4 --restricted_mem -p "console=hvc0 root=/dev/vda" < /dev/hvc1 > /dev/hvc1
Launching a Realm guest using cloud-hypervisor
This example uses a macvtap interface to connect the guest to the host network. CONFIG_MACVTAP needs to be 'y' in the host kernel config.
gen-run-vmm.sh --cloudhv --extcon
or manually:
ip link add link eth0 name macvtap0 type macvtap
ip link set macvtap0 up
tapindex=$(cat /sys/class/net/macvtap0/ifindex)
tapaddress=$(cat /sys/class/net/macvtap0/address)
tapdevice="/dev/tap$tapindex"
/mnt/out/bin/cloud-hypervisor --platform arm_rme=on --kernel /mnt/out/bin/Image --disk path=/mnt/out-br/images/rootfs.ext4 --cpus boot=2 --memory size=512M --net fd=3,mac=$tapaddress --cmdline "console=hvc0 root=/dev/vda" < /dev/hvc1 > /dev/hvc1 3<>$tapdevice
Running edk2 as a guest
Enable USE_EDK2 to boot the Realm guest with the edk2 firmware. It can either load the kernel and initrd through the FwCfg device provided by QEMU, or launch a bootloader from a disk image (gen-run-vmm.sh --edk2 --disk-boot). See section "Guest disk image for edk2" for details on how to generate the disk image expected by --disk-boot.
When booting the kernel directly, edk2 measures the kernel, initrd and parameters provided on the QEMU command-line and adds them to the Realm Extended Measurement via RSI calls, so that they can be attested later.
Notes:
When booting via grub2, the kernel parameters are stored in grub.cfg, which is copied from board/aarch64-efi/grub.cfg by the buildroot script board/aarch64-efi/post-image.sh. By default the kernel parameters do not define a console, so Linux will determine the boot console from the device tree's /chosen/stdout-path property, which QEMU initializes to the default serial console. So if you want to boot with virtconsole, add console=hvc0 to $(BUILDROOT)/board/aarch64-efi/grub.cfg before making buildroot.
Attestation Proof of Concept
Two demonstration applications are available in the root file system.
cca-workload-attestation
From a Realm VM, you can query the RMM for a CCA attestation token that is printed to the console and saved to a file:
cca-workload-attestation report
The tool can also demonstrate a typical interaction with an attestation service by communicating the CCA attestation token to an instance of the Veraison services:
cca-workload-attestation passport
Details on cca-workload-attestation, the Veraison services and the endorser that populates the endorsement values can be found here.
keybroker-demo
The keybroker demo shows interaction between an attester, a relying party and a verifier. The keybroker-app, running in the Realm, requests a secret key from keybroker-server, run by the relying party.
Repo: https://github.com/veraison/keybroker-demo (a simple key broker protocol)
Build: the keybroker server
cd rust-keybroker
cargo build
Run:
The server, on the build machine, listens on port 8088, and connects to the Linaro veraison instance for platform token verification. You can change these settings with command-line options.
target/debug/keybroker-server -e http://10.0.2.2 -v
The client, in the Realm, requests secret key “skywalker”:
keybroker-app skywalker -e http://10.0.2.2:8088 -v
You will normally get the following error:
INFO Attestation failure :-( ! AttestationFailure: No attestation result was obtained. No known-good reference values.
This means that the platform token verification succeeded, but the realm token verification did not. The Realm Initial Measurement is not known by the server:
INFO Known-good RIM values are missing. If you trust the client that submitted
evidence for challenge 2961561008, you should restart the keybroker-server with the following
command-line option to populate it with known-good RIM values:
--reference-values <(echo '{ "reference-values": [ "WE/l0rYWLJykPTBwM92hM39VFpy9OdqvBHsQpjXfTEzXZGTf+D5n2Fqqhb4Zi/L3TMbYH9opnzjQL2EDa8TXUg==" ] }')
Retrying the key request after restarting the server with the suggested parameter should now succeed:
INFO Attestation success :-) ! The key returned from the keybroker is 'May the force be with you.'
realm-measurements
Instead of running the server twice, you can also predict the Realm Initial Measurement, even before running the guest, using the cca-realm-measurements tool.
Build:
cargo build
# Create a local configuration:
cat << EOF > gen-run-vmm.cfg
REALM_MEASUREMENTS=target/debug/realm-measurements
CONFIGS_DIR=configs
# The root of the optee build directory
BUILD=path/to/cca
KERNEL=\$BUILD/out/bin/Image
INITRD=\$BUILD/out-br/images/rootfs.cpio
EDK2_DIR=\$BUILD/edk2
EOF
Run:
scripts/gen-run-vmm.sh --gen-measurements
# outputs:
+ target/debug/realm-measurements -c configs/qemu-max-8.2.conf -c configs/kvm.conf -k /build/cca/out/bin/Image -i /build/cca/out-br/images/rootfs.cpio -f /Build/ArmVirtQemu-AARCH64/DEBUG_GCC5/FV/QEMU_EFI.fd --print-b64 qemu -M confidential-guest-support=rme0 -object rme-guest,id=rme0,measurement-algorithm=sha512 -nodefaults -chardev stdio,mux=on,id=virtiocon0,signal=off -device virtio-serial-pci -device virtconsole,chardev=virtiocon0 -mon chardev=virtiocon0,mode=readline -kernel /build/cca/out/bin/Image -initrd /build/cca/out-br/images/rootfs.cpio -device virtio-net-pci,netdev=net0,romfile= -netdev user,id=net0 -cpu host -M virt -enable-kvm -M gic-version=3,its=on -smp 2 -m 512M -nographic -dtb qemu-gen.dtb -append console=hvc0
RIM: eZ25AQETulL4kIvgJi/yCVi12ASeD1MOvIwSq9gwNT7HYPAjWePnMcLfH4OaHEafW6V4MnZfgWxgr5uYQnG26Q==
Since this is the same script that you use to launch the VM, it already knows the QEMU command-line arguments that you use, and the generated DTB loaded into the VM. Since you also give it the kernel and initrd images that will be loaded into the Realm, it will correctly predict the resulting Realm Initial Measurement (RIM). You can of course run the realm-measurements tool manually if you use a different VMM command-line.
Pass the resulting RIM to keybroker-server:
target/debug/keybroker-server -e http://10.0.2.2 -v --reference-values <(echo '{ "reference-values": [ "eZ25AQETulL4kIvgJi/yCVi12ASeD1MOvIwSq9gwNT7HYPAjWePnMcLfH4OaHEafW6V4MnZfgWxgr5uYQnG26Q==" ] }')
When running your own veraison instance (see the documentation and poc-endorser), you can also provision the verifier with reference values directly, so cca-workload-attestation passport can obtain an affirming appraisal of the realm token:
scripts/gen-run-vmm.sh --corim-output realm-corim.cbor
veraison -- cocli corim submit --corim-file realm-corim.cbor --media-type 'application/corim-unsigned+cbor; profile="http://arm.com/cca/realm/1"'
And in the guest:
# veraison.example points to the machine running the veraison instance (build machine)
$ cat /etc/hosts
10.0.2.2 veraison.example
$ cca-workload-attestation passport
{
"ear.verifier-id": {
"build": "N/A",
"developer": "Veraison Project"
},
"eat_nonce": "zM2RxZM5agCVJs2EyaWLmtDEM5qq7jj0xcXmLsdi56Bz8SnYSbBwQtpzdHUUMv5WWg5d8zFypap_oz5HzySknQ==",
"eat_profile": "tag:github.com,2023:veraison/ear",
"iat": 1733322149,
"submods": {
"CCA_REALM": {
"ear.appraisal-policy-id": "policy:ARM_CCA",
"ear.status": "affirming",
"ear.trustworthiness-vector": {
"configuration": 0,
"executables": 2,
"file-system": 0,
"hardware": 0,
"instance-identity": 2,
"runtime-opaque": 0,
"sourced-data": 0,
"storage-opaque": 0
},
"ear.veraison.annotated-evidence": {
"cca-realm-challenge": "zM2RxZM5agCVJs2EyaWLmtDEM5qq7jj0xcXmLsdi56Bz8SnYSbBwQtpzdHUUMv5WWg5d8zFypap/oz5HzySknQ==",
"cca-realm-extensible-measurements": [
"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==",
"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==",
"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==",
"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=="
],
"cca-realm-hash-algo-id": "sha-512",
"cca-realm-initial-measurement": "pHeSIWwfShg9v8l38mqDGHloBvWCtBg7eJ9/zKG5hnjfQs2ACqZk/+sx1/f1A2h7TuKxoSPNhq4p5vkrKl1lBg==",
"cca-realm-personalization-value": "q80AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==",
"cca-realm-profile": "tag:arm.com,2023:realm#1.0.0",
"cca-realm-public-key": "pAECIAIhWDB2+YgJG+WF7UGAGuz6uFhUjGMFfhaw5nYSC70NL5wp4FbF1BoBMOucIVF4mdwjFGsiWDAo4bBivT6ksxX9IZ8cu1KMtudMpJvhZ3NzT2GhymEDGyu/PZGPL5T/xCKOUJGVRK4=",
"cca-realm-public-key-hash-algo-id": "sha-256"
}
},
"CCA_SSD_PLATFORM": {
"ear.appraisal-policy-id": "policy:ARM_CCA",
"ear.status": "affirming",
...
Tips
Automate some things in the host boot
You can add files to the buildroot images by providing an overlay directory. The BR2_ROOTFS_OVERLAY option points to the directory that will be added into the image. For example I use:
├── etc
│ ├── init.d
│ │ └── S50-shr
│ └── inittab
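The overlay path itself is set in the buildroot configuration (for example through make menuconfig); the path below is illustrative:
BR2_ROOTFS_OVERLAY="path/to/my-overlay"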
S50-shr is an initscript that mounts the shared directory:
#!/bin/sh
case $1 in
start)
mkdir -p /mnt/shr0/
mount -t 9p shr0 /mnt/shr0/
;;
stop)
umount /mnt/shr0/
rmdir /mnt/shr0/
;;
esac
inittab is buildroot's package/busybox/inittab, modified to automatically log in as root (the respawn line). It could also mount the 9p filesystem; see the sketch after the listing.
# /etc/inittab
#
# Copyright (C) 2001 Erik Andersen <andersen@codepoet.org>
#
# Note: BusyBox init doesn't support runlevels. The runlevels field is
# completely ignored by BusyBox init. If you want runlevels, use
# sysvinit.
#
# Format for each entry: <id>:<runlevels>:<action>:<process>
#
# id == tty to run on, or empty for /dev/console
# runlevels == ignored
# action == one of sysinit, respawn, askfirst, wait, and once
# process == program to run
# Startup the system
::sysinit:/bin/mount -t proc proc /proc
::sysinit:/bin/mount -o remount,rw /
::sysinit:/bin/mkdir -p /dev/pts /dev/shm
::sysinit:/bin/mount -a
::sysinit:/bin/mount -t debugfs debugfs /sys/kernel/debug
::sysinit:/sbin/swapon -a
null::sysinit:/bin/ln -sf /proc/self/fd /dev/fd
null::sysinit:/bin/ln -sf /proc/self/fd/0 /dev/stdin
null::sysinit:/bin/ln -sf /proc/self/fd/1 /dev/stdout
null::sysinit:/bin/ln -sf /proc/self/fd/2 /dev/stderr
::sysinit:/bin/hostname -F /etc/hostname
# now run any rc scripts
::sysinit:/etc/init.d/rcS
# Put a getty on the serial port
#console::respawn:/sbin/getty -L console 0 vt100 # GENERIC_SERIAL
::respawn:-/bin/sh
# Stuff to do for the 3-finger salute
#::ctrlaltdel:/sbin/reboot
# Stuff to do before rebooting
::shutdown:/etc/init.d/rcK
::shutdown:/sbin/swapoff -a
::shutdown:/bin/umount -a -r
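A sysinit entry such as the following (a sketch) would mount the 9p share at boot:
null::sysinit:/bin/mkdir -p /mnt/shr0
null::sysinit:/bin/mount -t 9p shr0 /mnt/shr0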
Spawn four consoles with servers listening for QEMU system emulation:
This uses OP-TEE’s soc_term.py
#!/bin/bash
SOC_TERM=path/to/optee/soc_term.py
xterm -title "Firmware" -e bash -c "$SOC_TERM 54320" &
xterm -title "Secure" -e bash -c "$SOC_TERM 54321" &
xterm -title "host" -e bash -c "$SOC_TERM 54322" &
xterm -title "Realm" -e bash -c "$SOC_TERM 54323" &
while ! nc -z 127.0.0.1 54320 || ! nc -z 127.0.0.1 54321 || ! nc -z 127.0.0.1 54322 || ! nc -z 127.0.0.1 54323; do sleep 1; done