Building an RME stack for QEMU
The whole software stack for CCA is still in development, meaning instructions will change frequently and repositories are temporary. The instructions for compiling the stack, both manually and from the OP-TEE build environment, were written for an Ubuntu 22.04 LTS based system.
With the OP-TEE build environment
This method requires at least the following tools and libraries. The manual build described below also requires most of them.
python3-pyelftools, python3-venv
acpica-tools
openssl (debian libssl-dev)
libglib2.0-dev, libpixman-1-dev
dtc (debian device-tree-compiler)
flex, bison
make, cmake, ninja (debian ninja-build), curl, rsync
The easiest way to build and run a complete stack is through OP-TEE. We support two system emulation QEMU machines, i.e. Virt and SBSA. The amount of system RAM supported by QEMU-virt is set to 8GB and cannot be modified. QEMU-sbsa is also set to 8GB by default but can be configured between 2GB and 1TB.
The following commands download all components and build them, which takes about thirty minutes on a fast machine.
Virt machine:
mkdir cca-v3
cd cca-v3
repo init -u https://git.codelinaro.org/linaro/dcap/op-tee-4.2.0/manifest.git -b cca/v3 -m qemu_v8_cca.xml
repo sync -j8 --no-clone-bundle
cd build
make -j8 toolchains
make -j8
SBSA machine:
mkdir cca-v3
cd cca-v3
repo init -u https://git.codelinaro.org/linaro/dcap/op-tee-4.2.0/manifest.git -b cca/v3 -m sbsa_cca.xml
repo sync -j8 --no-clone-bundle
cd build
make -j8 toolchains
make -j8
Note:
If the build fails, try without -j. It will point out missing dependencies.
Add CLOUDHV=y to build cloud-hypervisor. This requires a Rust toolchain >= 1.77.
We have recently updated our build environment from OP-TEE 3.22.0 to OP-TEE 4.2.0. A full re-clone of the project is needed to avoid problems.
Images can be found under cca-v3/out/ and cca-v3/out-br/. The following command launches system emulation QEMU with the RME feature enabled, running TF-A, RMM and the Linux host.
make run-only
This should launch 4 new terminals, i.e. Firmware, Host, Secure and Realm. Output from the boot process will start flowing in the Firmware terminal, followed by the Host terminal. The build environment automatically makes the cca-v3 directory available to the host VM via 9p.
Read on for the details of the software stack, or skip to the following section to boot a Realm guest.
Manual build
The following sections detail how to build and run all components of the CCA software stack. Two QEMU binaries are built. The system emulation QEMU implements a complete machine, emulating Armv9 CPUs with FEAT_RME and four security states: Root, Secure, Non-secure and Realm. The VMM (Virtual Machine Manager) QEMU is cross-built by buildroot, and launches the realm guest from Non-secure EL0.
Instructions to build the TF-RMM, TF-A and host EDK2 differ based on the QEMU machine selected for system emulation. All other components of the stack are common to both machines.
Manual build instructions for TF-RMM, TF-A and host EDK2 on QEMU-virt
Manual build instructions for TF-RMM, TF-A and host EDK2 on QEMU-sbsa
Host and guest Linux
Both the host and guest kernels need extra patches.
Status: https://lore.kernel.org/linux-arm-kernel/20231002124311.204614-1-suzuki.poulose@arm.com/
Repo: https://gitlab.arm.com/linux-arm/linux-cca cca-full/v3
Build:
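A minimal sketch, assuming an aarch64-linux-gnu- cross-toolchain and the default config (the CCA patches may need extra config options):

# Cross-build the arm64 kernel image.
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j$(nproc) Image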
Guest edk2
The QEMU VMM can either launch the guest kernel itself, or launch edk2, which in turn launches the kernel or an intermediate bootloader. The latter method is generally used to boot a Linux distribution. Edk2 needs modifications in order to run as a Realm guest.
Status: in development. Only the ArmVirtQemu firmware supports booting in a Realm at the moment, not ArmVirtQemuKernel.
Repo: https://git.codelinaro.org/linaro/dcap/edk2 branch cca/v3
Build:
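A sketch of a typical edk2 cross-build (the GCC5 toolchain tag and cross-prefix are assumptions; the cca/v3 branch may need different flags):

# Set up the edk2 build environment and host tools.
git submodule update --init
source edksetup.sh
make -C BaseTools
# Cross-build the ArmVirtQemu platform firmware.
export GCC5_AARCH64_PREFIX=aarch64-linux-gnu-
build -a AARCH64 -t GCC5 -p ArmVirtPkg/ArmVirtQemu.dsc -b DEBUG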
Note that the DEBUG build is very verbose (even with a few patches that remove repetitive messages), which is extremely slow in a nested environment with an emulated UART. Change it to -b RELEASE to speed up the guest boot.
QEMU VMM
Both kvmtool and QEMU can be used to launch Realm guests. For details about kvmtool, see the cover letter for the Linux support above.
Status: in development
Repo: for now https://git.codelinaro.org/linaro/dcap/qemu branch cca/v3
Build:
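Buildroot can cross-build the VMM for you (see Root filesystem below). For a standalone cross-build, a sketch assuming an aarch64-linux-gnu- toolchain:

# Cross-build the aarch64 system emulator to run inside the host VM.
./configure --target-list=aarch64-softmmu --cross-prefix=aarch64-linux-gnu-
make -j$(nproc)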
Cloud-hypervisor
Status: in development.
Repo: for now https://git.codelinaro.org/linaro/dcap/cloud-hypervisor branch cca/v3
Build:
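A sketch, assuming a working Rust cross-compilation setup for the aarch64-unknown-linux-gnu target (this produces the debug binary referenced below):

rustup target add aarch64-unknown-linux-gnu
cargo build --target aarch64-unknown-linux-gnu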
Then copy target/aarch64-unknown-linux-gnu/debug/cloud-hypervisor into the root filesystem or the shared folder.
Root filesystem
Buildroot provides a convenient way to build lightweight root filesystems. It can also embed the VMM into the rootfs if you specify the path to kvmtool or QEMU source in a local.mk file in the build directory.
Repo: https://gitlab.com/buildroot.org/buildroot.git
Use the master branch to have up-to-date recipes for building QEMU.
Create local.mk (at the root of the source directory, or in the build directory when building out of tree):
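# Placeholder paths, using buildroot's <pkg>_OVERRIDE_SRCDIR convention;
# point them at your own checkouts.
QEMU_OVERRIDE_SRCDIR = /path/to/qemu
KVMTOOL_OVERRIDE_SRCDIR = /path/to/kvmtool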
Note that after modifying the QEMU VMM sources, it needs to be rebuilt explicitly through buildroot with make qemu-rebuild.
Build:
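A sketch, starting from a stock defconfig (the CCA setup likely uses a more specific configuration):

make qemu_aarch64_virt_defconfig
make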
This creates the rootfs images in buildroot's output/images/ when building in-tree, or images/ when building out of tree.
Guest disk image for edk2
To create a guest disk image that more closely resembles a Linux distribution, containing the grub2 bootloader and the kernel, have a look at buildroot's configs/aarch64_efi_defconfig, which enables a few options to generate a disk with an EFI partition:
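# Excerpt (a sketch based on a recent buildroot checkout; check the
# defconfig itself for the authoritative list):
BR2_TARGET_GRUB2=y
BR2_TARGET_GRUB2_ARM64_EFI=y
BR2_PACKAGE_HOST_DOSFSTOOLS=y
BR2_PACKAGE_HOST_GENIMAGE=y
BR2_PACKAGE_HOST_MTOOLS=y
BR2_ROOTFS_POST_IMAGE_SCRIPT="board/aarch64-efi/post-image.sh"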
With these, after generating the root filesystem, buildroot packs it into another disk image, images/disk.img, along with an EFI FAT partition that contains grub and the kernel Image (the layout is defined by board/aarch64-efi/genimage-efi.cfg).
Build the Ubuntu Rootfs
Below is a script that automatically builds an Ubuntu 22.04 rootfs. It consists of several parts:
Generate a 4GB image, create partitions and mount it.
Download the ubuntu-base filesystem and extract the files into the image.
Configure the essential files Ubuntu needs for installing packages.
Set up the init process for the Ubuntu rootfs boot.
Install essential packages via chroot, to avoid failing to enable /dev/hvc0 when the Realm boots.
Unmount the image.
NOTE: Please copy the content below to a file named ubuntu_fs.sh and run it with sudo, as mount and chroot need root permissions.
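A sketch of such a script (the ubuntu-base URL, the image size and the package list are assumptions to adapt):

#!/bin/bash
set -e

IMG=ubuntu22.img
MNT=mnt-ubuntu

# 1. Generate a 4GB image and put a filesystem on it. A single
#    partition-less ext4 image keeps the sketch simple.
dd if=/dev/zero of=$IMG bs=1M count=4096
mkfs.ext4 -F $IMG
mkdir -p $MNT
mount -o loop $IMG $MNT

# 2. Download the ubuntu-base filesystem and extract it into the image.
wget -nc https://cdimage.ubuntu.com/ubuntu-base/releases/22.04/release/ubuntu-base-22.04-base-arm64.tar.gz
tar -xzf ubuntu-base-22.04-base-arm64.tar.gz -C $MNT

# 3. Configure the essential files for installing packages.
cp /etc/resolv.conf $MNT/etc/resolv.conf

# 4. Set up init and install essential packages via chroot. On an x86
#    build machine this step needs qemu-user-static with binfmt support.
chroot $MNT /bin/bash -c "apt-get update && apt-get install -y systemd-sysv udev"

# 5. Unmount.
umount $MNT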
The above script generates an Ubuntu 22.04 rootfs named ubuntu22.img. It is easy to launch: just change the disk image to ubuntu22.img. It can run either as the Realm host, or as the Realm itself, for daily development. Tweak the scripts below to suit your use cases.
QEMU system emulation
Repo: https://gitlab.com/qemu-project/qemu.git or the same repository as the VMM.
Build: do not build in the same source directory as the VMM! Since buildroot copies the whole content of that source directory, binary files will conflict (the VMM is cross-built while the system emulation QEMU is native). If you want to use the same source directory, use a separate build directory, as described here:
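# A sketch of an out-of-tree native build; the directory name is arbitrary.
mkdir build-system && cd build-system
../configure --target-list=aarch64-softmmu
make -j$(nproc)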
Running the system emulation
QEMU will connect to four TCP ports for the different consoles. Create the servers manually with socat -,rawer TCP-LISTEN:5432x (x = 0, 1, 2, 3), or use the script given at the end.
QEMU-virt startup script:
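A minimal sketch, assuming the image paths produced by the build above; the real script wires up all four console ports and may pass additional flags:

qemu-system-aarch64 \
    -M virt,virtualization=on,secure=on,gic-version=3,acpi=off \
    -cpu max,x-rme=on -m 8G -smp 8 -nographic \
    -bios out/bin/flash.bin \
    -kernel out/bin/Image \
    -initrd out-br/images/rootfs.cpio \
    -append "console=ttyAMA0" \
    -serial tcp:localhost:54320 -serial tcp:localhost:54321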
QEMU-sbsa startup script:
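A similar sketch for the sbsa machine, where the firmware is loaded from two pflash images (the file names and paths are assumptions):

qemu-system-aarch64 \
    -M sbsa-ref -cpu max,x-rme=on -m 8G -smp 8 -nographic \
    -drive if=pflash,format=raw,file=out/bin/SBSA_FLASH0.fd \
    -drive if=pflash,format=raw,file=out/bin/SBSA_FLASH1.fd \
    -drive if=none,file=out-br/images/rootfs.ext4,format=raw,id=hd0 \
    -device virtio-blk-pci,drive=hd0 \
    -serial tcp:localhost:54320 -serial tcp:localhost:54321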
Crucially, the x-rme=on parameter enables the (experimental) FEAT_RME.
In the host kernel log, verify that KVM communicates with the RMM and is ready to launch Realm guests:
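# The exact message depends on the kernel patches; grepping for the RMI
# (Realm Management Interface) version line is an assumption.
dmesg | grep -i rmi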
Note: The base system (started above) is currently set to 8GB of RAM, which we believe is enough memory to demonstrate how CCA works in a simulated environment. Modifications to the trusted firmware and RMM elements are needed if a different value is selected.
Launching a Realm guest
Once at the host command line prompt, simply use root to log in.
Mount the shared directory with:
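# The 9p mount tag (shr0) is an assumption; check the QEMU command line.
mount -t 9p -o trans=virtio,msize=131072 shr0 /mnt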
Launching a Realm guest using QEMU
The following script uses the QEMU VMM to launch a Realm guest with KVM.
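A minimal sketch of such a script (the image paths under the 9p share and the memory/CPU values are assumptions):

#!/bin/sh
qemu-system-aarch64 \
    -M virt,confidential-guest-support=rme0 \
    -object rme-guest,id=rme0,measurement-algo=sha512 \
    -enable-kvm -cpu host -smp 2 -m 512M \
    -kernel /mnt/out/bin/Image \
    -initrd /mnt/out-br/images/rootfs.cpio \
    -append "console=ttyAMA0" \
    -nographic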
The -M confidential-guest-support=rme0 and -object rme-guest,id=rme0,measurement-algo=sha512 parameters declare this as a Realm VM and configure its parameters.
Save this as an executable script in the shared folder and, in the host, launch it with:
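# realm.sh is a hypothetical name; use whatever you saved the script as.
chmod +x /mnt/realm.sh
/mnt/realm.sh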
You should see RMM logs in the Firmware terminal:
Followed a few minutes later by the guest kernel starting in the Realm terminal.
Launching a Realm guest using kvmtool
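A sketch, based on the kvmtool flags described in the cover letter above (double-check ./lkvm run --help on your build; --realm comes from the CCA patches):

./lkvm run --realm -c 2 -m 512 \
    -k /mnt/out/bin/Image -i /mnt/out-br/images/rootfs.cpio \
    -p "console=hvc0"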
Launching a Realm guest using cloud-hypervisor
This example uses a macvtap interface to connect the guest to the host network. CONFIG_MACVTAP needs to be 'y' in the host kernel config.
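A sketch of the basic invocation (the macvtap plumbing and the option that enables Realm mode in the cca/v3 branch are omitted here; check --help on your build):

./cloud-hypervisor \
    --kernel /mnt/out/bin/Image \
    --initramfs /mnt/out-br/images/rootfs.cpio \
    --cmdline "console=hvc0" \
    --cpus boot=2 --memory size=512M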
Running edk2 as a guest
Enable USE_EDK2 to boot the Realm guest with the edk2 firmware. It can either load kernel and initrd through the FwCfg device provided by QEMU (DIRECT_KERNEL_BOOT=true), or launch a bootloader from a disk image (see grub2 above). When booting the kernel directly, edk2 measures the kernel, initrd and parameters provided on the QEMU command-line and adds them to the Realm Extended Measurement via RSI calls, so that they can be attested later.
Notes:
Disable USE_VIRTCONSOLE in order to see all boot logs. Doing this enables the emulated PL011 serial and is much slower. Although edk2 does support virtio-console, it doesn’t display the debug output there (but you’ll still see RMM logs showing progress during boot).
When booting via grub2, the kernel parameters are stored in grub.cfg, which is copied from board/aarch64-efi/grub.cfg by the buildroot script board/aarch64-efi/post-image.sh. By default the kernel parameters do not define a console, so Linux will determine the boot console from the device tree's /chosen/stdout-path property, which QEMU initializes to the default serial console. So if you want to boot with virtconsole, add console=hvc0 to board/aarch64-efi/grub.cfg before making buildroot.
Attestation Proof of Concept
A demonstration application called cca-workload-attestation has been integrated into the root filesystem. From a Realm VM, it provides users with the capability to query the RMM for a CCA attestation token, which can either be printed to the console or saved to a file. It also demonstrates a typical interaction with an attestation service by communicating the CCA attestation token to a local instance of the Veraison services. Details on cca-workload-attestation, the Veraison services and the endorser that populates the endorsement values can be found here.
Tips
Automate some things in the host boot
You can add files to the buildroot images by providing an overlay directory. The BR2_ROOTFS_OVERLAY option points to the directory that will be added into the image. For example I use:
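BR2_ROOTFS_OVERLAY="/path/to/overlay"

where the overlay directory (the path is a placeholder) contains the two files described next:

/path/to/overlay/etc/init.d/S50-shr
/path/to/overlay/etc/inittab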
S50-shr is an initscript that mounts the shared directory:
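#!/bin/sh
# A sketch of such an initscript; the 9p mount tag (shr0) is an assumption.
case "$1" in
start)
        mkdir -p /mnt
        mount -t 9p -o trans=virtio,msize=131072 shr0 /mnt
        ;;
esac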
inittab is buildroot's package/busybox/inittab, modified to automatically log in as root (the respawn line). It could also mount the 9p filesystem.
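The auto-login modification might look like this (a sketch, in busybox inittab syntax):

console::respawn:-/bin/sh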
Spawn four consoles with servers listening for QEMU system emulation:
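#!/bin/sh
# A sketch; assumes xterm and OP-TEE's soc_term.py, with the port numbers
# matching the socat example above.
for port in 54320 54321 54322 54323; do
    xterm -title "console $port" -e python3 soc_term.py $port &
done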
This uses OP-TEE’s soc_term.py