The whole software stack for CCA is in development, meaning instructions will change frequently and repositories are temporary. The instructions to compile the stack, both manually and from the OP-TEE build environment, were written for an Ubuntu 22.04 LTS based system.

With the OP-TEE build environment

This method requires at least the following tools and libraries. The manual build described below also requires most of them.

  • repo

  • python3-pyelftools, python3-venv

  • acpica-tools

  • openssl (debian libssl-dev)

  • libglib2.0-dev, libpixman-1-dev

  • dtc (debian device-tree-compiler)

  • flex, bison

  • make, cmake, ninja (debian ninja-build), curl, rsync

The easiest way to build and run a complete stack is through OP-TEE. We support two system emulation QEMU machines: Virt and SBSA. The amount of system RAM supported by QEMU-virt is set to 8GB and cannot be modified. QEMU-sbsa is also set to 8GB by default but can be configured between 2GB and 1TB.

The following commands will download all components and build them, in about thirty minutes on a fast machine.

Virt machine:

Code Block
mkdir cca-v3
cd cca-v3
repo init -u https://git.codelinaro.org/linaro/dcap/op-tee-4.2.0/manifest.git -b cca/v3 -m qemu_v8_cca.xml
repo sync -j8 --no-clone-bundle
cd build
make -j8 CCA_SUPPORT=y toolchains
make -j8 CCA_SUPPORT=y

Note: if the build fails, try without -j. It will point out missing dependencies and work around a possible issue with the edk2 build.

Images can be found under cca-v3/out/ and cca-v3/out-br/. The following command launches system emulation QEMU with the RME feature enabled, running TF-A, RMM and the Linux host.

Code Block
make CCA_SUPPORT=y run-only

This should launch four new terminals: Firmware, Host, Secure and Realm. Output from the boot process will start flowing in the Firmware terminal, followed by the Host terminal. The build environment automatically makes the cca-v3 directory available to the host VM via 9p.
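Once the Host terminal reaches a shell, the shared directory can be mounted over 9p. The mount tag shr0 and the /mnt mount point below are assumptions; the tag must match the mount_tag of the virtio-9p device on the QEMU command line:

```shell
# In the host VM: mount the 9p share on /mnt
mkdir -p /mnt
mount -t 9p -o trans=virtio shr0 /mnt
ls /mnt
```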

Read on for the details of the software stack, or skip to the following section to boot a Realm guest.

Manual build

The following sections detail how to build and run all components of the CCA software stack. Two QEMU binaries are built. The system emulation QEMU implements a complete machine, emulating Armv9 CPUs with FEAT_RME and four security states: Root, Secure, Non-secure and Realm. The VMM (Virtual Machine Manager) QEMU is cross-built by buildroot, and launches the realm guest from Non-secure EL0.

Code Block
       |    REALM     |  NON-SECURE  |
-------+--------------+--------------+
  EL0  | Guest Rootfs |  Host Rootfs |
       |              |  QEMU VMM    |
-------+--------------+--------------+
  EL1  |        EDK2  |              |
       | Linux Guest  |              |
       |              |  EDK2        |
-------+--------------+  Linux Host  |
  EL2  |      TF-RMM  |    (KVM)     |
       |              |              |
-------+--------------+--------------+
 (ROOT)|                             |
  EL3  |            TF-A             |
-------+-----------------------------+
  HW   |            QEMU             |
-------+-----------------------------+

TF-RMM

The Realm Management Monitor (RMM) connects KVM and the Realm guest.

RMM gets loaded into NS DRAM (because there isn't enough space in Secure RAM). TF-A carves out 24MB of memory for the RMM (0x40100000-0x418fffff on the virt platform), and tells other software about it using a device-tree reserved memory node.
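As a sanity check, the stated range does correspond to 24MB (the end address 0x418fffff is inclusive):

```shell
# Size of the RMM carve-out on the virt platform
size=$(( 0x41900000 - 0x40100000 ))
echo $(( size / 1024 / 1024 ))MB   # prints 24MB
```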

Status: QEMU support has been merged. Additional patches are needed until QEMU supports a couple of features that are mandatory for RME (PMUv3p7 and ECV).

Repo: extra patches are at https://git.codelinaro.org/linaro/dcap/rmm branch rmm-v1.0-eac5
official repo is https://git.trustedfirmware.org/TF-RMM/tf-rmm.git/

Build:

Code Block
git submodule update --init --recursive
export CROSS_COMPILE=aarch64-none-elf-
cmake -DCMAKE_BUILD_TYPE=Debug -DRMM_CONFIG=qemu_virt_defcfg -B build-qemu
cmake --build build-qemu

Host EDK2

Edk2 is the firmware used in the Non-secure world. It works out of the box. However, at the moment we rely on edk2 not allocating memory from the DRAM area reserved for the RMM, which is fragile. Future work will add support for the reserved-memory node provided by TF-A in the device-tree.

Repo: https://github.com/tianocore/edk2.git or the same repo and branch as Guest edk2 below.

Build:

Code Block
git submodule update --init --recursive
source edksetup.sh
make -j -C BaseTools
export GCC5_AARCH64_PREFIX=aarch64-linux-gnu-
build -b RELEASE -a AARCH64 -t GCC5 -p ArmVirtPkg/ArmVirtQemuKernel.dsc

TF-A

TF-A loads the RMM as well as the Non-secure firmware, and bridges RMM and KVM. It also owns the Granule Protection Table (GPT).

Status: QEMU support is currently under review.

Repo: currently at https://git.codelinaro.org/linaro/dcap/tf-a/trusted-firmware-a branch rmm-v1.0-eac5
official is https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/

Build:

Code Block
# Embed the RMM image and edk2 into the Firmware Image Package (FIP)
make -j CROSS_COMPILE=aarch64-linux-gnu- PLAT=qemu ENABLE_RME=1 DEBUG=1 LOG_LEVEL=40 \
    QEMU_USE_GIC_DRIVER=QEMU_GICV3 RMM=../rmm/build-qemu/Debug/rmm.img \
    BL33=../edk2/Build/ArmVirtQemuKernel-AARCH64/RELEASE_GCC5/FV/QEMU_EFI.fd all fip
# Pack whole image into flash.bin
dd if=build/qemu/debug/bl1.bin of=flash.bin
dd if=build/qemu/debug/fip.bin of=flash.bin seek=64 bs=4096
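The seek=64 bs=4096 arguments place fip.bin at byte offset 64 × 4096 = 0x40000 (256 KiB) inside flash.bin, leaving the first 256 KiB for bl1.bin (which the QEMU platform port of TF-A expects at offset 0). A quick check of the offset arithmetic:

```shell
# fip.bin lands at this byte offset inside flash.bin
printf '0x%x\n' $(( 64 * 4096 ))   # prints 0x40000
```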

Host and guest Linux

Both host and guest need extra patches.

Status: https://lore.kernel.org/linux-arm-kernel/20231002124311.204614-1-suzuki.poulose@arm.com/

Repo: https://gitlab.arm.com/linux-arm/linux-cca cca-full/rmm-v1.0-eac5

Build:

Code Block
make CROSS_COMPILE=aarch64-linux-gnu- ARCH=arm64 defconfig
make CROSS_COMPILE=aarch64-linux-gnu- ARCH=arm64 -j8

Guest edk2

The QEMU VMM can either launch the guest kernel itself, or launch edk2, which then loads the kernel or an intermediate bootloader. The latter method is generally used to boot a Linux distribution. Edk2 needs modifications in order to run as a Realm guest.

Status: in development. Only the ArmVirtQemu firmware supports booting in a Realm at the moment, not ArmVirtQemuKernel.

Repo: https://git.codelinaro.org/linaro/dcap/edk2 branch rmm-v1.0-eac5

Build:

Code Block
git submodule update --init --recursive
source edksetup.sh
make -j -C BaseTools
export GCC5_AARCH64_PREFIX=aarch64-linux-gnu-
build -b DEBUG -a AARCH64 -t GCC5 -p ArmVirtPkg/ArmVirtQemu.dsc

Note that the DEBUG build is very verbose (even with a few patches that remove repetitive messages), which is extremely slow in a nested environment with an emulated UART. Change it to -b RELEASE to speed up the guest boot.

QEMU VMM

Both kvmtool and QEMU can be used to launch Realm guests. For details about kvmtool, see the cover letter for the Linux support above.

Status: in development

Repo: for now https://git.codelinaro.org/linaro/dcap/qemu branch rmm-v1.0-eac5

Build:

Code Block
# Although it is buildroot that builds the VMM from this source directory,
# the following is needed to first download all the submodules
./configure --target-list=aarch64-softmmu

Root filesystem

Buildroot provides a convenient way to build lightweight root filesystems. It can also embed the VMM into the rootfs if you specify the path to kvmtool or QEMU source in a local.mk file in the build directory.

Repo: https://gitlab.com/buildroot.org/buildroot.git
Use the master branch to have up-to-date recipes for building QEMU.

Create local.mk (at the root of the source directory, or in the build directory when building out of tree):

Code Block
QEMU_OVERRIDE_SRCDIR = path/to/qemu/ # Sources of the QEMU VMM
KVMTOOL_OVERRIDE_SRCDIR = path/to/kvmtool/  # if you want to use kvmtool as VMM

Note that after modifying the QEMU VMM sources, the VMM needs to be rebuilt explicitly through buildroot with make qemu-rebuild.

Build:

Code Block
make qemu_aarch64_virt_defconfig
make menuconfig
  # While in menuconfig, enable/disable the following options:
  BR2_LINUX_KERNEL=n
  BR2_PACKAGE_KVMTOOL=y
  BR2_PACKAGE_QEMU=y
  BR2_PACKAGE_QEMU_SYSTEM=y
  BR2_PACKAGE_QEMU_BLOBS=n
  BR2_PACKAGE_QEMU_SLIRP=y
  BR2_PACKAGE_QEMU_CHOOSE_TARGETS=y
  BR2_PACKAGE_QEMU_TARGET_AARCH64=y
  BR2_TARGET_ROOTFS_EXT2_SIZE=256M
  
  # Generate an initrd for the guest
  BR2_TARGET_ROOTFS_CPIO=y
make

This creates the rootfs images in buildroot’s output/images/ when building in-tree, or images/ when building out of tree.

Guest disk image for edk2

To create a guest disk image that more closely resembles a Linux distribution, containing the grub2 bootloader and the kernel, have a look at buildroot's configs/aarch64_efi_defconfig, which enables a few options to generate a disk with an EFI partition:

Code Block
  BR2_PACKAGE_HOST_GENIMAGE=y
  BR2_PACKAGE_HOST_DOSFSTOOLS=y
  BR2_PACKAGE_HOST_MTOOLS=y
  BR2_TARGET_GRUB2=y
  BR2_TARGET_GRUB2_ARM64_EFI=y
  BR2_ROOTFS_POST_IMAGE_SCRIPT="board/aarch64-efi/post-image.sh support/scripts/genimage.sh"
  BR2_ROOTFS_POST_SCRIPT_ARGS="-c board/aarch64-efi/genimage-efi.cfg"

# Copy the guest kernel Image into buildroot's build directory, where it will be
# picked up by genimage.
mkdir buildroot/output/images/
cp linux/arch/arm64/boot/Image buildroot/output/images/Image

make

With these, after generating the root filesystem, buildroot packs it into another disk image images/disk.img, along with an EFI FAT partition that contains grub and the kernel Image (the layout is defined by board/aarch64-efi/genimage-efi.cfg).

QEMU system emulation

Repo: https://gitlab.com/qemu-project/qemu.git or the same repository as the VMM.

Build: do not build in the same source directory as the VMM! Since buildroot copies the whole content of that source directory, binary files will conflict (the VMM is cross-built while the system emulation QEMU is native). If you want to use the same source directory, do use a separate build directory as described here:

Code Block
mkdir -p ../build/qemu/ # outside of the source directory
cd ../build/qemu/
../../qemu/configure --target-list=aarch64-softmmu
make -j

Running the system emulation

QEMU will connect to four TCP ports for the different consoles. Create the servers manually with socat -,rawer TCP-LISTEN:5432x (x = 0, 1, 2, 3) or use the script given at the end.
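For example, the four servers can be started in one go; the xterm wrapper is an assumption (any terminal emulator works), and each server must be listening before QEMU starts:

```shell
# One raw TCP console server per QEMU serial/virtconsole port (54320-54323)
for x in 0 1 2 3; do
    xterm -title "console 5432$x" -e socat -,rawer TCP-LISTEN:5432$x &
done
wait
```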

Code Block
qemu-system-aarch64 -M virt,virtualization=on,secure=on,gic-version=3 \
        -M acpi=off -cpu max,x-rme=on -m 8G -smp 8 \
        -nographic \
        -bios tf-a/flash.bin \
        -kernel linux/arch/arm64/boot/Image

SBSA machine:

Code Block
mkdir cca-v3
cd cca-v3
repo init -u https://git.codelinaro.org/linaro/dcap/op-tee-4.2.0/manifest.git -b cca/v3 -m sbsa_cca.xml
repo sync -j8 --no-clone-bundle
cd build
make -j8 toolchains
make -j8

Note:

  • If the build fails, try without -j. It will point out missing dependencies.

  • Add CLOUDHV=y to build cloud-hypervisor. This requires a rust toolchain >= 1.77.

  • We have recently updated our build environment from OP-TEE 3.22.0 to OP-TEE 4.2.0. A full re-clone of the project is needed to avoid problems.

Images can be found under cca-v3/out/ and cca-v3/out-br/. The following command launches system emulation QEMU with the RME feature enabled, running TF-A, RMM and the Linux host.

Code Block
make run-only

This should launch four new terminals: Firmware, Host, Secure and Realm. Output from the boot process will start flowing in the Firmware terminal, followed by the Host terminal. The build environment automatically makes the cca-v3 directory available to the host VM via 9p.

Read on for the details of the software stack, or skip to the following section to boot a Realm guest.

Manual build

The following sections detail how to build and run all components of the CCA software stack. Two QEMU binaries are built. The system emulation QEMU implements a complete machine, emulating Armv9 CPUs with FEAT_RME and four security states: Root, Secure, Non-secure and Realm. The VMM (Virtual Machine Manager) QEMU is cross-built by buildroot, and launches the realm guest from Non-secure EL0.

Code Block
       |    REALM     |  NON-SECURE  |
-------+--------------+--------------+
  EL0  | Guest Rootfs |  Host Rootfs |
       |              |  QEMU VMM    |
-------+--------------+--------------+
  EL1  |        EDK2  |              |
       | Linux Guest  |              |
       |              |  EDK2        |
-------+--------------+  Linux Host  |
  EL2  |      TF-RMM  |    (KVM)     |
       |              |              |
-------+--------------+--------------+
 (ROOT)|                             |
  EL3  |            TF-A             |
-------+-----------------------------+
  HW   |            QEMU             |
-------+-----------------------------+

Instructions to build the TF-RMM, TF-A and host EDK2 differ based on the QEMU machine selected for system emulation. All other components of the stack are common to both machines.

Manual build instructions for TF-RMM, TF-A and host EDK2 on QEMU-virt

Manual build instructions for TF-RMM, TF-A and host EDK2 on QEMU-sbsa

Host and guest Linux

Both host and guest need extra patches.

Status: https://lore.kernel.org/linux-arm-kernel/20231002124311.204614-1-suzuki.poulose@arm.com/

Repo: https://gitlab.arm.com/linux-arm/linux-cca cca-full/v3

Build:

Code Block
make CROSS_COMPILE=aarch64-linux-gnu- ARCH=arm64 defconfig
# Enable the configfs-tsm driver that provides the attestation interface
scripts/config -e VIRT_DRIVERS -e ARM_CCA_GUEST
make CROSS_COMPILE=aarch64-linux-gnu- ARCH=arm64 -j8

Guest edk2

The QEMU VMM can either launch the guest kernel itself, or launch edk2, which then loads the kernel or an intermediate bootloader. The latter method is generally used to boot a Linux distribution. Edk2 needs modifications in order to run as a Realm guest.

Status: in development. Only the ArmVirtQemu firmware supports booting in a Realm at the moment, not ArmVirtQemuKernel.

Repo: https://git.codelinaro.org/linaro/dcap/edk2 branch cca/v3

Build:

Code Block
git submodule update --init --recursive
source edksetup.sh
make -j -C BaseTools
export GCC5_AARCH64_PREFIX=aarch64-linux-gnu-
build -b DEBUG -a AARCH64 -t GCC5 -p ArmVirtPkg/ArmVirtQemu.dsc

Note that the DEBUG build is very verbose (even with a few patches that remove repetitive messages), which is extremely slow in a nested environment with an emulated UART. Change it to -b RELEASE to speed up the guest boot.

QEMU VMM

Both kvmtool and QEMU can be used to launch Realm guests. For details about kvmtool, see the cover letter for the Linux support above.

Status: in development

Repo: for now https://git.codelinaro.org/linaro/dcap/qemu branch cca/v3

Build:

Code Block
# Although it is buildroot that builds the VMM from this source directory,
# the following is needed to first download all the submodules
./configure --target-list=aarch64-softmmu
make -j

Cloud-hypervisor

Status: in development.

Repo: for now https://git.codelinaro.org/linaro/dcap/cloud-hypervisor branch cca/v3

Build:

Code Block
# Install the aarch64 target if necessary
rustup target add aarch64-unknown-linux-gnu

export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
cargo build --target=aarch64-unknown-linux-gnu --features=arm_rme

Then copy target/aarch64-unknown-linux-gnu/debug/cloud-hypervisor into the Root filesystem or the shared folder.

Root filesystem

Buildroot provides a convenient way to build lightweight root filesystems. It can also embed the VMM into the rootfs if you specify the path to kvmtool or QEMU source in a local.mk file in the build directory.

Repo: https://gitlab.com/buildroot.org/buildroot.git
Use the master branch to have up-to-date recipes for building QEMU.

Create local.mk (at the root of the source directory, or in the build directory when building out of tree):

Code Block
QEMU_OVERRIDE_SRCDIR = path/to/qemu/ # Sources of the QEMU VMM
KVMTOOL_OVERRIDE_SRCDIR = path/to/kvmtool/  # if you want to use kvmtool as VMM

Note that after modifying the QEMU VMM sources, the VMM needs to be rebuilt explicitly through buildroot with make qemu-rebuild.

Build:

Code Block
make qemu_aarch64_virt_defconfig
make menuconfig
  # While in menuconfig, enable/disable the following options:
  BR2_LINUX_KERNEL=n
  BR2_PACKAGE_KVMTOOL=y
  BR2_PACKAGE_QEMU=y
  BR2_PACKAGE_QEMU_SYSTEM=y
  BR2_PACKAGE_QEMU_BLOBS=n
  BR2_PACKAGE_QEMU_SLIRP=y
  BR2_PACKAGE_QEMU_CHOOSE_TARGETS=y
  BR2_PACKAGE_QEMU_TARGET_AARCH64=y
  BR2_TARGET_ROOTFS_EXT2_SIZE=256M
  
  # Generate an initrd for the guest
  BR2_TARGET_ROOTFS_CPIO=y
make

This creates the rootfs images in buildroot’s output/images/ when building in-tree, or images/ when building out of tree.

Guest disk image for edk2

To create a guest disk image that more closely resembles a Linux distribution, containing the grub2 bootloader and the kernel, have a look at buildroot's configs/aarch64_efi_defconfig, which enables a few options to generate a disk with an EFI partition:

Code Block
  BR2_PACKAGE_HOST_GENIMAGE=y
  BR2_PACKAGE_HOST_DOSFSTOOLS=y
  BR2_PACKAGE_HOST_MTOOLS=y
  BR2_TARGET_GRUB2=y
  BR2_TARGET_GRUB2_ARM64_EFI=y
  BR2_ROOTFS_POST_IMAGE_SCRIPT="board/aarch64-efi/post-image.sh support/scripts/genimage.sh"
  BR2_ROOTFS_POST_SCRIPT_ARGS="-c board/aarch64-efi/genimage-efi.cfg"

# Copy the guest kernel Image into buildroot's build directory, where it will be
# picked up by genimage.
mkdir buildroot/output/images/
cp linux/arch/arm64/boot/Image buildroot/output/images/Image

make

With these, after generating the root filesystem, buildroot packs it into another disk image images/disk.img, along with an EFI FAT partition that contains grub and the kernel Image (the layout is defined by board/aarch64-efi/genimage-efi.cfg).

Build the Ubuntu Rootfs

Below is a script that automatically builds the Ubuntu 22.04 rootfs. It consists of several parts:

  • Generate a 4G image, format it and mount it.

  • Download the ubuntu-base filesystem and extract the files into the image.

  • Configure the essential files Ubuntu needs for installing packages.

  • Set up the init process for the Ubuntu rootfs boot.

  • Install essential packages via chroot, to avoid failures bringing up /dev/hvc0 when the Realm boots.

  • Unmount.

NOTE: Please copy the script content below into a file named ubuntu_fs.sh and run it with sudo, as mount and chroot need root permissions.

Code Block
sudo ./ubuntu_fs.sh ubuntu22.img

The above command generates an Ubuntu 22.04 rootfs named ubuntu22.img. Launching it is easy: just change the disk image to ubuntu22.img. It can run either as the Realm host or as the Realm itself for daily development. Tweak the script below to suit your use case.

Code Block
#!/bin/bash

img_name="$1"
directory="ubuntu_fs"
realm_user="realm"
realm_password="realm"

if [[ $EUID -ne 0 ]]; then
   echo "Need to run with sudo"
   exit 1
fi

# Check if the img file name parameter is provided
if [ -z "$img_name" ]; then
    echo "Please provide the generated img file name as the first parameter!"
    exit 1
fi

#Generate a 4GB img file
echo "Generating 4GB img file $img_name ..."
dd if=/dev/zero of="$img_name" bs=1G count=4 || exit 1

# Format the img file as ext4 file system
echo "Formatting the img file as ext4 file system..."
mkfs.ext4 -F "$img_name" || exit 1

# Check if the directory exists
if [ -d "$directory" ]; then
    echo "Directory $directory exists. Deleting its contents..."
    rm -rf "$directory"/*
else
    echo "Directory $directory does not exist. Creating the directory..."
    mkdir "$directory"
fi

# Mount the img file to the ubuntu_fs directory
echo "Mounting the img file to directory $directory..."
mount -o loop "$img_name" "$directory" || exit 1

# Download the ubuntu-base archive
archive_url="https://cdimage.ubuntu.com/ubuntu-base/releases/22.04.4/release/ubuntu-base-22.04.4-base-arm64.tar.gz"
archive_file="ubuntu-base-22.04.4-base-arm64.tar.gz"
echo "Downloading file $archive_file ..."
wget "$archive_url" -P "$directory" || exit 1

# Extract the file
echo "Extracting file $archive_file to directory $directory..."
tar -xf "$directory/$archive_file" -C "$directory"

# Remove the downloaded archive file
echo "Removing downloaded archive file $archive_file ..."
rm "$directory/$archive_file"

# Write nameserver to resolv.conf file
echo "Writing nameserver to $directory/etc/resolv.conf file..."
echo "nameserver 8.8.8.8" > "$directory/etc/resolv.conf"

cat > "$directory/etc/apt/sources.list" << EOF
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy main restricted
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-updates main restricted
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy universe
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-updates universe
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy multiverse
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-updates multiverse
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-backports main restricted universe multiverse
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-security main restricted
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-security universe
deb http://nova.clouds.ports.ubuntu.com/ubuntu-ports/ jammy-security multiverse
EOF

# Switch to chroot environment and execute apt command
echo "Switching to chroot environment and executing apt command..."
mount -t proc /proc $directory/proc
mount -t sysfs /sys $directory/sys
mount -o bind /dev $directory/dev
mount -o bind /dev/pts $directory/dev/pts
chroot "$directory" /bin/bash -c "apt update -y" || exit 1

# Create a new user with sudo privileges
echo "Creating user $realm_user with sudo privileges..."
chroot "$directory" /bin/bash -c "useradd -m -s /bin/bash -G sudo $realm_user" || exit 1

# Set the password for the new user
echo "Setting password for user $realm_user..."
echo "$realm_user:$realm_password" | chroot "$directory" /bin/bash -c "chpasswd" || exit 1

chroot "$directory" /bin/bash -c "chmod 1777 /tmp" || exit 1

echo "Generate the init file"
cat > "$directory/init" << EOF
#!/bin/sh

[ -d /dev ] || mkdir -m 0755 /dev
[ -d /root ] || mkdir -m 0700 /root
[ -d /sys ] || mkdir /sys
[ -d /proc ] || mkdir /proc
[ -d /tmp ] || mkdir /tmp
mkdir -p /var/lock
mount -t sysfs -o nodev,noexec,nosuid sysfs /sys
mount -t proc -o nodev,noexec,nosuid proc /proc
# Some things don't work properly without /etc/mtab.
ln -sf /proc/mounts /etc/mtab

grep -q '\<quiet\>' /proc/cmdline || echo "Loading, please wait..."

# Note that this only becomes /dev on the real filesystem if udev's scripts
# are used; which they will be, but it's worth pointing out
if ! mount -t devtmpfs -o mode=0755 udev /dev; then
        echo "W: devtmpfs not available, falling back to tmpfs for /dev"
        mount -t tmpfs -o mode=0755 udev /dev
        [ -e /dev/console ] || mknod -m 0600 /dev/console c 5 1
        [ -e /dev/null ] || mknod /dev/null c 1 3
fi
mkdir /dev/pts
mount -t devpts -o noexec,nosuid,gid=5,mode=0620 devpts /dev/pts || true
mount -t tmpfs -o "noexec,nosuid,size=10%,mode=0755" tmpfs /run
mkdir /run/initramfs
# compatibility symlink for the pre-oneiric locations
ln -s /run/initramfs /dev/.initramfs 

# Set modprobe env
export MODPROBE_OPTIONS="-qb"

# mdadm needs hostname to be set. This has to be done before the udev rules are called!
if [ -f "/etc/hostname" ]; then
        /bin/hostname -b -F /etc/hostname 2>&1 1>/dev/null
fi

exec /sbin/init
EOF
chmod +x $directory/init || exit 1

chroot "$directory" /bin/bash -c "apt install systemd iptables -y" || exit 1
chroot "$directory" /bin/bash -c "ln -s /lib/systemd/systemd /sbin/init" || exit 1

echo "Installing other essential components (avoids boot blocking when bringing up /dev/hvc0)"
chroot "$directory" /bin/bash -c "apt install vim bash-completion net-tools iputils-ping ifupdown ethtool ssh rsync udev htop rsyslog curl openssh-server apt-utils dialog nfs-common psmisc language-pack-en-base sudo kmod apt-transport-https -y" || exit 1
# Unmount the mounted directory
echo "Unmounting the mounted directory $directory ..."
umount $directory/proc
umount $directory/sys
umount $directory/dev/pts
umount $directory/dev
umount "$directory"

echo "Operation completed!"

QEMU system emulation

Repo: https://gitlab.com/qemu-project/qemu.git or the same repository as the VMM.

Build: do not build in the same source directory as the VMM! Since buildroot copies the whole content of that source directory, binary files will conflict (the VMM is cross-built while the system emulation QEMU is native). If you want to use the same source directory, do use a separate build directory as described here:

Code Block
mkdir -p ../build/qemu/ # outside of the source directory
cd ../build/qemu/
../../qemu/configure --target-list=aarch64-softmmu
make -j

Running the system emulation

QEMU will connect to four TCP ports for the different consoles. Create the servers manually with socat -,rawer TCP-LISTEN:5432x (x = 0, 1, 2, 3) or use the script given at the end.

QEMU-virt startup script:

Code Block
# The -serial/-chardev parameters below use separate consoles for Firmware
# (port 54320), Secure payload (54321), host (54322) and guest (54323).
# The 9p device shares the current directory with the host, providing the
# files needed to launch the guest.
qemu-system-aarch64 -M virt,virtualization=on,secure=on,gic-version=3 \
        -M acpi=off -cpu max,x-rme=on -m 8G -smp 8 \
        -nographic \
        -bios trusted-firmware-a/flash.bin \
        -kernel linux-cca/arch/arm64/boot/Image \
        -drive format=raw,if=none,file=buildroot/output/images/rootfs.ext4,id=hd0 \
        -device virtio-blk-pci,drive=hd0 \
        -nodefaults \
        -serial tcp:localhost:54320 \
        -serial tcp:localhost:54321 \
        -chardev socket,mux=on,id=hvc0,port=54322,host=localhost \
        -device virtio-serial-device \
        -device virtconsole,chardev=hvc0 \
        -chardev socket,mux=on,id=hvc1,port=54323,host=localhost \
        -device virtio-serial-device \
        -device virtconsole,chardev=hvc1 \
        -append "root=/dev/vda console=hvc0" \
        -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
        -device virtio-9p-device,fsdev=shr0,mount_tag=shr0 \
        -fsdev local,security_model=none,path=.,id=shr0

QEMU-sbsa startup script:

Code Block
# The -serial/-chardev parameters below use separate consoles for Firmware
# (port 54320), Secure payload (54321), host (54322) and guest (54323).
# The 9p device shares the build directory with the host, providing the files
# needed to launch the guest.
qemu-system-aarch64 \
        -machine sbsa-ref -m 8G \
        -cpu max,x-rme=on,sme=off \
        -drive file=images/SBSA_FLASH0.fd,format=raw,if=pflash \
        -drive file=images/SBSA_FLASH1.fd,format=raw,if=pflash \
        -drive file=fat:rw:images/disks/virtual,format=raw \
        -drive format=raw,if=none,file=buildroot/output/images/rootfs.ext4,id=hd0 \
        -device virtio-blk-pci,drive=hd0 \
        -nodefaults \
        -serial tcp:localhost:54320 \
        -serial tcp:localhost:54321 \
        -chardev socket,mux=on,id=hvc0,port=54322,host=localhost \
        -device virtio-serial-pci \
        -device virtconsole,chardev=hvc0 \
        -chardev socket,mux=on,id=hvc1,port=54323,host=localhost \
        -device virtio-serial-pci \
        -device virtconsole,chardev=hvc1 \
        -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
        -device virtio-9p-pci,fsdev=shr0,mount_tag=shr0 \
        -fsdev local,security_model=none,path=../../,id=shr0

Crucially, the x-rme=on parameter enables the (experimental) FEAT_RME.

...

Code Block
#!/bin/sh

USE_VIRTCONSOLE=true
USE_EDK2=false
USE_INITRD=true
DIRECT_KERNEL_BOOT=true
USE_OPTEE_BUILD=true
VM_MEMORY=512M

if $USE_OPTEE_BUILD; then
    KERNEL=/mnt/out/bin/Image
    INITRD=/mnt/out-br/images/rootfs.cpio
    EDK2=TODO
    DISK=TODO
else
    # Manual method:
    KERNEL=/mnt/linux-cca/arch/arm64/boot/Image
    INITRD=/mnt/buildroot/output/images/rootfs.cpio
    EDK2=/mnt/edk2/Build/ArmVirtQemu-AARCH64/DEBUG_GCC5/FV/QEMU_EFI.fd
    DISK=/mnt/buildroot/output/images/disk.img
fi

add_qemu_arg () {
    QEMU_ARGS="$QEMU_ARGS $@"
}
add_kernel_arg () {
    KERNEL_ARGS="$KERNEL_ARGS $@"
}

add_qemu_arg -M virt,acpi=off,gic-version=3 -cpu host -enable-kvm
add_qemu_arg -smp 2 -m $VM_MEMORY -overcommit mem-lock=on
add_qemu_arg -M confidential-guest-support=rme0
add_qemu_arg -object rme-guest,id=rme0,measurement-algo=sha512,num-pmu-counters=6,sve-vector-length=256
add_qemu_arg -device virtio-net-pci,netdev=net0,romfile=""
add_qemu_arg -netdev user,id=net0

if $USE_VIRTCONSOLE; then
    add_kernel_arg console=hvc0
    add_qemu_arg -nodefaults
    add_qemu_arg -chardev stdio,mux=on,id=hvc0,signal=off
    add_qemu_arg -device virtio-serial-pci -device virtconsole,chardev=hvc0
else
    add_kernel_arg console=ttyAMA0 earlycon
    add_qemu_arg -nographic
fi

if $USE_EDK2; then
    add_qemu_arg -bios $EDK2
fi

if $DIRECT_KERNEL_BOOT; then
    add_qemu_arg -kernel $KERNEL
else
    $USE_INITRD && echo "Initrd requires direct kernel boot" && exit 1
fi

if $USE_INITRD; then
    add_qemu_arg -initrd $INITRD
else
    add_qemu_arg -device virtio-blk-pci,drive=rootfs0
    add_qemu_arg -drive format=raw,if=none,file="$DISK",id=rootfs0
    add_kernel_arg root=/dev/vda2
fi

$USE_EDK2 && $USE_VIRTCONSOLE && ! $USE_INITRD && \
    echo "Don't forget to add console=hvc0 to grub.cfg"

if $DIRECT_KERNEL_BOOT; then
    set -x
    qemu-system-aarch64 $QEMU_ARGS  \
        -append "$KERNEL_ARGS"      \
            </dev/hvc1 >/dev/hvc1
else
    set -x
    qemu-system-aarch64 $QEMU_ARGS  \
            </dev/hvc1 >/dev/hvc1
fi

The -M confidential-guest-support=rme0 and -object rme-guest,id=rme0,measurement-algo=sha512,num-pmu-counters=6,sve-vector-length=256 parameters declare this as a Realm VM and configure its parameters. Do note that the syntax will change as we aim to reuse existing QEMU parameters (notably SVE and PMU).

Save this as an executable file in the shared folder and, in the host, launch it with:

Code Block
/mnt/realm.sh

...

You should see RMM logs in the Firmware terminal:

Code Block
# RMI (Realm Management Interface) is the protocol that the host uses to
# communicate with the RMM
SMC_RMM_REC_CREATE            45659000 456ad000 446b1000 > RMI_SUCCESS
SMC_RMM_REALM_ACTIVATE        45659000 > RMI_SUCCESS

# RSI (Realm Service Interface) is the protocol that the guest uses to
# communicate with the RMM
SMC_RSI_ABI_VERSION           > d0000
SMC_RSI_REALM_CONFIG          41afe000 > RSI_SUCCESS
SMC_RSI_IPA_STATE_SET         40000000 60000000 1 0 > RSI_SUCCESS 60000000

Followed a few minutes later by the guest kernel starting in the Realm terminal.

Launching a Realm guest using the KVMTool

Code Block
lkvm run --realm -c 2 -m 2G -k /mnt/out/bin/Image -d /mnt/out-br/images/rootfs.ext4 --restricted_mem -p "console=hvc0 root=/dev/vda" < /dev/hvc1 > /dev/hvc1

Launching a Realm guest using cloud-hypervisor

This example uses a macvtap interface to connect the guest to the host network. CONFIG_MACVTAP needs to be 'y' in the host kernel config.
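The option can be checked and enabled with the kernel's scripts/config helper (already used for the guest config above) before rebuilding the host kernel; the linux-cca path is an assumption:

```shell
# From the host kernel source tree: enable macvtap support
# (dependencies permitting), then refresh the config and rebuild
cd linux-cca
scripts/config -e MACVTAP
make CROSS_COMPILE=aarch64-linux-gnu- ARCH=arm64 olddefconfig
```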

Code Block
ip link add link eth0 name macvtap0 type macvtap
ip link set macvtap0 up

tapindex=$(cat /sys/class/net/macvtap0/ifindex)
tapaddress=$(cat /sys/class/net/macvtap0/address)
tapdevice="/dev/tap$tapindex"

/mnt/out/bin/cloud-hypervisor --platform arm_rme=on --kernel /mnt/out/bin/Image --disk path=/mnt/out-br/images/rootfs.ext4 --cpus boot=2 --memory size=512M --net fd=3,mac=$tapaddress --cmdline "console=hvc0 root=/dev/vda" < /dev/hvc1 > /dev/hvc1 3<>$tapdevice

Running edk2 as a guest

Enable USE_EDK2 to boot the Realm guest with the edk2 firmware. It can either load kernel and initrd through the FwCfg device provided by QEMU (DIRECT_KERNEL_BOOT=true), or launch a bootloader from a disk image (see grub2 above). When booting the kernel directly, edk2 measures the kernel, initrd and parameters provided on the QEMU command-line and adds them to the Realm Extended Measurement via RSI calls, so that they can be attested later.

...

  • Disable USE_VIRTCONSOLE in order to see all boot logs. Doing this enables the emulated PL011 serial and is much slower. Although edk2 does support virtio-console, it doesn’t display the debug output there (but you’ll still see RMM logs showing progress during boot).

  • When booting via grub2, the kernel parameters are stored in grub.cfg, which is copied from board/aarch64-efi/grub.cfg by the buildroot script board/aarch64-efi/post-image.sh. By default the kernel parameters do not define a console, so Linux will determine the boot console from the device tree's /chosen/stdout-path property, which QEMU initializes to the default serial console. So if you want to boot with virtconsole, add console=hvc0 to board/aarch64-efi/grub.cfg before running the buildroot make.

Attestation Proof of Concept

A demonstration application called cca-workload-attestation has been integrated into the root file system. From a Realm VM, it provides users with the capability to query the RMM for a CCA attestation token, which can either be printed to the console or saved to a file. It also demonstrates a typical interaction with an attestation service by communicating the CCA attestation token to a local instance of the Veraison services. Details on cca-workload-attestation, the Veraison services and the endorser that populates the endorsement values can be found here.

Tips

Automate some things in the host boot

...