Enabling RB5 for Orko Demo Proposal

The idea for this would be to replicate the main AVA Orko demo on the RB5 platform where:

  • Same host TRS image

  • Same Android guest image

  • Platform-specific SystemReady firmware

  • Utilising Gunyah instead of Xen as the hypervisor

Thus demonstrating a common Android guest, utilising a virtualised set of multi-media peripherals, that is able to run across both platforms and hypervisors.

The Hardware

The RB5 uses QC’s Kryo-based SoC with a 1+3+4 core layout (one prime core, three performance cores and four efficiency cores). The GPU is QC’s own Adreno, which is well supported by Mesa’s Freedreno driver. Gunyah already supports the platform.

Baseline Requirements

To be viable, we make the following assumptions about the platform:

  • A SystemReady boot loader

  • A from source build of Gunyah

  • We can install a common TRS image on the system

  • We can run the glmark2-vulkan test suite on TRS

    • (NB: we need Vulkan 1.2 support for virtio-vulkan; a quick check is sketched after this list)

  • We have a working integration of Gunyah and QEMU
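
As a quick sanity check of the Vulkan requirement, something like the following could be run on the TRS host once the Freedreno/Mesa stack is in place (vulkaninfo is part of the standard Vulkan tools; whether it is already in the TRS image is an assumption):

    # Confirm the Vulkan API version exposed by the driver stack;
    # we need apiVersion >= 1.2 for the virtio-vulkan (Venus) path.
    vulkaninfo | grep -i apiVersion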

Work Packages

Enabling in TRS

Adding a “meta-qcom” layer (a rough configuration sketch follows this list) would allow us to:

  • build Gunyah

  • apply in-flight Gunyah patches for Linux

  • apply in-flight Gunyah patches for QEMU

  • enable Freedreno drivers for kernel + mesa

  • generate an image suitable for installation
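
A rough sketch of what this could look like in the TRS build configuration, assuming the upstream meta-qcom layer is used; the layer path, machine name and PACKAGECONFIG values below are illustrative and would need checking against the actual TRS manifest and meta-qcom metadata:

    # conf/bblayers.conf: add the Qualcomm BSP layer (path is illustrative)
    BBLAYERS += "${TOPDIR}/../layers/meta-qcom"

    # conf/local.conf: target the RB5 and enable the Freedreno/Mesa graphics
    # stack, including the Vulkan driver
    MACHINE = "qcom-armv8a"
    PACKAGECONFIG:append:pn-mesa = " freedreno vulkan"

    # In-flight Gunyah patches for Linux and QEMU would be carried as
    # .bbappend + patch files in the same layer until they land upstream.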

Gunyah Work

As we currently understand it, Gunyah is launched by the firmware and then loads the primary Linux image. It would be nice if we could follow the same GRUB-based boot flow as Xen on AVA, but it’s not critical. We would, however, want all the SoC device support in Gunyah to be in the open source code base; it’s not currently clear what SoC-specific changes are needed for each platform. Usually the serial ports and SMMU are the main hypervisor BSP components, and as I understand it the Kryo-based SoC uses standard Arm IP blocks for its SMMU.

Porting Gunyah to AVA would be another task, which may help with comparing and debugging between the two systems, but we don’t currently understand what would be required for that.

For virtio-gpu we need to present a PCI interface to the Android guest. This doesn’t need to be a fully virtual PCI interface managed by the hypervisor, so we should be able to use QEMU’s existing PCI model.
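
To illustrate, assuming QEMU’s existing PCI device models can be used unchanged on top of Gunyah (the machine and accelerator options are deliberately elided, and the Vulkan/Venus path additionally depends on the in-flight virtio patches):

    # Illustrative QEMU fragment only: expose a PCI virtio-gpu to the Android guest
    qemu-system-aarch64 \
        ... \
        -device virtio-gpu-gl-pci \
        -display gtk,gl=on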

As we are finding with Xen, there will certainly be some rough edges to work out while debugging, so we would need a Gunyah/QEMU engineer who is able to debug the whole VirtIO transaction from guest to QEMU.

Network and Block Backends

For simplicity we are using Xen’s PV backends for networking and block storage in our demo. For Gunyah we would presumably need to use full VirtIO via QEMU instead.
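
If that turns out to be the case, QEMU’s existing virtio-pci backends should cover it; a minimal sketch, where the image name and user-mode networking are just placeholders:

    # Full VirtIO block and network over PCI using QEMU's standard backends
    -drive file=android-data.img,format=raw,id=disk0,if=none \
    -device virtio-blk-pci,drive=disk0 \
    -netdev user,id=net0 \
    -device virtio-net-pci,netdev=net0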

Enabling Mixed Criticality

The mixed criticality demo is fairly simple: the RTOS talks to the system timer and measures the latency of its response when running a task. We don’t expect to need to make changes to the workload.

However, to support mixed criticality, Gunyah needs to support vCPU pinning with no scheduling of other VMs on those vCPUs. In the Xen world this is referred to as “nullsched”.
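
For reference, on the Xen/AVA side this is achieved with the null scheduler plus hard vCPU pinning; a Gunyah equivalent still needs to be identified. Roughly, the Xen-side setup looks like this (the CPU number is illustrative):

    # Xen boot parameter (e.g. on the GRUB multiboot line): the null scheduler
    # statically assigns one vCPU per pCPU with no migration between them
    sched=null

    # xl guest config for the RTOS domain: pin its single vCPU to a dedicated pCPU
    vcpus = 1
    cpus = "3"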

There would also be some porting work to support the OpenAMP-like shared-memory interface used to read the status of the RTOS demo.