...
```
sudo apt-get install git-email
sudo apt-get install libaio-dev libbluetooth-dev libcapstone-dev libbrlapi-dev libbz2-dev
sudo apt-get install libcap-ng-dev libcurl4-gnutls-dev libgtk-3-dev
sudo apt-get install libibverbs-dev libjpeg-dev libncurses5-dev libnuma-dev
sudo apt-get install librbd-dev librdmacm-dev
sudo apt-get install libsasl2-dev libsdl2-dev libseccomp-dev libsnappy-dev libssh-dev
sudo apt-get install libvde-dev libvdeplug-dev libvte-2.91-dev libxen-dev liblzo2-dev
sudo apt-get install valgrind xfslibs-dev
```
Step 2: Download the QEMU source code. We now maintain the QEMU repo at: https://gitlab.com/Linaro/blueprints/automotive/xen-aosp/qemu . Akashi-san maintains his own QEMU repo for enabling the virtual GPU, but building that QEMU depends on other libraries, e.g. virglrenderer and the Mesa libs. I think the better way is to land the changes in a central place, so this document uses the previous QEMU repo for building:
```
$ mkdir /home/leo.yan/local   # Create a folder to install QEMU binaries
$ git clone https://gitlab.com/Linaro/blueprints/automotive/xen-aosp/qemu
$ cd qemu
$ mkdir build
$ cd build
$ ../configure --prefix=/home/leo.yan/local --target-list=aarch64-softmmu,i386-softmmu --enable-kvm --enable-xen --disable-werror --enable-slirp --enable-opengl --enable-virglrenderer --enable-gtk --enable-sdl
$ make
$ make install
```
As a result, you can see the QEMU libs and binaries are installed in the folder /home/leo.yan/local/:
```
leo.yan@ampere-bullseye:~$ ls /home/leo.yan/local/
bin  include  libexec  share  var
```
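A quick way to verify the installation is to check that the freshly built binary runs:
```
$ /home/leo.yan/local/bin/qemu-system-aarch64 --version
```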
Build Xen toolkit
Step 1: Download the Xen repo:
```
$ git clone https://gitlab.com/Linaro/blueprints/automotive/xen-aosp/xen
```
Step 2: Configure:
```
$ ./configure --prefix=/usr --libdir=/usr/lib64 --disable-docs \
    --disable-golang --disable-ocamltools --enable-ioreq-server \
    --with-system-qemu=/home/leo.yan/local/bin/qemu-system-aarch64
```
Step 3: Build Xen. In this step, it’s suggested to build a Debian package:
```
$ make debball
```
After the build, the Debian package and built files are located in the dist folder:
```
leo.yan@ampere-bullseye:~/xen$ cd dist/
leo.yan@ampere-bullseye:~/xen/dist$ ls
COPYING  README  install  install.sh  xen-upstream-4.18.0.deb
```
Step 4: Install Xen on the system.
There are two ways to install Xen. One is to install the Debian package:
```
$ sudo dpkg -i xen-upstream-4.18.0.deb
```
Or, you can use the script install.sh for the installation:
```
$ sudo ./install.sh
```
As a result, you can see the Xen toolkit binary xl is installed at /usr/local/sbin/xl and the service script is placed at /etc/init.d/xencommons.
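If you want the Xen services to come up on every boot, you can register the init script. This step is optional and assumes the board uses Debian's sysv-style init tooling:
```
$ which xl                              # should print /usr/local/sbin/xl
$ sudo update-rc.d xencommons defaults  # enable the service at boot (optional)
```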
Setting up after the system boots
On the AVA board in the Cambridge lab, GRUB has an entry DEBUG: boot with Xen hypervisor. You need to select this entry to boot the Xen hypervisor. Once Xen dom0 has booted, you can use the serial console to set up DHCP for the network port:
```
$ sudo dhclient enP3p2s0f0
```
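You can confirm that the interface has obtained an IP address before going further:
```
$ ip addr show enP3p2s0f0
```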
Afterwards, you can connect to the board with ssh.
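For example (the address below is a placeholder; use the address reported by the previous command):
```
$ ssh leo.yan@<board-ip-address>
```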
You also need to start the Xen services with the init script:
```
$ sudo /etc/init.d/xencommons start
Setting domain 0 name, domid and JSON config...
Dom0 is already set up
Starting xenconsoled...
Starting QEMU as disk backend for dom0
qemu-system-aarch64: unsupported machine type
Use -machine help to list supported machines
```
Issue: here it reports qemu-system-aarch64: unsupported machine type. For now we can ignore this error, as I don’t see any issue for Xen dom0 and domU. Please see the testing results below.
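If you want to dig into this warning, the error message itself suggests listing the machine types the freshly built QEMU supports:
```
$ /home/leo.yan/local/bin/qemu-system-aarch64 -machine help
```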
Testing
The command sudo xl list can be used to do a smoke test:
```
leo.yan@ampere-bullseye:~/xen/dist$ sudo xl list
Name              ID    Mem  VCPUs  State   Time(s)
Domain-0           0  32768     80  r-----     45.0
```
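Beyond xl list, a couple of other standard xl subcommands are useful for sanity checking the hypervisor:
```
$ sudo xl info     # print hypervisor and host details
$ sudo xl dmesg    # dump the Xen hypervisor console log
```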
A Xen configuration file, a kernel image and a ramdisk image are placed in the folder /home/leo.yan/test_xen_vm:
```
leo.yan@ampere-bullseye:~$ cd test_xen_vm/
leo.yan@ampere-bullseye:~/test_xen_vm$ ls
Image.gz  startvm.cfg  xen_guest_image.cpio.gz
```
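For reference, below is a minimal sketch of what a guest configuration such as startvm.cfg might contain. The name, memory and vcpus values match the xl list output shown later; the extra kernel command line is an assumption for illustration, and the actual file in the folder above is authoritative:
```
# Hypothetical minimal guest config; check against the real startvm.cfg
name    = "guest"
kernel  = "/home/leo.yan/test_xen_vm/Image.gz"
ramdisk = "/home/leo.yan/test_xen_vm/xen_guest_image.cpio.gz"
memory  = 512
vcpus   = 2
# Assumed kernel command line; adjust the console and rootfs to your guest image
extra   = "console=hvc0 root=/dev/ram0"
```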
You can launch the Xen VM with the command:
```
$ sudo xl -vvv create startvm.cfg -c
```
As a result, the xl list command shows the newly created virtual machine:
```
$ sudo xl list
Name              ID    Mem  VCPUs  State   Time(s)
Domain-0           0  32768     80  r-----     47.1
guest              1    512      2  -b----      0.6
```
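Once the guest is running, these standard xl subcommands let you attach to its console and tear the guest down again:
```
$ sudo xl console guest    # attach to the guest console (exit with Ctrl-])
$ sudo xl shutdown guest   # request a graceful shutdown
$ sudo xl destroy guest    # force-stop the guest if shutdown hangs
```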