In order to test the HPC Lab's infrastructure thoroughly, an exact replica of it has been created as a libvirt-managed cluster of VMs.
This article will detail the steps necessary to build one such testing lab on your own workstation.
Before starting, note the configuration of the laptop on which the test lab runs:
- Linux 4.9.0-6-amd64 - Debian (stretch) 4.9.82-1+deb9u3 (2018-03-02) x86_64 GNU/Linux
- Intel(R) Core(TM) i7-4720HQ CPU
- 16 GB of RAM (it ran on 8 GB as well)
Libvirt setup
In order to deploy and manage the VMs, the HPC infrastructure team recommends using virt-manager.
Installing this package will pull in libvirt and QEMU along with all of their dependencies.
root@localhost:~# apt install -y virt-manager
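If needed, you can check which libvirt version was installed (this matters for the EFI warnings below, which expect at least 3.0.0):
root@localhost:~# virsh version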
At the moment, the VMs used as provisioning subjects (replacing the HPC nodes of the real lab) are x86_64: the architecture makes no difference at this stage, and the arm64 QEMU system emulator distributed via the Debian repos is not stable.
Despite this, to follow the real nodes' behaviour more closely, they use UEFI firmware (via OVMF). To PXE boot, one also needs to update the network card ROMs to iPXE (this might not be strictly required, but it is highly recommended).
MrP's and Jenkins' VMs are not EFI enabled on the hpc-admin, so they do not need to be on the testing rig (it is neither required nor useful).
root@localhost:~# apt install -y ipxe-qemu ovmf
To enable EFI on a VM, select the "Customize configuration before install" option, then edit the firmware setting and select x86_64-efi (OVMF.fd).
WARNINGS: Please ensure you have a recent version of libvirt (at least 3.0.0) so that the '-pflash' option is used automatically to run the EFI firmware; this is the most stable and feature-complete method.
Please ensure the video card mode of the EFI enabled VM is set to VGA, and the Disk to IDE.
You cannot clone a BIOS-based VM's disk, use it on an EFI VM, and expect it to work out of the box: EFI will not pick up GRUB and you will be dropped to a shell. Recovery might be possible but is outside the scope of this page.
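A quick way to check whether an existing VM is EFI enabled is to look for a loader entry in its domain XML (here <vm_name> is a placeholder for the domain name):
root@localhost:~# virsh dumpxml <vm_name> | grep -i loader
An EFI VM will show an OVMF loader line; a BIOS VM will show nothing.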
Network Setup
Now that libvirt and qemu are properly set up, it is time for the network configuration.
Using virt-manager, go to Edit > Connection Details > Virtual Networks (tab), click on the plus sign to add a new network, and create:
- A 'mrp_provisioning' network : 10.40.0.0/16, No DHCP, NAT forwarding enabled.
- A 'bmcmrp' network : 10.41.0.0/16, No DHCP, Internal networking, internal and host only routing.
These networks replicate the two VLANs in use in the HPC lab.
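If you prefer the command line over the virt-manager dialog, the same two networks can be defined with virsh from XML files along these lines (a minimal sketch: the file names and the host-side address 10.41.0.1 are assumptions; leaving out the <dhcp> element gives 'No DHCP', and leaving out <forward> makes bmcmrp an isolated, host-routed network):

mrp_provisioning.xml:
<network>
  <name>mrp_provisioning</name>
  <forward mode='nat'/>
  <ip address='10.40.0.1' netmask='255.255.0.0'/>
</network>

bmcmrp.xml:
<network>
  <name>bmcmrp</name>
  <ip address='10.41.0.1' netmask='255.255.0.0'/>
</network>

root@localhost:~# virsh net-define mrp_provisioning.xml
root@localhost:~# virsh net-define bmcmrp.xml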
WARNING: Note that you won't be able to have both the VPN connection to the actual lab and these networks up at the same time, as Linux, despite being wonderful, cannot magically guess whether your 10.40 or 10.41 packets should be routed to the lab or to the local VMs.
To mitigate this, we recommend not enabling autostart on the libvirt networks. To start them, use the command:
root@localhost:~# virsh net-start <network_name>
To stop them, use the command:
root@localhost:~# virsh net-destroy <network_name>
Without the networks started, you will not be able to start the VMs whose interfaces are attached to those networks.
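To check which networks are defined and which are currently active:
root@localhost:~# virsh net-list --all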
MrP and Jenkins setup
Once this is done, all that is left is to set up the MrP and Jenkins machines, as well as the BMC emulation for the test HPC nodes.
Clone the repo containing the labconf setup files as detailed in the 'New Lab Setup' article found here. As this is a testing lab, use the branch currently named 'bger', or at least look at the differences between the production/master branch and that one.
Note: The testing/bger branch of the labconf repo has the latest versions of everything and might be unstable.
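For example (<labconf_repo_url> stands for the repository URL given in the New Lab Setup article):
root@localhost:~# git clone <labconf_repo_url> labconf
root@localhost:~# cd labconf
root@localhost:~# git checkout bger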
As of the writing of this article, the scripts found in the kvm directory cannot be used to properly set up the VMs on the testing rig.
We recommend opening them and setting up the VMs manually, following the general setup of the production machines. Installation is standard; use the same IPs as on the production lab (a sketch of the resulting network configuration follows the list below), i.e.:
- For the MrP machine: 10.40.0.11 on the mrp_provisioning network, then configure the bmcmrp network interface to be 10.41.0.10
- For the Jenkins machine: 10.40.0.12 on the mrp_provisioning network, and no interface on the bmcmrp network.
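As an illustration, a minimal /etc/network/interfaces fragment for the MrP VM could look like the following; the interface names ens9 and ens10 are an assumption (see the MrP Ansible setup section below), and 10.40.0.1 is the host's address on the mrp_provisioning network:

auto ens9
iface ens9 inet static
    address 10.40.0.11
    netmask 255.255.0.0
    gateway 10.40.0.1

auto ens10
iface ens10 inet static
    address 10.41.0.10
    netmask 255.255.0.0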
Once the machines have Debian installed, install your SSH keys (the virt-manager graphical console is sometimes quirky and does not allow copy-pasting).
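For example, from the workstation (assuming root logins with a password are temporarily allowed over SSH on the VMs):
root@localhost:~# ssh-copy-id root@10.40.0.11
root@localhost:~# ssh-copy-id root@10.40.0.12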
Now you can follow the instructions in the New Lab Setup article concerning the Ansible setup of the VMs, except for some details concerning MrP.
MrP Ansible setup
Now that the machines are running and the network is configured, you can apply the Ansible scripts.
To enable automatic PXE booting of the simulated HPC nodes, you need a way to tell libvirt to do this.
OpenIPMI does come with an ipmi_sim tool allowing you to send IPMI commands to the VMs, and it mostly works, but it has its quirks and it does not support PXE booting.
To do this, one needs to modify the XML configuration of the PXE-booted VM. This requires either binding to libvirt's API to fetch the XML, or accessing the virsh tool and fetching the XML from that.
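For reference, making a VM PXE boot essentially amounts to putting the network device first in the <os> boot order of the domain XML, roughly like this (a sketch, not necessarily the exact XML the tool produces):
<os>
  ...
  <boot dev='network'/>
  <boot dev='hd'/>
</os>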
Because TLS support in the libvirt API was not known about at the time, a tool named libvirt_http_mc was created: it runs a Python HTTP web service on the libvirt/QEMU hypervisor host, calls the virsh tool, and reorders the boot devices in the XML.
Upstreaming support for a libvirt 'BMC' in MrP is ongoing, and a rewrite of the whole tool to do this via the qemu+tls protocol and Python API calls is also in progress.
At the moment this is still not ready, so we will detail the setup of MrP with libvirt_http_mc.
First, open the Ansible playbook mrp_setup.yml, which can be found in ans_setup_mrp.
Note the names of the interfaces on your MrP VM connected to the mrp_provisioning and bmcmrp networks. On the production machines those are respectively 'ens2' and 'ens3'; in our experience, on the testing VM lab they can be 'ens9' and 'ens10'.
Replace 'ens2' and 'ens3' accordingly (using :%s/ens2/ens9/gc in vim, for example). If you are using the 'testing' or 'bger' branch of the labconf repo (and you should at least be looking at it), those interfaces are named 'ens9' and 'ens10' by default.
Now, to use libvirt_http_mc you need the 0.3.0 version of MrP released here: https://github.com/BaptisteGerondeau/mr-provisioner/releases/download/v0.3.0bger/mr-provisioner-0.3.0.tar.gz
WARNING : THIS RELEASE IS NOT OFFICIAL, UPSTREAMING OF LIBVIRT 'BMC' SUPPORT IS ONGOING. DO NOT POST ISSUES ON THE OFFICIAL Mr Provisioner GITHUB ABOUT THIS RELEASE.
Modify the mrp_setup.yml playbook accordingly (if you are not already using the testing/bger branch).
Now you can run the MrP Ansible playbook to set up MrP on the VM (do not forget to update your /etc/ansible/hosts file and copy your SSH key into the VM for root).
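For example, from the ans_setup_mrp directory (assuming the MrP VM is already listed in /etc/ansible/hosts):
root@localhost:~# ansible-playbook mrp_setup.yml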
Libvirt_http_mc
The final step is to set up libvirt_http_mc. First, read the README for more information about the tool: https://github.com/BaptisteGerondeau/libvirt_http_mc/blob/master/README.md
To install dependencies:
root@localhost:~# apt install -y python3 python3-pip
root@localhost:~# python3 -m pip install cherrypy requests
Now clone the repo, go into the directory, and launch the service:
root@localhost:~# git clone https://github.com/BaptisteGerondeau/libvirt_http_mc.git
root@localhost:~# cd libvirt_http_mc
root@localhost:~# python3 libvirt_mc.p
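One way to check that the service is listening on its port:
root@localhost:~# ss -tlnp | grep 9001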
NOTE: Please ensure that your firewall is not blocking incoming traffic on port 9001.
Please ensure that the VMs you want to be PXE booted have the PXE network interface enabled in the 'Boot Options' menu of virt-manager.
Please ensure that the VMs have the same name both in MrP and in virt-manager.
Configuring MrP for libvirt_http_mc
Now that everything is running, create a test VM that will serve as a dummy HPC node to be provisioned. Create an interface on the mrp_provisioning network, and enable it in the Boot Options menu.
In MrP, add a new BMC, using BMC type 'libvirt' and address 10.40.0.1 (or assign an IP to the virbr bridge on the bmcmrp network and use that one). Username and password are not used.
Add the new test machine, name it exactly as it appears in virt-manager, and assign the newly created libvirt BMC to it. If everything works, the machine should be accessible in the Machines menu.
Note: You can assign multiple machines to the same BMC!