We're using the OpenStack developer clouds to test cluster deployments, but since that's a new product, it has its own teething issues. Here are a few tricks to make it work for HPC.
Preparing the Environment
Creating a "hopbox" Instance
Since we only have one public IP per lab, we need a machine we can SSH into and then hop to the others. These are the "hopboxes", and they can be any flavour of Linux.
A stable Debian is recommended: you won't have to change it or re-install packages often, and you need a stable and secure setup.
The hopbox machine should be a tiny instance (one core, 512 MB RAM) with sshd running and nothing else. The default Debian sshd config forbids password login, so that's already taken care of.
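If you want to double-check that the running sshd really rejects password logins (assuming you have root on the hopbox), you can dump its effective configuration:

sudo sshd -T | grep -i passwordauthentication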
To SSH into other machines, you'll need two things:
- Add the machine names and their IPs to /etc/hosts on the hopbox, so that once you SSH into the hopbox, you can SSH into the machines via their names (see the example below).
- Add an SSH-hop configuration to your local .ssh/config, so that you can SSH directly into the machines via the hopbox.
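As an illustration, the hopbox's /etc/hosts could look something like this (the names and private IPs are placeholders, use whatever your instances actually have):

10.0.0.11   ohpc-master   ohpc-master.cloud
10.0.0.12   ohpc-node01   ohpc-node01.cloud
10.0.0.13   ohpc-node02   ohpc-node02.cloud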
The SSH config should look something like:
Host hopbox
    User <your-linaro-username>
    HostName <the-public-IP>

Host ohpc-* *.cloud
    User <your-linaro-username>
    StrictHostKeyChecking no   # Only here, not in the hopbox
    ProxyCommand ssh hopbox nc -q0 %h %p 2>/dev/null
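With that in place, a plain ssh from your local machine should hop through the hopbox transparently (ohpc-node01 is just the placeholder name from the /etc/hosts example above):

ssh ohpc-node01
scp results.tar.gz ohpc-node01:/tmp/

On OpenSSH 7.3 or newer, you could also replace the nc-based ProxyCommand with ProxyJump hopbox, which achieves the same hop without needing nc installed on the hopbox.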
The hopbox should be pretty stable and not change its SSH host keys, so if you get a warning that they changed, something is wrong. The safest fix is to log in to the cloud interface and restore the instance to a known snapshot.
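If you'd rather do that from a terminal than from the web dashboard, the OpenStack CLI can rebuild an instance from an image; the snapshot name below is hypothetical:

openstack server rebuild --image hopbox-known-good hopbox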
The internal instances, on the other hand, can change all the time, so it's simpler not to check their host keys at all.
The internal machines can be called whatever you want, but make sure the names match the wildcard patterns (*) in your SSH config. The two common ways are to use a prefix (ex. ohpc-*), a suffix (ex. *.cloud), or both. These names are the ones that you have to put in the hopbox's /etc/hosts file.