This page describes a validation plan for OpenHPC deployment on AArch64 systems. It is not restricted to hardware, cloud or emulation systems and should be reasonably independent of the medium.

...

To achieve the final stage of continuous integration, we'll need to go through a set of steps, from a fully manual setup to a fully automated one. Not all steps will need to be done as part of this effort, as many of them already exist for other projects, and we should aim not to duplicate any previous (or future) efforts in achieving our CI loop.


First Iteration

The first steps form the bootstrap process, where we'll define best practices and consult other Linaro teams, the upstream OpenHPC community and the SIG members on the multiple decisions we'll need to take, so that we reach a meaningful validation process without duplicating work or moving in the wrong direction.

...

Once these steps have been reproduced by hand and documented thoroughly, we can start automating the process.

Current Infrastructure

OpenHPC already has a good part of that infrastructure ready. Appendix A of the install guide PDF describes how to extract the automation scripts from the docs-ohpc package.

This is what the OpenHPC CI uses; it runs more than just "make check" tests by using the test-suite-ohpc package.
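As a rough sketch of what reusing that infrastructure could look like, the wrapper below drives the recipe extraction and the test suite from Python. The package names are the upstream ones; the recipe path and the test-suite invocation are assumptions based on the public install guide and will vary by release, OS and provisioner.

  # Sketch: drive the OpenHPC recipe and test suite from a small wrapper.
  # The recipe path and test-suite invocation below are assumptions taken from
  # the public install guide and will differ per release/OS/provisioner.
  import subprocess

  def run(cmd):
      """Echo and run a shell command, failing loudly on error."""
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  def bootstrap_from_docs(recipe="/opt/ohpc/pub/doc/recipes/centos7/aarch64/warewulf/slurm/recipe.sh"):
      run(["yum", "-y", "install", "docs-ohpc"])   # ships the recipe scripts
      run(["bash", recipe])                        # non-interactive install

  def run_test_suite():
      run(["yum", "-y", "install", "test-suite-ohpc"])
      # Placeholder invocation: the suite is configure/make-check driven and
      # normally runs as its own test user.
      run(["su", "-", "ohpc-test", "-c", "cd tests && ./configure && make check"])

  if __name__ == "__main__":
      bootstrap_from_docs()
      run_test_suite()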

Any change we need to make to the automation and testing should first be made in those packages; only if that is not possible should we write our own scripts, ideally shared in a repository of their own.

We may not be able to share our resources with them (license, NDA), but if we do replicate their setup (Jenkins, etc.), we should use the same sources, repositories and configurations.

Acceptance Criteria

The outcomes of this step are:

...

We shouldn't focus on a single one, as all of them are important, but we should prioritise them, pick the most important to do first, and leave the others until step #2 has finished its first iteration.

Current Infrastructure

Right now, what we have is a cloud project on the Linaro Developer Cloud. This allows us to deploy virtual images on AArch64 hardware, create local networks, and try out CentOS+OpenHPC installations and mini clusters.
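For illustration, here is a minimal sketch of deploying such a node with the openstacksdk cloud layer; the cloud, image, flavor and network names are placeholders, not the actual Developer Cloud project values.

  # Minimal sketch: boot an AArch64 CentOS node and a private network on an
  # OpenStack-based cloud (such as the Linaro Developer Cloud) via openstacksdk.
  # Cloud, image, flavor and network names are placeholders.
  import openstack

  def create_head_node():
      conn = openstack.connect(cloud="linaro-developer-cloud")  # from clouds.yaml

      # Private network for the mini cluster (head node + compute nodes).
      net = conn.create_network("ohpc-cluster-net")
      conn.create_subnet(net.id, cidr="10.0.10.0/24", ip_version=4)

      # Boot the head node (SMS) from a CentOS AArch64 image.
      server = conn.create_server(
          "ohpc-sms",
          image="centos-7-aarch64",   # placeholder image name
          flavor="m1.large",          # placeholder flavor
          network="ohpc-cluster-net",
          wait=True,
          auto_ip=True,
      )
      print("head node reachable at", server.public_v4)
      return server

  if __name__ == "__main__":
      create_head_node()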

This is option 2 above, and it's probably the closest we'll get to a production environment, at least at such an early stage. Progress will be updated in Jira.

Acceptance Criteria

The outcomes of this step are:

...

Refining the OpenHPC options

We should be able to specify a few options for the images, regardless of whether they're built on the fly or are pre-defined images.

The options could be:

  • Which compiler to use: GCC, LLVM or a proprietary one available at some URL
  • Which libraries to use for OpenMP, MPI, etc.
  • Additional components (like monitoring, fast networking, etc.)

These options should be available at dispatch time (Jenkins?), and trigger jobs could pick different values for them to spawn a validation matrix on every update event: for instance, a pre-commit hook from Gerrit, a new upstream release, new versions of packages becoming available, etc.
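A sketch of what expanding those options into a dispatch matrix could look like (the option values are illustrative, not an agreed list):

  # Sketch: expand the image options into a validation matrix that a dispatcher
  # (Jenkins or similar) could iterate over. Option values are illustrative.
  from itertools import product

  COMPILERS = ["gnu", "llvm"]                 # plus proprietary ones fetched from a URL
  MPI_STACKS = ["openmpi", "mpich", "mvapich2"]
  EXTRAS = [(), ("monitoring",), ("fast-network",)]

  def build_matrix():
      """Yield one job description per combination of options."""
      for compiler, mpi, extras in product(COMPILERS, MPI_STACKS, EXTRAS):
          yield {"compiler": compiler, "mpi": mpi, "extra_components": list(extras)}

  if __name__ == "__main__":
      for job in build_matrix():
          # In a real setup these would be parameters for a triggered Jenkins job,
          # fired from a Gerrit hook, a new upstream release, package updates, etc.
          print(job)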

Improving test coverage

The framework defined in step #3 is an over-simplified version of what we may need to have, because it expects OpenHPC (or the base OS) to make the package dependency decisions. This strategy won't work if we have to test external packages (for licensing reasons).

OpenHPC has a way to install third-party packages, and we may come up with a packaging scheme that exposes the dependencies of each individual package, but that process needs to be well defined and feasible for all the third-party software the SIG members may want to add.
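As one possible shape for that, each third-party package could ship a small dependency declaration that the harness resolves into an install order; the metadata format below is invented for illustration, not an OpenHPC convention.

  # Sketch: packages declare their dependencies in a small metadata dict and
  # the harness resolves an install order before building/testing them.
  # The format is invented for illustration, not an OpenHPC convention.
  from graphlib import TopologicalSorter  # Python 3.9+

  PACKAGES = {
      # name: set of packages it depends on
      "vendor-blas": set(),
      "vendor-fft": {"vendor-blas"},
      "app-benchmark": {"vendor-blas", "vendor-fft"},
  }

  def install_order(packages):
      """Return the packages in an order that satisfies every dependency."""
      return list(TopologicalSorter(packages).static_order())

  if __name__ == "__main__":
      print(install_order(PACKAGES))
      # -> ['vendor-blas', 'vendor-fft', 'app-benchmark']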

In addition to that improvement, we need to start populating the validation with a lot more tests, hopefully most of them as OpenHPC packages, with their own validation process, pass/fail scripts, etc.

So, while there probably won't be a lot of work on the OpenHPC infrastructure itself, especially for our validation process, there will be substantial upstreaming work to get the packages into the releases with a full validation process.

We may also need a local repository (OpenHPC also allows some of that) for the experimental packages that haven't made it into an upstream release yet.

Benchmarking

The final piece of the puzzle is how to measure performance in a CI loop.

When creating the packages on the task above, we should take care to enable them to run in two modes: validation and benchmark.

The validation style will just run a small subset that hopefully encompasses most (if not all) of the functionality, so that we can get a quick answer on whether those features work or don't work.

The benchmark style will run a subset of those features in larger loops and with internal timers, so that we can print out the run-time (or specific counters per second) into the test output.

Not all programs can take such an intrusive change, so we should also allow for simple "execution time" measurements and cope with the noise by running them multiple times.
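A sketch of how a packaged test could expose the two modes (the workload here is a stand-in for a real kernel):

  # Sketch of the two run modes: "validation" runs a quick functional subset
  # once, "benchmark" runs it in larger loops, repeats the measurement to cope
  # with noise, and reports the median wall-clock time. The workload function
  # is a placeholder for a real packaged test.
  import statistics
  import time

  def workload(iterations):
      """Placeholder for a packaged test kernel (e.g. an MPI/OpenMP solver)."""
      total = 0.0
      for i in range(iterations):
          total += i ** 0.5
      return total

  def run_validation():
      # Small subset: did it work at all?
      workload(10_000)
      return "PASS"

  def run_benchmark(repeats=5, iterations=5_000_000):
      # Larger loops, repeated to average out machine noise.
      timings = []
      for _ in range(repeats):
          start = time.perf_counter()
          workload(iterations)
          timings.append(time.perf_counter() - start)
      return {"median_s": statistics.median(timings), "runs": repeats}

  if __name__ == "__main__":
      print("validation:", run_validation())
      print("benchmark:", run_benchmark())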

Both validation and benchmarking modes need to run with at least GCC and Clang, GOMP and libomp, so that we can identify any issues that arise in due time.

The second part of this task involves aggregating all the data into a database, so that we can track performance regressions.

Benchmark databases are generally large, NoSQL-based and, most of the time, hand-made. Other teams (e.g. the toolchain team) already have extensive experience with benchmarking and tracking, so we should leverage their knowledge and existing tools.
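Purely as an illustration of the aggregation step, the sketch below records results and flags a run that is more than 5% slower than the best previous result for the same configuration; a real deployment would more likely reuse the toolchain team's existing tooling, and the schema here is invented.

  # Sketch: aggregate benchmark results and flag regressions. The sqlite schema
  # is purely illustrative; a real deployment would reuse existing tooling.
  import sqlite3

  SCHEMA = """
  CREATE TABLE IF NOT EXISTS results (
      benchmark TEXT,
      config    TEXT,   -- e.g. "gnu+openmpi"
      commit_id TEXT,
      median_s  REAL
  )
  """

  def record(db, benchmark, config, commit_id, median_s):
      db.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
                 (benchmark, config, commit_id, median_s))

  def is_regression(db, benchmark, config, median_s, tolerance=0.05):
      """Compare against the best previous result for the same configuration."""
      row = db.execute(
          "SELECT MIN(median_s) FROM results WHERE benchmark=? AND config=?",
          (benchmark, config)).fetchone()
      best = row[0]
      return best is not None and median_s > best * (1 + tolerance)

  if __name__ == "__main__":
      db = sqlite3.connect(":memory:")
      db.execute(SCHEMA)
      record(db, "stream", "gnu+openmpi", "abc123", 1.00)
      print(is_regression(db, "stream", "gnu+openmpi", 1.10))  # True: >5% slower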