AI on Servers Team


Summary

AI has featured in the media for years, with the anticipation that we will eventually have a fully intelligent system: conversant, emotionally aware, and context-sensitive, to name just a few attributes of a full AI.

The focus in industry has been on the low-hanging fruit offered by Machine Learning (ML), a subset of AI. With its massive computational needs and the vast datasets required to create models, ML training requires server-grade solutions.

Historically, ARM-based SoCs have been aimed at the embedded, low-power market, which uses pre-existing models to make inferencing decisions.

Now that Marvell's ThunderX2, Ampere's eMAG, Fujitsu's A64FX and others are available, there are server-grade solutions within the ARM ecosystem.

The immediate requirement is to ensure that solutions in the ARM ecosystem are treated as first-class citizens within the available ML frameworks, such as TensorFlow and PyTorch.
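One practical aspect of first-class support is CI jobs that know when they are running on Arm server hardware before attempting a framework build or test. The sketch below is illustrative only, assuming a Linux host; the function name is our own and not taken from any Linaro repository.

```python
# Minimal sketch: detect whether the current host is a 64-bit Arm
# machine, as a CI job might do before selecting an aarch64 build path.
# 'aarch64' is what Linux reports on 64-bit Arm; macOS reports 'arm64'.
import platform

def is_arm64_host():
    """Return True when running on a 64-bit Arm (AArch64) machine."""
    return platform.machine().lower() in ("aarch64", "arm64")

print(f"Machine: {platform.machine()}, ARM64 host: {is_arm64_host()}")
```

A Jenkins or shell wrapper could branch on this check to pick an aarch64-specific build configuration for TensorFlow or PyTorch.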

With the public and investors being told that 'AI' is being actively worked on, while only the Machine Learning subset is producing results, there is a concern that focus on 'AI' as a whole may dwindle due to over-hype and under-delivery.

Therefore, the second priority, to be carried out in parallel, is to expand solutions beyond the ML envelope. Initially, this means utilising the existing ML frameworks in novel ways to provide a software infrastructure aimed at enabling 'emergent behaviour'. The first step is Spiking Neural Networks.
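To make the Spiking Neural Network direction concrete, a minimal leaky integrate-and-fire (LIF) neuron can be written in a few lines of pure Python. This is a sketch only: the parameter values (time constant, threshold, input current) are arbitrary illustrations, not taken from any project code.

```python
# Minimal leaky integrate-and-fire (LIF) neuron in pure Python.
# The membrane potential leaks towards rest, integrates a constant
# input current, and emits a spike (then resets) on crossing threshold.

def simulate_lif(current, steps=100, dt=1.0, tau=10.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the list of time steps at which the neuron spiked."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Leak towards rest and integrate the input current.
        dv = (-(v - v_rest) + current) * (dt / tau)
        v += dv
        if v >= v_thresh:      # threshold crossed: spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

spike_times = simulate_lif(current=1.5)
print(f"Spiked {len(spike_times)} times, first at step {spike_times[0]}")
```

The same dynamics can be expressed as tensor operations in TensorFlow or PyTorch, which is one way the existing ML frameworks could be reused for SNN experimentation on Arm servers.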

Benefits

If the ARM ecosystem is considered a first-class citizen in the use of existing ML frameworks, this implicitly draws interest to ARM-based hardware and software vendor solutions, increasing the potential for revenue generation.

By leading the way towards the wider field of AI, the ARM ecosystem will be seen as delivering on the previous hype, promoting further investment.

Detailed Description

This page aims to capture links to the various parallel activities that could be drawn upon to achieve the end results. Please feel free to add or edit links to topics that further the priorities summarised above.

NOTE: Whilst this topic has been initiated by the HPC-SIG within LDCG, we welcome members from all Linaro groups to contribute.


Description | Link | Notes
LBI-27 | LBI-27 AI on Servers | Proposal for resource allocation. REQUEST: Team to follow LBI-27 and add comments/tailor the associated document to promote activity.
AI on Servers Team Kickoff meeting | https://docs.google.com/document/d/1U9sEJbTwa8Tf7p-Tvj9PGnbBYpromWeNQDneApXmvIo/edit?ts=5e206136#heading=h.q3lsailew5xb |
LDCG SC/HPC-SIG Kickoff meeting | 2020-01-16 HPC Meeting Agenda/Minutes |
Tensorflow CI | https://github.com/Linaro/hpc_tensorflowci | Script to implement a Jenkins task for building a Tensorflow CI
Pytorch CI | https://github.com/Linaro/hpc_pytorchci | Holding page until CI is implemented - code contributions welcome
AI on Servers primary Github page | https://github.com/Linaro/aionservers | Add your open-source code examples here that promote novel use of ML frameworks aspiring towards the wider topic of AI.
Julia language | https://www.hpcwire.com/2020/01/14/julia-programmings-dramatic-rise-in-hpc-and-elsewhere/ | Information on the rise of Julia for HPC use. (Naturally, other languages may be used.)




