TensorFlow Home

Introduction

The TensorFlow project is hosted out of the LDCG segment group. The work covers strategic frameworks that broadly enable HPC and AI computing, and ranges from basic enablement to optimization efforts aimed at maximizing performance.

Scope

Data Center hardware powered by Arm designs can be found across the ecosystem. From the very high end Fugaku supercomputer to cloud computing and other deployments, these devices can be used for AI training and inference. They are often multi-node, many-core systems, with or without specialized offload.
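
As an illustration of the server-class case, the sketch below checks which TensorFlow build is active on an AArch64 host and whether oneDNN optimizations are enabled. The TF_ENABLE_ONEDNN_OPTS environment variable is a standard TensorFlow toggle (recent aarch64 wheels typically pair oneDNN with Arm Compute Library backends); the rest is plain introspection and nothing specific to this project:

# Minimal sketch, assuming a stock TensorFlow 2.x install on an AArch64 server.
# TF_ENABLE_ONEDNN_OPTS must be set before TensorFlow is imported to take effect.
import os
import platform

os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "1")

import tensorflow as tf

print("machine:", platform.machine())              # expect "aarch64" on Arm servers
print("tensorflow:", tf.__version__)
print("build info:", tf.sysconfig.get_build_info())
print("devices:", tf.config.list_physical_devices())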

Edge devices comprise a wide variety of Cortex-A equipped hardware. These devices might run Android, Linux, or other operating systems. Within this class, the capability to perform AI workloads such as inference varies greatly: on the low end memory may be tight and no offload exists, while on the high end devices can be server-like, with offload and plenty of resources. Since edge devices are Cortex-A based, they can have quite a bit in common with HPC/server hardware, with the exception that one does not generally perform training on an edge device.
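
As a concrete illustration of the inference side, the sketch below runs a TensorFlow Lite model through the standard tf.lite.Interpreter API. The model path and input data are placeholders; on constrained devices the standalone tflite_runtime package is often used in place of the full tensorflow package:

# Minimal sketch, assuming a hypothetical "model.tflite" file on a Cortex-A device.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder model
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

result = interpreter.get_tensor(output_details[0]["index"])
print("output shape:", result.shape)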

See the frameworks page for details on what we are involved in.

Roadmap

TBD…

Current Plan

Edit the macro below and add the appropriate project in the JQL query


Backlog

Edit the macro below and add the appropriate project in the JQL query


Accomplished

Edit the macro below and add the appropriate project in the JQL query


Active Members


@Andrew Goodbody

@Theodore Grey

Project Meetings

Project Contacts

Source Code

Recent PRs / RFC