MLPerf Inference
This section prepares the virtual environment used to run the benchmarks. It is common to all benchmarks and only needs to be performed once.
Install the prerequisites
cd ~
mkdir tf_venv
cd tf_venv
python3 -m venv .
source bin/activate
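# Optional sanity check: after activation, the venv's interpreter should be
# first on PATH (here, ~/tf_venv/bin/python).
which python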
python -m pip install --upgrade pip wheel
python -m pip install google protobuf==3.19.4
python -m pip install cython absl-py pillow
python -m pip install --extra-index-url https://snapshots.linaro.org/ldcg/python-cache/ numpy==1.19.5
python -m pip install --extra-index-url https://snapshots.linaro.org/ldcg/python-cache/ matplotlib
python -m pip install --no-binary pycocotools pycocotools
python -m pip install ck
ck pull repo:ck-env
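# Optional: confirm the repository was registered with CK (ck list repo is
# part of the stock CK command-line interface).
ck list repo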
python -m pip install scikit-build
python -m pip install --extra-index-url https://snapshots.linaro.org/ldcg/python-cache/ tensorflow-io-gcs-filesystem==0.21.0 h5py==3.1.0
python -m pip install tensorflow-aarch64==2.7.0
The last command above can be changed to point at the version of TensorFlow that you wish to benchmark. The virtual environment can be created wherever you like and named whatever you like.
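To confirm that TensorFlow installed correctly, a quick import check like the one below should print the version you selected (2.7.0 with the commands above):
python -c 'import tensorflow as tf; print(tf.__version__)'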
Install and build MLPerf Inference
cd ~/src
git clone https://github.com/mlcommons/inference.git
cd inference
git checkout r1.1
git cherry-pick -n 215c057fc6690a47f3f66c72c076a8f73d66cb12
git submodule update --init --recursive
cd loadgen
CFLAGS="-std=c++14 -Wp,-U_GLIBCXX_ASSERTIONS" python setup.py develop
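# Optional: check that the loadgen Python bindings built and import cleanly;
# mlperf_loadgen is the module name registered by loadgen's setup.py.
python -c "import mlperf_loadgen"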
cd ../vision/classification_and_detection/
python setup.py develop
Installing and building MLPerf Inference should be done while the virtual environment created above is active. Of course, the checkout can live wherever you like; it does not need to be in ~/src.
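In later sessions, reactivate the virtual environment before running any benchmark. The path below assumes the ~/tf_venv location used above; adjust it if you placed the venv elsewhere:
source ~/tf_venv/bin/activate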