We use `cm docker` to run the Intel implementation inside a container and avoid compilation problems caused by host OS dependencies.
```bash
cm docker script --tags=run-mlperf,inference,_find-performance \
    --scenario=Offline --model=bert-99 --implementation=intel-original --backend=pytorch \
    --category=datacenter --division=open --quiet
```
- Use `--division=closed` to run all scenarios for the closed division (compliance tests are skipped in `_find-performance` mode); see the variant sketched below.
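For example, the find-performance run for the closed division only swaps the `--division` flag; every other flag is unchanged from the command above:

```bash
# Same find-performance run, but for the closed division
cm docker script --tags=run-mlperf,inference,_find-performance \
    --scenario=Offline --model=bert-99 --implementation=intel-original --backend=pytorch \
    --category=datacenter --division=closed --quiet
```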
To do full accuracy and performance runs for all scenarios and produce submission-ready results:

```bash
cm docker script --tags=run-mlperf,inference,_submission,_all-scenarios \
    --model=bert-99 --implementation=intel-original --backend=pytorch \
    --category=datacenter --division=open --quiet
```
Follow this guide to generate the submission tree and upload your results.
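If you want to script this step with CM as well, a minimal sketch is shown below. It assumes the CM submission-generation script (tags `generate,inference,submission`) and uses illustrative `--results_dir`/`--submission_dir` paths; the guide linked above has the authoritative command and flags:

```bash
# Sketch only: paths are placeholders and flag names assume the CM
# generate-mlperf-inference-submission script; verify against the guide.
cm run script --tags=generate,inference,submission \
    --results_dir=$HOME/results_dir/valid_results \
    --submission_dir=$HOME/inference_submission_tree \
    --division=open --category=datacenter --quiet
```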
Check out the MLCommons Task Force on Automation and Reproducibility and get in touch via the public Discord server.