You should use the master branch of the MLCommons inference repository for the submission checker:
```bash
cmr "generate inference submission" \
   --clean \
   --preprocess_submission=yes \
   --run-checker \
   --submitter=CTuning \
   --tar=yes \
   --env.CM_TAR_OUTFILE=submission.tar.gz \
   --division=open \
   --category=edge \
   --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
   --quiet
```
- Use `--division=closed` to generate a closed division result.
- Use `--category=datacenter` to generate results for the datacenter category.
- Use the `--hw_notes_extra` option to add your name to the notes, e.g. `--hw_notes_extra="Result taken by NAME"`.
- Use `--hw_name="My system name"` to give a meaningful system name. Examples can be seen here.
- Use `--submitter=<Your name>` if your organization is an official MLCommons member and would like to submit under your organization.
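Combining the options above, a closed-division datacenter submission could be generated with a command along these lines (a sketch: the submitter, hardware name, and notes are placeholders you should replace with your own values):

```shell
cmr "generate inference submission" \
   --clean \
   --preprocess_submission=yes \
   --run-checker \
   --submitter="MyOrg" \
   --tar=yes \
   --env.CM_TAR_OUTFILE=submission.tar.gz \
   --division=closed \
   --category=datacenter \
   --hw_name="My system name" \
   --hw_notes_extra="Result taken by NAME" \
   --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes \
   --quiet
```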
The above command should generate `submission.tar.gz` if there are no submission-checker issues, and you can upload it to the submission UI.
- First, create a fork of this repo.
- If you have not set up your Git config already, please do so:
```bash
git config --global user.name "[YOUR NAME]"
git config --global user.email "[YOUR EMAIL]"
```
- Then run the following command after replacing `--repo_url` with your fork URL.
```bash
cmr "push github mlperf inference submission" \
   --repo_url=https://github.com/ctuning/mlperf_inference_submissions_v4.0 \
   --commit_message="Results on <HW name> added by <Name>" \
   --quiet
```
Create a PR to the cTuning repo
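The PR can be opened from the GitHub web UI; if you prefer the command line, the GitHub `gh` CLI can do it as well (a sketch, assuming `gh` is installed and authenticated, and that your fork branch has already been pushed; title and body are placeholders):

```shell
# Open a pull request from your fork against the cTuning submissions repo.
gh pr create \
  --repo ctuning/mlperf_inference_submissions_v4.0 \
  --title "Results on <HW name> added by <Name>" \
  --body "MLPerf inference submission generated with CM."
```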
You can upload the `submission.tar.gz` file generated by the previous command to the submission UI.
Check out the MLCommons Task Force on Automation and Reproducibility and get in touch via the public Discord server.