Update the open dataset requirement #285

Merged: 1 commit, Nov 28, 2023
7 changes: 3 additions & 4 deletions inference_rules.adoc
@@ -322,14 +322,13 @@ For 3DUNet, the logical destination for the benchmark output is considered to be
==== Relaxed constraints for the Open division

1. An Open benchmark must perform a task matching an existing Closed benchmark, and be substitutable in LoadGen for that benchmark.
-1. The accuracy dataset must be the same as used in an existing Closed benchmark, or must be pre-approved and added to the following list: ImageNet 2012 validation dataset for Image Classification; COCO 2017 validation dataset for Object Detection. From v3.0, if a submitter provides any results with any models trained on a pre-approved dataset,
-the submitter must also provide at least one result with the corresponding Closed model trained
-(or finetuned) on the same pre-approved dataset, and instructions to reproduce the training (or finetuning) process.
+1. The validation dataset must be the same as used in an existing Closed benchmark, or must be pre-approved and added to the following list: ImageNet 2012 validation dataset for Image Classification; COCO 2017 validation dataset for Object Detection.
+When seeking such pre-approval, it is recommended that a potential submitter convincingly demonstrates the accuracy of the corresponding Closed model on the same validation dataset, which may involve retraining or finetuning the Closed model if required.
1. Accuracy constraints are not applicable: instead the submission must report the accuracy obtained.
1. Latency constraints are not applicable: instead the submission must report the latency constraints under which the reported performance was obtained.
1. Scenario constraints are not applicable: any combination of scenarios is permitted.
1. An Open submission must be classified as "Available", "Preview", or "Research, Development, or Internal".
-1. The model can be of any origin (trained on any dataset, quantized in any way, and sparsified in anyway).
+1. The model can be of any origin (trained on any dataset, except the validation dataset; quantized in any way; sparsified in any way).

==== Additional inference parameters

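For context on the first relaxed constraint for the Open division ("be substitutable in LoadGen for that benchmark"), the sketch below shows one way an Open model could be driven by the MLPerf LoadGen harness through its Python bindings (`mlperf_loadgen`): LoadGen only interacts with the SUT and QSL callbacks, so any model exposing the same callbacks can stand in for the corresponding Closed model. This is a minimal sketch under stated assumptions, not a reference implementation; `load_input` and `run_model` are hypothetical placeholders for the submitter's own preprocessing and Open model, and the sample counts are illustrative.

```python
# Minimal sketch: wiring an Open-division model into the LoadGen harness.
# Assumes the mlperf_loadgen Python bindings and numpy are installed.
# `load_input` and `run_model` are hypothetical stand-ins for the
# submitter's own preprocessing and Open model.
import array

import numpy as np
import mlperf_loadgen as lg


def load_input(index):
    # Hypothetical loader: return a preprocessed validation sample.
    return np.random.rand(3, 224, 224).astype(np.float32)


def run_model(x):
    # Hypothetical Open model: any origin, any quantization or sparsity.
    return x.mean(axis=(1, 2))


samples = {}  # index -> preprocessed input, managed by the QSL callbacks


def load_samples(indices):
    for i in indices:
        samples[i] = load_input(i)


def unload_samples(indices):
    for i in indices:
        samples.pop(i, None)


def issue_queries(query_samples):
    # SUT callback: run inference and report each completed query to LoadGen.
    responses = []
    buffers = []  # keep response buffers alive until LoadGen has copied them
    for qs in query_samples:
        output = run_model(samples[qs.index])
        buf = array.array("B", output.tobytes())
        buffers.append(buf)
        addr, length = buf.buffer_info()
        responses.append(lg.QuerySampleResponse(qs.id, addr, length))
    lg.QuerySamplesComplete(responses)


def flush_queries():
    pass


settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline
settings.mode = lg.TestMode.PerformanceOnly

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(5000, 1024, load_samples, unload_samples)  # illustrative counts
lg.StartTest(sut, qsl, settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```

In practice a submitter would typically reuse an existing Closed benchmark harness and swap only the model backend, so the LoadGen settings, scenarios, and logging stay identical to the Closed run.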