From dd2daa85c814a4c7dc21dac5958456ab647bb52b Mon Sep 17 00:00:00 2001 From: cTuning Date: Fri, 8 Mar 2024 14:06:25 +0000 Subject: [PATCH] Updated docs --- .../README.md | 64 +-- .../app-mlperf-inference-dummy/README.md | 375 ++++++++++++++++++ .../app-mlperf-inference-intel/README.md | 20 +- .../README.md | 14 +- .../README.md | 14 +- .../app-mlperf-inference-nvidia/README.md | 30 +- .../app-mlperf-inference-qualcomm/README.md | 16 +- .../script/app-mlperf-inference/README.md | 39 +- .../script/run-mlperf-inference-app/README.md | 2 +- 9 files changed, 477 insertions(+), 97 deletions(-) create mode 100644 cm-mlops/script/app-mlperf-inference-dummy/README.md diff --git a/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite/README.md b/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite/README.md index 17ae47fd92..f1f55ad182 100644 --- a/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite/README.md +++ b/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite/README.md @@ -30,7 +30,7 @@ * Category: *Modular MLPerf inference benchmark pipeline.* * CM GitHub repository: *[mlcommons@ck](https://github.com/mlcommons/ck/tree/master/cm-mlops)* -* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-tflite-cpp)* +* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite)* * CM meta description for this script: *[_cm.json](_cm.json)* * CM "database" tags to find this script: *app,mlperf,inference,tflite-cpp* * Output cached? *False* @@ -112,9 +112,9 @@ ___ - Environment variables: - *CM_MLPERF_BACKEND*: `armnn_tflite` - *CM_MLPERF_BACKEND_VERSION*: `<<>>` - - *CM_TMP_SRC_FOLDER*: `armnn` - - *CM_TMP_LINK_LIBS*: `tensorflowlite,armnn,armnnTfLiteParser` - *CM_MLPERF_SUT_NAME_IMPLEMENTATION_PREFIX*: `tflite_armnn_cpp` + - *CM_TMP_LINK_LIBS*: `tensorflowlite,armnn,armnnTfLiteParser` + - *CM_TMP_SRC_FOLDER*: `armnn` - Workflow: @@ -132,8 +132,8 @@ ___ - Environment variables: - *CM_MLPERF_BACKEND*: `tflite` - *CM_MLPERF_BACKEND_VERSION*: `master` - - *CM_TMP_SRC_FOLDER*: `src` - *CM_TMP_LINK_LIBS*: `tensorflowlite` + - *CM_TMP_SRC_FOLDER*: `src` - Workflow: @@ -194,13 +194,13 @@ ___ * `_use-neon` - Environment variables: - - *CM_MLPERF_TFLITE_USE_NEON*: `1` - *CM_MLPERF_SUT_NAME_RUN_CONFIG_SUFFIX1*: `using_neon` + - *CM_MLPERF_TFLITE_USE_NEON*: `1` - Workflow: * `_use-opencl` - Environment variables: - - *CM_MLPERF_TFLITE_USE_OPENCL*: `1` - *CM_MLPERF_SUT_NAME_RUN_CONFIG_SUFFIX1*: `using_opencl` + - *CM_MLPERF_TFLITE_USE_OPENCL*: `1` - Workflow: @@ -216,13 +216,13 @@ ___ - Workflow: * `_int8` - Environment variables: - - *CM_MLPERF_MODEL_PRECISION*: `int8` - *CM_DATASET_COMPRESSED*: `on` + - *CM_MLPERF_MODEL_PRECISION*: `int8` - Workflow: * `_uint8` - Environment variables: - - *CM_MLPERF_MODEL_PRECISION*: `uint8` - *CM_DATASET_COMPRESSED*: `on` + - *CM_MLPERF_MODEL_PRECISION*: `uint8` - Workflow: @@ -261,21 +261,21 @@ r=cm.access({... , "compressed_dataset":...} These keys can be updated via `--env.KEY=VALUE` or `env` dictionary in `@input.json` or using script flags. 
-* CM_MLPERF_OUTPUT_DIR: `.` -* CM_MLPERF_LOADGEN_SCENARIO: `SingleStream` +* CM_DATASET_COMPRESSED: `off` +* CM_DATASET_INPUT_SQUARE_SIDE: `224` +* CM_FAST_COMPILATION: `yes` * CM_LOADGEN_BUFFER_SIZE: `1024` * CM_MLPERF_LOADGEN_MODE: `accuracy` -* CM_FAST_COMPILATION: `yes` -* CM_DATASET_INPUT_SQUARE_SIDE: `224` -* CM_DATASET_COMPRESSED: `off` -* CM_ML_MODEL_NORMALIZE_DATA: `0` -* CM_ML_MODEL_SUBTRACT_MEANS: `1` -* CM_ML_MODEL_GIVEN_CHANNEL_MEANS: `123.68 116.78 103.94` +* CM_MLPERF_LOADGEN_SCENARIO: `SingleStream` * CM_MLPERF_LOADGEN_TRIGGER_COLD_RUN: `0` -* CM_VERBOSE: `0` +* CM_MLPERF_OUTPUT_DIR: `.` +* CM_MLPERF_SUT_NAME_IMPLEMENTATION_PREFIX: `tflite_cpp` * CM_MLPERF_TFLITE_USE_NEON: `0` * CM_MLPERF_TFLITE_USE_OPENCL: `0` -* CM_MLPERF_SUT_NAME_IMPLEMENTATION_PREFIX: `tflite_cpp` +* CM_ML_MODEL_GIVEN_CHANNEL_MEANS: `123.68 116.78 103.94` +* CM_ML_MODEL_NORMALIZE_DATA: `0` +* CM_ML_MODEL_SUBTRACT_MEANS: `1` +* CM_VERBOSE: `0` @@ -285,7 +285,7 @@ ___
Click here to expand this section. - 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-tflite-cpp/_cm.json)*** + 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite/_cm.json)*** * detect,os - CM script: [detect-os](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/detect-os) * detect,cpu @@ -302,19 +302,19 @@ ___ * CM names: `--adr.['inference-src']...` - CM script: [get-mlperf-inference-src](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-mlperf-inference-src) * get,ml-model,mobilenet,raw,_tflite - * `if (CM_MODEL == mobilenet AND CM_MLPERF_BACKEND in ['tflite', 'armnn_tflite'])` + * `if (CM_MLPERF_BACKEND in ['tflite', 'armnn_tflite'] AND CM_MODEL == mobilenet)` * CM names: `--adr.['ml-model', 'tflite-model', 'mobilenet-model']...` - CM script: [get-ml-model-mobilenet](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-ml-model-mobilenet) * get,ml-model,resnet50,raw,_tflite,_no-argmax - * `if (CM_MODEL == resnet50 AND CM_MLPERF_BACKEND in ['tflite', 'armnn_tflite'])` + * `if (CM_MLPERF_BACKEND in ['tflite', 'armnn_tflite'] AND CM_MODEL == resnet50)` * CM names: `--adr.['ml-model', 'tflite-model', 'resnet50-model']...` - CM script: [get-ml-model-resnet50](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-ml-model-resnet50) * get,ml-model,resnet50,raw,_tf - * `if (CM_MODEL == resnet50 AND CM_MLPERF_BACKEND == tf)` + * `if (CM_MLPERF_BACKEND == tf AND CM_MODEL == resnet50)` * CM names: `--adr.['ml-model', 'tflite-model', 'resnet50-model']...` - CM script: [get-ml-model-resnet50](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-ml-model-resnet50) * get,ml-model,efficientnet,raw,_tflite - * `if (CM_MODEL == efficientnet AND CM_MLPERF_BACKEND in ['tflite', 'armnn_tflite'])` + * `if (CM_MLPERF_BACKEND in ['tflite', 'armnn_tflite'] AND CM_MODEL == efficientnet)` * CM names: `--adr.['ml-model', 'tflite-model', 'efficientnet-model']...` - CM script: [get-ml-model-efficientnet-lite](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-ml-model-efficientnet-lite) * get,tensorflow,lib,_tflite @@ -323,31 +323,31 @@ ___ * `if (CM_MLPERF_TFLITE_USE_ARMNN == yes)` * CM names: `--adr.['armnn', 'lib-armnn']...` - CM script: [get-lib-armnn](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-lib-armnn) - 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-tflite-cpp/customize.py)*** - 1. ***Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-tflite-cpp/_cm.json)*** + 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite/customize.py)*** + 1. 
***Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite/_cm.json)*** * generate,user-conf,mlperf,inference * CM names: `--adr.['user-conf-generator']...` - CM script: [generate-mlperf-inference-user-conf](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/generate-mlperf-inference-user-conf) * get,dataset,preprocessed,imagenet,_for.resnet50,_rgb32,_NHWC - * `if (CM_MODEL == resnet50 AND CM_MLPERF_SKIP_RUN == no) AND (CM_DATASET_COMPRESSED != on)` + * `if (CM_MLPERF_SKIP_RUN == no AND CM_MODEL == resnet50) AND (CM_DATASET_COMPRESSED != on)` * CM names: `--adr.['imagenet-preprocessed', 'preprocessed-dataset']...` - CM script: [get-preprocessed-dataset-imagenet](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-preprocessed-dataset-imagenet) * get,dataset,preprocessed,imagenet,_for.mobilenet,_rgb32,_NHWC - * `if (CM_MODEL in ['mobilenet', 'efficientnet'] AND CM_MLPERF_SKIP_RUN == no) AND (CM_DATASET_COMPRESSED != on)` + * `if (CM_MLPERF_SKIP_RUN == no AND CM_MODEL in ['mobilenet', 'efficientnet']) AND (CM_DATASET_COMPRESSED != on)` * CM names: `--adr.['imagenet-preprocessed', 'preprocessed-dataset']...` - CM script: [get-preprocessed-dataset-imagenet](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-preprocessed-dataset-imagenet) * get,dataset,preprocessed,imagenet,_for.mobilenet,_rgb8,_NHWC - * `if (CM_MODEL in ['mobilenet', 'efficientnet'] AND CM_DATASET_COMPRESSED == on AND CM_MLPERF_SKIP_RUN == no)` + * `if (CM_DATASET_COMPRESSED == on AND CM_MLPERF_SKIP_RUN == no AND CM_MODEL in ['mobilenet', 'efficientnet'])` * CM names: `--adr.['imagenet-preprocessed', 'preprocessed-dataset']...` - CM script: [get-preprocessed-dataset-imagenet](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-preprocessed-dataset-imagenet) * get,dataset,preprocessed,imagenet,_for.resnet50,_rgb8,_NHWC - * `if (CM_MODEL == resnet50 AND CM_DATASET_COMPRESSED == on AND CM_MLPERF_SKIP_RUN == no)` + * `if (CM_DATASET_COMPRESSED == on AND CM_MLPERF_SKIP_RUN == no AND CM_MODEL == resnet50)` * CM names: `--adr.['imagenet-preprocessed', 'preprocessed-dataset']...` - CM script: [get-preprocessed-dataset-imagenet](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-preprocessed-dataset-imagenet) 1. ***Run native script if exists*** - 1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-tflite-cpp/_cm.json) - 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-tflite-cpp/customize.py)*** - 1. ***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-tflite-cpp/_cm.json)*** + 1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite/_cm.json) + 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite/customize.py)*** + 1. 
***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite/_cm.json)*** * compile,program * `if (CM_MLPERF_SKIP_RUN != yes)` * CM names: `--adr.['compiler-program']...` diff --git a/cm-mlops/script/app-mlperf-inference-dummy/README.md b/cm-mlops/script/app-mlperf-inference-dummy/README.md new file mode 100644 index 0000000000..4d1a89773b --- /dev/null +++ b/cm-mlops/script/app-mlperf-inference-dummy/README.md @@ -0,0 +1,375 @@ +
+Click here to see the table of contents.
+
+* [About](#about)
+* [Summary](#summary)
+* [Reuse this script in your project](#reuse-this-script-in-your-project)
+  * [ Install CM automation language](#install-cm-automation-language)
+  * [ Pull CM repository with this automation](#pull-cm-repository-with-this-automation)
+  * [ Run this script from command line](#run-this-script-from-command-line)
+  * [ Run this script from Python](#run-this-script-from-python)
+  * [ Run this script via GUI](#run-this-script-via-gui)
+  * [ Run this script via Docker (beta)](#run-this-script-via-docker-beta)
+* [Customization](#customization)
+  * [ Variations](#variations)
+  * [ Script flags mapped to environment](#script-flags-mapped-to-environment)
+  * [ Default environment](#default-environment)
+* [Script workflow, dependencies and native scripts](#script-workflow-dependencies-and-native-scripts)
+* [Script output](#script-output)
+* [New environment keys (filter)](#new-environment-keys-filter)
+* [New environment keys auto-detected from customize](#new-environment-keys-auto-detected-from-customize)
+* [Maintainers](#maintainers)
+
+</details>
+ +*Note that this README is automatically generated - don't edit!* + +### About + +#### Summary + +* Category: *Modular MLPerf benchmarks.* +* CM GitHub repository: *[mlcommons@ck](https://github.com/mlcommons/ck/tree/master/cm-mlops)* +* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-dummy)* +* CM meta description for this script: *[_cm.yaml](_cm.yaml)* +* CM "database" tags to find this script: *reproduce,mlcommons,mlperf,inference,harness,redhat-harness,redhat,openshift-harness,openshift* +* Output cached? *False* +___ +### Reuse this script in your project + +#### Install CM automation language + +* [Installation guide](https://github.com/mlcommons/ck/blob/master/docs/installation.md) +* [CM intro](https://doi.org/10.5281/zenodo.8105339) + +#### Pull CM repository with this automation + +```cm pull repo mlcommons@ck``` + + +#### Run this script from command line + +1. `cm run script --tags=reproduce,mlcommons,mlperf,inference,harness,redhat-harness,redhat,openshift-harness,openshift[,variations] [--input_flags]` + +2. `cmr "reproduce mlcommons mlperf inference harness redhat-harness redhat openshift-harness openshift[ variations]" [--input_flags]` + +* `variations` can be seen [here](#variations) + +* `input_flags` can be seen [here](#script-flags-mapped-to-environment) + +#### Run this script from Python + +
+Click here to expand this section.
+
+```python
+
+import cmind
+
+r = cmind.access({'action':'run',
+                  'automation':'script',
+                  'tags':'reproduce,mlcommons,mlperf,inference,harness,redhat-harness,redhat,openshift-harness,openshift',
+                  'out':'con',
+                  ...
+                  (other input keys for this script)
+                  ...
+                 })
+
+if r['return']>0:
+   print (r['error'])
+
+```
+
+</details>
+ + +#### Run this script via GUI + +```cmr "cm gui" --script="reproduce,mlcommons,mlperf,inference,harness,redhat-harness,redhat,openshift-harness,openshift"``` + +Use this [online GUI](https://cKnowledge.org/cm-gui/?tags=reproduce,mlcommons,mlperf,inference,harness,redhat-harness,redhat,openshift-harness,openshift) to generate CM CMD. + +#### Run this script via Docker (beta) + +`cm docker script "reproduce mlcommons mlperf inference harness redhat-harness redhat openshift-harness openshift[ variations]" [--input_flags]` + +___ +### Customization + + +#### Variations + + * *Internal group (variations should not be selected manually)* +
+ Click here to expand this section. + + * `_bert_` + - Workflow: + * `_gptj_` + - Workflow: + 1. ***Read "deps" on other CM scripts*** + * get,ml-model,gptj + * CM names: `--adr.['gptj-model']...` + - CM script: [get-ml-model-gptj](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-ml-model-gptj) + * get,dataset,cnndm,_validation + - CM script: [get-dataset-cnndm](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-dataset-cnndm) + * `_llama2-70b_` + - Workflow: + +
+ + + * *No group (any variation can be selected)* +
+ Click here to expand this section. + + * `_pytorch,cpu` + - Workflow: + 1. ***Read "deps" on other CM scripts*** + * get,generic-python-lib,_torch + - CM script: [get-generic-python-lib](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-generic-python-lib) + * `_pytorch,cuda` + - Workflow: + 1. ***Read "deps" on other CM scripts*** + * get,generic-python-lib,_torch_cuda + - CM script: [get-generic-python-lib](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-generic-python-lib) + * `_singlestream,resnet50` + - Workflow: + * `_singlestream,retinanet` + - Workflow: + +
+ + + * Group "**backend**" +
+ Click here to expand this section. + + * **`_pytorch`** (default) + - Environment variables: + - *CM_MLPERF_BACKEND*: `pytorch` + - Workflow: + +
+ + + * Group "**batch-size**" +
+ Click here to expand this section. + + * `_bs.#` + - Workflow: + +
+ + + * Group "**device**" +
+ Click here to expand this section. + + * **`_cpu`** (default) + - Environment variables: + - *CM_MLPERF_DEVICE*: `cpu` + - Workflow: + * `_cuda` + - Environment variables: + - *CM_MLPERF_DEVICE*: `gpu` + - *CM_MLPERF_DEVICE_LIB_NAMESPEC*: `cudart` + - Workflow: + +
+ + + * Group "**loadgen-scenario**" +
+ Click here to expand this section. + + * `_multistream` + - Environment variables: + - *CM_MLPERF_LOADGEN_SCENARIO*: `MultiStream` + - Workflow: + * `_offline` + - Environment variables: + - *CM_MLPERF_LOADGEN_SCENARIO*: `Offline` + - Workflow: + * `_server` + - Environment variables: + - *CM_MLPERF_LOADGEN_SCENARIO*: `Server` + - Workflow: + * `_singlestream` + - Environment variables: + - *CM_MLPERF_LOADGEN_SCENARIO*: `SingleStream` + - Workflow: + +
+ + + * Group "**model**" +
+ Click here to expand this section. + + * `_bert-99` + - Environment variables: + - *CM_MODEL*: `bert-99` + - *CM_SQUAD_ACCURACY_DTYPE*: `float32` + - Workflow: + * `_bert-99.9` + - Environment variables: + - *CM_MODEL*: `bert-99.9` + - Workflow: + * `_gptj-99` + - Environment variables: + - *CM_MODEL*: `gptj-99` + - *CM_SQUAD_ACCURACY_DTYPE*: `float32` + - Workflow: + * `_gptj-99.9` + - Environment variables: + - *CM_MODEL*: `gptj-99.9` + - Workflow: + * `_llama2-70b-99` + - Environment variables: + - *CM_MODEL*: `llama2-70b-99` + - Workflow: + * `_llama2-70b-99.9` + - Environment variables: + - *CM_MODEL*: `llama2-70b-99.9` + - Workflow: + * **`_resnet50`** (default) + - Environment variables: + - *CM_MODEL*: `resnet50` + - Workflow: + * `_retinanet` + - Environment variables: + - *CM_MODEL*: `retinanet` + - Workflow: + +
+ + + * Group "**precision**" +
+ Click here to expand this section. + + * `_fp16` + - Workflow: + * `_fp32` + - Workflow: + * `_uint8` + - Workflow: + +
+ + +#### Default variations + +`_cpu,_pytorch,_resnet50` + +#### Script flags mapped to environment +
+Click here to expand this section.
+
+* `--count=value` → `CM_MLPERF_LOADGEN_QUERY_COUNT=value`
+* `--max_batchsize=value` → `CM_MLPERF_LOADGEN_MAX_BATCHSIZE=value`
+* `--mlperf_conf=value` → `CM_MLPERF_CONF=value`
+* `--mode=value` → `CM_MLPERF_LOADGEN_MODE=value`
+* `--multistream_target_latency=value` → `CM_MLPERF_LOADGEN_MULTISTREAM_TARGET_LATENCY=value`
+* `--offline_target_qps=value` → `CM_MLPERF_LOADGEN_OFFLINE_TARGET_QPS=value`
+* `--output_dir=value` → `CM_MLPERF_OUTPUT_DIR=value`
+* `--performance_sample_count=value` → `CM_MLPERF_LOADGEN_PERFORMANCE_SAMPLE_COUNT=value`
+* `--rerun=value` → `CM_RERUN=value`
+* `--results_repo=value` → `CM_MLPERF_INFERENCE_RESULTS_REPO=value`
+* `--scenario=value` → `CM_MLPERF_LOADGEN_SCENARIO=value`
+* `--server_target_qps=value` → `CM_MLPERF_LOADGEN_SERVER_TARGET_QPS=value`
+* `--singlestream_target_latency=value` → `CM_MLPERF_LOADGEN_SINGLESTREAM_TARGET_LATENCY=value`
+* `--skip_preprocess=value` → `CM_SKIP_PREPROCESS_DATASET=value`
+* `--skip_preprocessing=value` → `CM_SKIP_PREPROCESS_DATASET=value`
+* `--target_latency=value` → `CM_MLPERF_LOADGEN_TARGET_LATENCY=value`
+* `--target_qps=value` → `CM_MLPERF_LOADGEN_TARGET_QPS=value`
+* `--user_conf=value` → `CM_MLPERF_USER_CONF=value`
+
+**The above CLI flags can be used in the Python CM API as follows:**
+
+```python
+r=cm.access({... , "count":...})
+```
+
+</details>
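+For illustration, a fuller sketch of the same call through the Python API (the tag string comes from this script; the chosen flag values are assumptions):
+
+```python
+import cmind
+
+# Input flags from the table above become top-level keys of the input dictionary:
+r = cmind.access({'action': 'run',
+                  'automation': 'script',
+                  'tags': 'reproduce,mlcommons,mlperf,inference,harness,redhat-harness,redhat,openshift-harness,openshift',
+                  'out': 'con',
+                  'mode': 'performance',  # maps to CM_MLPERF_LOADGEN_MODE
+                  'scenario': 'Offline',  # maps to CM_MLPERF_LOADGEN_SCENARIO
+                  'count': '100'})        # maps to CM_MLPERF_LOADGEN_QUERY_COUNT
+
+if r['return'] > 0:
+    print(r['error'])
+```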
+ +#### Default environment + +
+Click here to expand this section.
+
+These keys can be updated via `--env.KEY=VALUE`, via an `env` dictionary in `@input.json`, or using script flags.
+
+* CM_MLPERF_LOADGEN_SCENARIO: `Offline`
+* CM_MLPERF_LOADGEN_MODE: `performance`
+* CM_SKIP_PREPROCESS_DATASET: `no`
+* CM_SKIP_MODEL_DOWNLOAD: `no`
+* CM_MLPERF_SUT_NAME_IMPLEMENTATION_PREFIX: `redhat_openshift`
+* CM_MLPERF_SKIP_RUN: `no`
+
+</details>
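+For example, a minimal sketch that overrides two of the defaults above through the `env` dictionary of the Python API (assuming it behaves like `--env.KEY=VALUE` on the command line):
+
+```python
+import cmind
+
+# Override the default LoadGen scenario and mode for this run:
+r = cmind.access({'action': 'run',
+                  'automation': 'script',
+                  'tags': 'reproduce,mlcommons,mlperf,inference,harness,redhat-harness,redhat,openshift-harness,openshift',
+                  'out': 'con',
+                  'env': {'CM_MLPERF_LOADGEN_SCENARIO': 'SingleStream',
+                          'CM_MLPERF_LOADGEN_MODE': 'accuracy'}})
+
+if r['return'] > 0:
+    print(r['error'])
+```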
+ +___ +### Script workflow, dependencies and native scripts + +
+Click here to expand this section.
+
+  1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-dummy/_cm.yaml)***
+     * detect,os
+       - CM script: [detect-os](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/detect-os)
+     * detect,cpu
+       - CM script: [detect-cpu](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/detect-cpu)
+     * get,sys-utils-cm
+       - CM script: [get-sys-utils-cm](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-sys-utils-cm)
+     * get,mlcommons,inference,src
+       * CM names: `--adr.['inference-src']...`
+       - CM script: [get-mlperf-inference-src](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-mlperf-inference-src)
+     * get,mlcommons,inference,loadgen
+       * CM names: `--adr.['inference-loadgen']...`
+       - CM script: [get-mlperf-inference-loadgen](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-mlperf-inference-loadgen)
+     * generate,user-conf,mlperf,inference
+       * CM names: `--adr.['user-conf-generator']...`
+       - CM script: [generate-mlperf-inference-user-conf](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/generate-mlperf-inference-user-conf)
+     * get,generic-python-lib,_mlperf_logging
+       * CM names: `--adr.['mlperf-logging']...`
+       - CM script: [get-generic-python-lib](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-generic-python-lib)
+     * get,git,repo
+       * CM names: `--adr.inference-results...`
+       - CM script: [get-git-repo](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-git-repo)
+  1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-dummy/customize.py)***
+  1. Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-dummy/_cm.yaml)
+  1. ***Run native script if exists***
+     * [run.sh](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-dummy/run.sh)
+  1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-dummy/_cm.yaml)
+  1. ***Run "postprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-dummy/customize.py)***
+  1. ***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-dummy/_cm.yaml)***
+     * benchmark-mlperf
+       * `if (CM_MLPERF_SKIP_RUN not in ['yes', True])`
+       * CM names: `--adr.['runner', 'mlperf-runner']...`
+       - CM script: [benchmark-program-mlperf](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/benchmark-program-mlperf)
+     * save,mlperf,inference,state
+       * CM names: `--adr.['save-mlperf-inference-state']...`
+       - CM script: [save-mlperf-inference-implementation-state](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/save-mlperf-inference-implementation-state)
+</details>
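+The `CM names` listed above can be used to customize individual dependencies. A hypothetical sketch (the name `inference-src` comes from the list above; the `adr` input key and the variation tag are illustrative assumptions):
+
+```python
+import cmind
+
+# Pass extra tags to the "inference-src" dependency via add-deps-recursive (adr):
+r = cmind.access({'action': 'run',
+                  'automation': 'script',
+                  'tags': 'reproduce,mlcommons,mlperf,inference,harness,redhat-harness,redhat,openshift-harness,openshift',
+                  'out': 'con',
+                  'adr': {'inference-src': {'tags': '_branch.master'}}})
+
+if r['return'] > 0:
+    print(r['error'])
+```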
+ +___ +### Script output +`cmr "reproduce mlcommons mlperf inference harness redhat-harness redhat openshift-harness openshift[,variations]" [--input_flags] -j` +#### New environment keys (filter) + +* `CM_DATASET_*` +* `CM_HW_NAME` +* `CM_IMAGENET_ACCURACY_DTYPE` +* `CM_MAX_EXAMPLES` +* `CM_MLPERF_*` +* `CM_ML_MODEL_*` +* `CM_SQUAD_ACCURACY_DTYPE` +#### New environment keys auto-detected from customize + +___ +### Maintainers + +* [Open MLCommons taskforce on automation and reproducibility](https://github.com/mlcommons/ck/blob/master/docs/taskforce.md) \ No newline at end of file diff --git a/cm-mlops/script/app-mlperf-inference-intel/README.md b/cm-mlops/script/app-mlperf-inference-intel/README.md index d1753b43ad..dff64a4352 100644 --- a/cm-mlops/script/app-mlperf-inference-intel/README.md +++ b/cm-mlops/script/app-mlperf-inference-intel/README.md @@ -30,7 +30,7 @@ * Category: *Modular MLPerf benchmarks.* * CM GitHub repository: *[mlcommons@ck](https://github.com/mlcommons/ck/tree/master/cm-mlops)* -* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-intel)* +* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-intel)* * CM meta description for this script: *[_cm.yaml](_cm.yaml)* * CM "database" tags to find this script: *reproduce,mlcommons,mlperf,inference,harness,intel-harness,intel,intel-harness,intel* * Output cached? *False* @@ -401,7 +401,7 @@ ___ 1. ***Read "deps" on other CM scripts*** * reproduce,mlperf,inference,intel,harness,_build-harness * CM names: `--adr.['build-harness']...` - - CM script: [reproduce-mlperf-inference-intel](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-intel) + - CM script: [app-mlperf-inference-intel](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-intel) * get,mlcommons,inference,src * CM names: `--adr.['inference-src']...` - CM script: [get-mlperf-inference-src](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-mlperf-inference-src) @@ -473,7 +473,7 @@ ___
Click here to expand this section. - 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-intel/_cm.yaml)*** + 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-intel/_cm.yaml)*** * detect,os - CM script: [detect-os](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/detect-os) * detect,cpu @@ -506,14 +506,14 @@ ___ * get,mlperf,inference,results,_ctuning * CM names: `--adr.inference-results...` - CM script: [get-mlperf-inference-results](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-mlperf-inference-results) - 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-intel/customize.py)*** - 1. Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-intel/_cm.yaml) + 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-intel/customize.py)*** + 1. Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-intel/_cm.yaml) 1. ***Run native script if exists*** - * [run_bert_harness.sh](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-intel/run_bert_harness.sh) - * [run_gptj_harness.sh](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-intel/run_gptj_harness.sh) - 1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-intel/_cm.yaml) - 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-intel/customize.py)*** - 1. ***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-intel/_cm.yaml)*** + * [run_bert_harness.sh](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-intel/run_bert_harness.sh) + * [run_gptj_harness.sh](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-intel/run_gptj_harness.sh) + 1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-intel/_cm.yaml) + 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-intel/customize.py)*** + 1. ***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-intel/_cm.yaml)*** * benchmark-mlperf * `if (CM_LOCAL_MLPERF_INFERENCE_INTEL_RUN_MODE == run_harness) AND (CM_MLPERF_SKIP_RUN not in ['yes', True])` * CM names: `--adr.['runner', 'mlperf-runner']...` diff --git a/cm-mlops/script/app-mlperf-inference-mlcommons-cpp/README.md b/cm-mlops/script/app-mlperf-inference-mlcommons-cpp/README.md index 9694a2d7bb..e53d8c5b4c 100644 --- a/cm-mlops/script/app-mlperf-inference-mlcommons-cpp/README.md +++ b/cm-mlops/script/app-mlperf-inference-mlcommons-cpp/README.md @@ -33,7 +33,7 @@ See extra [notes](README-extra.md) from the authors and contributors. 
* Category: *Modular MLPerf inference benchmark pipeline.* * CM GitHub repository: *[mlcommons@ck](https://github.com/mlcommons/ck/tree/master/cm-mlops)* -* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-cpp)* +* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-cpp)* * CM meta description for this script: *[_cm.yaml](_cm.yaml)* * CM "database" tags to find this script: *app,mlcommons,mlperf,inference,cpp* * Output cached? *False* @@ -266,7 +266,7 @@ ___
Click here to expand this section. - 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-cpp/_cm.yaml)*** + 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-cpp/_cm.yaml)*** * detect,os - CM script: [detect-os](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/detect-os) * detect,cpu @@ -305,12 +305,12 @@ ___ * generate,user-conf,mlperf,inference * CM names: `--adr.['user-conf-generator']...` - CM script: [generate-mlperf-inference-user-conf](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/generate-mlperf-inference-user-conf) - 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-cpp/customize.py)*** - 1. Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-cpp/_cm.yaml) + 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-cpp/customize.py)*** + 1. Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-cpp/_cm.yaml) 1. ***Run native script if exists*** - 1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-cpp/_cm.yaml) - 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-cpp/customize.py)*** - 1. ***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-cpp/_cm.yaml)*** + 1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-cpp/_cm.yaml) + 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-cpp/customize.py)*** + 1. ***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-cpp/_cm.yaml)*** * compile,cpp-program * `if (CM_MLPERF_SKIP_RUN != yes)` * CM names: `--adr.['compile-program']...` diff --git a/cm-mlops/script/app-mlperf-inference-mlcommons-python/README.md b/cm-mlops/script/app-mlperf-inference-mlcommons-python/README.md index 83120573ea..9e14eec57d 100644 --- a/cm-mlops/script/app-mlperf-inference-mlcommons-python/README.md +++ b/cm-mlops/script/app-mlperf-inference-mlcommons-python/README.md @@ -41,7 +41,7 @@ See extra [notes](README-extra.md) from the authors and contributors. * Category: *Modular MLPerf inference benchmark pipeline.* * CM GitHub repository: *[mlcommons@ck](https://github.com/mlcommons/ck/tree/master/cm-mlops)* -* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-reference)* +* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-python)* * CM meta description for this script: *[_cm.yaml](_cm.yaml)* * CM "database" tags to find this script: *app,vision,language,mlcommons,mlperf,inference,reference,ref* * Output cached? *False* @@ -653,7 +653,7 @@ ___
Click here to expand this section. - 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-reference/_cm.yaml)*** + 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-python/_cm.yaml)*** * detect,os - CM script: [detect-os](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/detect-os) * detect,cpu @@ -831,20 +831,20 @@ ___ - CM script: [get-mlperf-inference-src](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-mlperf-inference-src) * get,generic-python-lib,_package.psutil - CM script: [get-generic-python-lib](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-generic-python-lib) - 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-reference/customize.py)*** - 1. ***Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-reference/_cm.yaml)*** + 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-python/customize.py)*** + 1. ***Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-python/_cm.yaml)*** * remote,run,cmds * `if (CM_ASSH_RUN_COMMANDS == on)` * CM names: `--adr.['remote-run-cmds']...` - CM script: [remote-run-commands](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/remote-run-commands) 1. ***Run native script if exists*** - 1. ***Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-reference/_cm.yaml)*** + 1. ***Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-python/_cm.yaml)*** * benchmark-mlperf * `if (CM_MLPERF_SKIP_RUN != on)` * CM names: `--adr.['mlperf-runner']...` - CM script: [benchmark-program-mlperf](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/benchmark-program-mlperf) - 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-reference/customize.py)*** - 1. ***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-reference/_cm.yaml)*** + 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-python/customize.py)*** + 1. 
***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-python/_cm.yaml)*** * save,mlperf,inference,state * CM names: `--adr.['save-mlperf-inference-state']...` - CM script: [save-mlperf-inference-implementation-state](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/save-mlperf-inference-implementation-state) diff --git a/cm-mlops/script/app-mlperf-inference-nvidia/README.md b/cm-mlops/script/app-mlperf-inference-nvidia/README.md index e0227b4a37..3ef1429b68 100644 --- a/cm-mlops/script/app-mlperf-inference-nvidia/README.md +++ b/cm-mlops/script/app-mlperf-inference-nvidia/README.md @@ -168,7 +168,7 @@ Assuming all the downloaded files are to the user home directory please do the f * Category: *Reproduce MLPerf benchmarks.* * CM GitHub repository: *[mlcommons@ck](https://github.com/mlcommons/ck/tree/master/cm-mlops)* -* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia)* +* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia)* * CM meta description for this script: *[_cm.yaml](_cm.yaml)* * CM "database" tags to find this script: *reproduce,mlcommons,mlperf,inference,harness,nvidia-harness,nvidia* * Output cached? *False* @@ -1004,13 +1004,13 @@ ___ - CM script: [build-mlperf-inference-server-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/build-mlperf-inference-server-nvidia) * reproduce,mlperf,inference,nvidia,harness,_preprocess_data * `if (CM_MODEL not in ['dlrm-v2-99', 'dlrm-v2-99.9'])` - - CM script: [reproduce-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia) + - CM script: [app-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia) * reproduce,mlperf,inference,nvidia,harness,_download_model * `if (CM_MODEL not in ['retinanet_old', 'resnet50', 'bert-99', 'bert-99.9', 'dlrm-v2-99', 'dlrm-v2-99.9'])` - - CM script: [reproduce-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia) + - CM script: [app-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia) * reproduce,mlperf,inference,nvidia,harness,_calibrate * `if (CM_MODEL == retinanet)` - - CM script: [reproduce-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia) + - CM script: [app-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia) * `_calibrate` - Environment variables: - *MLPERF_NVIDIA_RUN_COMMAND*: `calibrate` @@ -1019,7 +1019,7 @@ ___ 1. 
***Read "deps" on other CM scripts*** * reproduce,mlperf,inference,nvidia,harness,_download_model * `if (CM_MODEL not in ['retinanet_old', 'resnet50', 'bert-99', 'bert-99.9', 'dlrm-v2-99', 'dlrm-v2-99.9'])` - - CM script: [reproduce-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia) + - CM script: [app-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia) * `_download_model` - Environment variables: - *MLPERF_NVIDIA_RUN_COMMAND*: `download_model` @@ -1057,13 +1057,13 @@ ___ - CM script: [build-mlperf-inference-server-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/build-mlperf-inference-server-nvidia) * reproduce,mlperf,inference,nvidia,harness,_build_engine * CM names: `--adr.['build-engine']...` - - CM script: [reproduce-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia) + - CM script: [app-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia) * reproduce,mlperf,inference,nvidia,harness,_preprocess_data * `if (CM_MODEL not in ['dlrm-v2-99', 'dlrm-v2-99.9'])` - - CM script: [reproduce-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia) + - CM script: [app-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia) * reproduce,mlperf,inference,nvidia,harness,_download_model * `if (CM_MODEL not in ['retinanet', 'resnet50', 'bert-99', 'bert-99.9', 'dlrm-v2-99', 'dlrm-v2-99.9'])` - - CM script: [reproduce-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia) + - CM script: [app-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia)
@@ -1173,7 +1173,7 @@ ___
Click here to expand this section. - 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia/_cm.yaml)*** + 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia/_cm.yaml)*** * detect,os - CM script: [detect-os](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/detect-os) * detect,cpu @@ -1246,17 +1246,17 @@ ___ - CM script: [generate-mlperf-inference-user-conf](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/generate-mlperf-inference-user-conf) * get,generic-python-lib,_package.nvmitten,_path./opt/nvmitten-0.1.3-cp38-cp38-linux_x86_64.whl - CM script: [get-generic-python-lib](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-generic-python-lib) - 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia/customize.py)*** - 1. ***Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia/_cm.yaml)*** + 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia/customize.py)*** + 1. ***Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia/_cm.yaml)*** * get,ml-model,gptj,_pytorch,_rclone * `if (CM_REQUIRE_GPTJ_MODEL_DOWNLOAD == yes AND CM_MLPERF_NVIDIA_HARNESS_RUN_MODE in ['download_model', 'preprocess_data'])` * CM names: `--adr.['gptj-model']...` - CM script: [get-ml-model-gptj](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-ml-model-gptj) 1. ***Run native script if exists*** - * [run.sh](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia/run.sh) - 1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia/_cm.yaml) - 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia/customize.py)*** - 1. ***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia/_cm.yaml)*** + * [run.sh](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia/run.sh) + 1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia/_cm.yaml) + 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia/customize.py)*** + 1. 
***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia/_cm.yaml)*** * benchmark-mlperf * `if (CM_CALL_MLPERF_RUNNER == True) AND (CM_MLPERF_SKIP_RUN not in ['yes', True])` * CM names: `--adr.['runner', 'mlperf-runner']...` diff --git a/cm-mlops/script/app-mlperf-inference-qualcomm/README.md b/cm-mlops/script/app-mlperf-inference-qualcomm/README.md index a4d08748c1..feea48f5e3 100644 --- a/cm-mlops/script/app-mlperf-inference-qualcomm/README.md +++ b/cm-mlops/script/app-mlperf-inference-qualcomm/README.md @@ -30,7 +30,7 @@ * Category: *Modular MLPerf benchmarks.* * CM GitHub repository: *[mlcommons@ck](https://github.com/mlcommons/ck/tree/master/cm-mlops)* -* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-qualcomm)* +* GitHub directory for this script: *[GitHub](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-qualcomm)* * CM meta description for this script: *[_cm.yaml](_cm.yaml)* * CM "database" tags to find this script: *reproduce,mlcommons,mlperf,inference,harness,qualcomm-harness,qualcomm,kilt-harness,kilt* * Output cached? *False* @@ -645,7 +645,7 @@ ___
Click here to expand this section. - 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-qualcomm/_cm.yaml)*** + 1. ***Read "deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-qualcomm/_cm.yaml)*** * detect,os - CM script: [detect-os](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/detect-os) * detect,cpu @@ -701,13 +701,13 @@ ___ * get,lib,onnxruntime,lang-cpp,_cuda * `if (CM_MLPERF_BACKEND == onnxruntime AND CM_MLPERF_DEVICE == gpu)` - CM script: [get-onnxruntime-prebuilt](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/get-onnxruntime-prebuilt) - 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-qualcomm/customize.py)*** - 1. Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-qualcomm/_cm.yaml) + 1. ***Run "preprocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-qualcomm/customize.py)*** + 1. Read "prehook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-qualcomm/_cm.yaml) 1. ***Run native script if exists*** - * [run.sh](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-qualcomm/run.sh) - 1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-qualcomm/_cm.yaml) - 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-qualcomm/customize.py)*** - 1. ***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-qualcomm/_cm.yaml)*** + * [run.sh](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-qualcomm/run.sh) + 1. Read "posthook_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-qualcomm/_cm.yaml) + 1. ***Run "postrocess" function from [customize.py](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-qualcomm/customize.py)*** + 1. ***Read "post_deps" on other CM scripts from [meta](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-qualcomm/_cm.yaml)*** * compile,cpp-program * `if (CM_MLPERF_SKIP_RUN != True)` * CM names: `--adr.['compile-program']...` diff --git a/cm-mlops/script/app-mlperf-inference/README.md b/cm-mlops/script/app-mlperf-inference/README.md index 715029e849..17b15b0d3e 100644 --- a/cm-mlops/script/app-mlperf-inference/README.md +++ b/cm-mlops/script/app-mlperf-inference/README.md @@ -132,10 +132,10 @@ ___ Click here to expand this section. 
* `_cpp` - - Aliases: `_mil` + - Aliases: `_mil,_mlcommons-cpp` - Environment variables: - *CM_MLPERF_CPP*: `yes` - - *CM_MLPERF_IMPLEMENTATION*: `cpp` + - *CM_MLPERF_IMPLEMENTATION*: `mlcommons_cpp` - *CM_IMAGENET_ACCURACY_DTYPE*: `float32` - *CM_OPENIMAGES_ACCURACY_DTYPE*: `float32` - Workflow: @@ -143,27 +143,31 @@ ___ * app,mlperf,cpp,inference * `if (CM_SKIP_RUN != True)` * CM names: `--adr.['cpp-mlperf-inference', 'mlperf-inference-implementation']...` - - CM script: [app-mlperf-inference-cpp](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-cpp) + - CM script: [app-mlperf-inference-mlcommons-cpp](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-cpp) * `_intel-original` - Aliases: `_intel` + - Environment variables: + - *CM_MLPERF_IMPLEMENTATION*: `intel` - Workflow: 1. ***Read "prehook_deps" on other CM scripts*** * reproduce,mlperf,inference,intel * `if (CM_SKIP_RUN != True)` * CM names: `--adr.['intel', 'intel-harness', 'mlperf-inference-implementation']...` - - CM script: [reproduce-mlperf-inference-intel](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-intel) + - CM script: [app-mlperf-inference-intel](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-intel) * `_kilt` - Aliases: `_qualcomm` + - Environment variables: + - *CM_MLPERF_IMPLEMENTATION*: `qualcomm` - Workflow: 1. ***Read "prehook_deps" on other CM scripts*** * reproduce,mlperf,inference,kilt * `if (CM_SKIP_RUN != True)` * CM names: `--adr.['kilt', 'kilt-harness', 'mlperf-inference-implementation']...` - - CM script: [reproduce-mlperf-inference-qualcomm](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-qualcomm) + - CM script: [app-mlperf-inference-qualcomm](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-qualcomm) * `_nvidia-original` - Aliases: `_nvidia` - Environment variables: - - *CM_MLPERF_IMPLEMENTATION*: `nvidia-original` + - *CM_MLPERF_IMPLEMENTATION*: `nvidia` - *CM_SQUAD_ACCURACY_DTYPE*: `float16` - *CM_IMAGENET_ACCURACY_DTYPE*: `int32` - *CM_CNNDM_ACCURACY_DTYPE*: `int32` @@ -177,12 +181,12 @@ ___ * reproduce,mlperf,nvidia,inference * `if (CM_SKIP_RUN != True)` * CM names: `--adr.['nvidia-original-mlperf-inference', 'nvidia-harness', 'mlperf-inference-implementation']...` - - CM script: [reproduce-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/reproduce-mlperf-inference-nvidia) + - CM script: [app-mlperf-inference-nvidia](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-nvidia) * **`_reference`** (default) - - Aliases: `_python` + - Aliases: `_mlcommons-python,_python` - Environment variables: - *CM_MLPERF_PYTHON*: `yes` - - *CM_MLPERF_IMPLEMENTATION*: `reference` + - *CM_MLPERF_IMPLEMENTATION*: `mlcommons_python` - *CM_SQUAD_ACCURACY_DTYPE*: `float32` - *CM_IMAGENET_ACCURACY_DTYPE*: `float32` - *CM_OPENIMAGES_ACCURACY_DTYPE*: `float32` @@ -192,19 +196,20 @@ ___ * app,mlperf,reference,inference * `if (CM_SKIP_RUN != True)` * CM names: `--adr.['python-reference-mlperf-inference', 'mlperf-inference-implementation']...` - - CM script: [app-mlperf-inference-reference](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-reference) + - CM script: [app-mlperf-inference-mlcommons-python](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-mlcommons-python) * `_tflite-cpp` + - 
Aliases: `_ctuning-cpp-tflite` - Environment variables: - *CM_MLPERF_TFLITE_CPP*: `yes` - *CM_MLPERF_CPP*: `yes` - - *CM_MLPERF_IMPLEMENTATION*: `tflite-cpp` + - *CM_MLPERF_IMPLEMENTATION*: `ctuning_cpp_tflite` - *CM_IMAGENET_ACCURACY_DTYPE*: `float32` - Workflow: 1. ***Read "prehook_deps" on other CM scripts*** * app,mlperf,tflite-cpp,inference * `if (CM_SKIP_RUN != True)` * CM names: `--adr.['tflite-cpp-mlperf-inference', 'mlperf-inference-implementation']...` - - CM script: [app-mlperf-inference-tflite-cpp](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-tflite-cpp) + - CM script: [app-mlperf-inference-ctuning-cpp-tflite](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/app-mlperf-inference-ctuning-cpp-tflite)
@@ -388,7 +393,7 @@ ___ - Workflow: 1. ***Read "posthook_deps" on other CM scripts*** * run,accuracy,mlperf,_librispeech - * `if (CM_MLPERF_LOADGEN_MODE in ['accuracy', 'all'] AND CM_MLPERF_ACCURACY_RESULTS_DIR == on) AND (CM_MLPERF_IMPLEMENTATION != nvidia-original)` + * `if (CM_MLPERF_LOADGEN_MODE in ['accuracy', 'all'] AND CM_MLPERF_ACCURACY_RESULTS_DIR == on) AND (CM_MLPERF_IMPLEMENTATION != nvidia)` * CM names: `--adr.['mlperf-accuracy-script', 'librispeech-accuracy-script']...` - CM script: [process-mlperf-accuracy](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/process-mlperf-accuracy) * `_sdxl` @@ -397,7 +402,7 @@ ___ - Workflow: 1. ***Read "posthook_deps" on other CM scripts*** * run,accuracy,mlperf,_coco2014 - * `if (CM_MLPERF_LOADGEN_MODE in ['accuracy', 'all'] AND CM_MLPERF_ACCURACY_RESULTS_DIR == on) AND (CM_MLPERF_IMPLEMENTATION != nvidia-original)` + * `if (CM_MLPERF_LOADGEN_MODE in ['accuracy', 'all'] AND CM_MLPERF_ACCURACY_RESULTS_DIR == on) AND (CM_MLPERF_IMPLEMENTATION != nvidia)` * CM names: `--adr.['mlperf-accuracy-script', 'coco2014-accuracy-script']...` - CM script: [process-mlperf-accuracy](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/process-mlperf-accuracy) @@ -494,7 +499,7 @@ ___ - Workflow: 1. ***Read "posthook_deps" on other CM scripts*** * run,accuracy,mlperf,_kits19,_int8 - * `if (CM_MLPERF_LOADGEN_MODE in ['accuracy', 'all'] AND CM_MLPERF_ACCURACY_RESULTS_DIR == on) AND (CM_MLPERF_IMPLEMENTATION != nvidia-original)` + * `if (CM_MLPERF_LOADGEN_MODE in ['accuracy', 'all'] AND CM_MLPERF_ACCURACY_RESULTS_DIR == on) AND (CM_MLPERF_IMPLEMENTATION != nvidia)` * CM names: `--adr.['mlperf-accuracy-script', '3d-unet-accuracy-script']...` - CM script: [process-mlperf-accuracy](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/process-mlperf-accuracy) * `_bert_` @@ -525,7 +530,7 @@ ___ - Workflow: 1. ***Read "posthook_deps" on other CM scripts*** * run,accuracy,mlperf,_cnndm - * `if (CM_MLPERF_LOADGEN_MODE in ['accuracy', 'all'] AND CM_MLPERF_ACCURACY_RESULTS_DIR == on) AND (CM_MLPERF_IMPLEMENTATION != intel-original)` + * `if (CM_MLPERF_LOADGEN_MODE in ['accuracy', 'all'] AND CM_MLPERF_ACCURACY_RESULTS_DIR == on) AND (CM_MLPERF_IMPLEMENTATION != intel)` * CM names: `--adr.['cnndm-accuracy-script', 'mlperf-accuracy-script']...` - CM script: [process-mlperf-accuracy](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/process-mlperf-accuracy) * `_llama2-70b_` @@ -534,7 +539,7 @@ ___ - Workflow: 1. 
***Read "posthook_deps" on other CM scripts*** * run,accuracy,mlperf,_open-orca,_int32 - * `if (CM_MLPERF_LOADGEN_MODE in ['accuracy', 'all'] AND CM_MLPERF_ACCURACY_RESULTS_DIR == on) AND (CM_MLPERF_IMPLEMENTATION != nvidia-original)` + * `if (CM_MLPERF_LOADGEN_MODE in ['accuracy', 'all'] AND CM_MLPERF_ACCURACY_RESULTS_DIR == on) AND (CM_MLPERF_IMPLEMENTATION != nvidia)` * CM names: `--adr.['mlperf-accuracy-script', 'open-orca-accuracy-script']...` - CM script: [process-mlperf-accuracy](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/process-mlperf-accuracy) * `_reference,bert_` diff --git a/cm-mlops/script/run-mlperf-inference-app/README.md b/cm-mlops/script/run-mlperf-inference-app/README.md index 98b2d59e43..44dec18233 100644 --- a/cm-mlops/script/run-mlperf-inference-app/README.md +++ b/cm-mlops/script/run-mlperf-inference-app/README.md @@ -239,7 +239,7 @@ ___ * --**device** MLPerf device {cpu,cuda,rocm,qaic} (*cpu*) * --**model** MLPerf model {resnet50,retinanet,bert-99,bert-99.9,3d-unet-99,3d-unet-99.9,rnnt,dlrm-v2-99,dlrm-v2-99.9,gptj-99,gptj-99.9,sdxl,llama2-70b-99,llama2-70b-99.9,mobilenet,efficientnet} (*resnet50*) * --**precision** MLPerf model precision {float32,float16,bfloat16,int8,uint8} -* --**implementation** MLPerf implementation {reference,mil,nvidia-original,intel-original,qualcomm,tflite-cpp} (*reference*) +* --**implementation** MLPerf implementation {mlcommons-python,mlcommons-cpp,nvidia,intel,qualcomm,ctuning-cpp-tflite} (*reference*) * --**backend** MLPerf framework (backend) {onnxruntime,tf,pytorch,deepsparse,tensorrt,glow,tvm-onnx} (*onnxruntime*) * --**scenario** MLPerf scenario {Offline,Server,SingleStream,MultiStream} (*Offline*) * --**mode** MLPerf benchmark mode {,accuracy,performance}
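
For reference, the renamed implementation values above can be exercised through the CM Python API along these lines (a sketch: the exact tag string for `run-mlperf-inference-app` is not shown in this patch and is assumed, and the flag values are illustrative):

```python
import cmind

# Hypothetical invocation of the MLPerf inference wrapper using the new
# implementation naming ("mlcommons-python" replaces "reference"):
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'run,mlperf,inference',  # assumed tags for run-mlperf-inference-app
                  'implementation': 'mlcommons-python',
                  'backend': 'onnxruntime',
                  'device': 'cpu',
                  'model': 'resnet50',
                  'scenario': 'Offline',
                  'out': 'con'})

if r['return'] > 0:
    print(r['error'])
```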