Step-by-Step

This example loads a question answering model and confirms its accuracy and speed on the SQuAD task.

Prerequisite

1. Environment

pip install neural-compressor
pip install -r requirements.txt

Note: this example is validated against specific ONNX Runtime versions; make sure the installed version matches.

2. Prepare Model

Supported model identifiers from huggingface.co:

- mrm8488/spanbert-finetuned-squadv1
- salti/bert-base-multilingual-cased-finetuned-squad
- distilbert-base-uncased-distilled-squad
- bert-large-uncased-whole-word-masking-finetuned-squad
- deepset/roberta-large-squad2

python prepare_model.py --input_model=mrm8488/spanbert-finetuned-squadv1 --output_model=spanbert-finetuned-squadv1.onnx # or other supported model identifier

3. Prepare Dataset

Download the SQuAD dev set (dev-v1.1.json) from the official SQuAD website.
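The SQuAD file is nested JSON (articles → paragraphs → question/answer pairs). A minimal sketch of walking that structure, using a tiny inline record in the SQuAD v1.1 format rather than the downloaded file (the `iter_examples` helper is illustrative, not part of this example's scripts):

```python
# Minimal SQuAD v1.1-style record, mirroring the structure of dev-v1.1.json.
squad = {
    "version": "1.1",
    "data": [
        {
            "title": "Example",
            "paragraphs": [
                {
                    "context": "Neural Compressor quantizes ONNX models.",
                    "qas": [
                        {
                            "id": "q1",
                            "question": "What does Neural Compressor quantize?",
                            "answers": [{"text": "ONNX models", "answer_start": 28}],
                        }
                    ],
                }
            ],
        }
    ],
}

def iter_examples(dataset):
    """Yield (id, question, context, answers) tuples from a SQuAD-format dict."""
    for article in dataset["data"]:
        for paragraph in article["paragraphs"]:
            for qa in paragraph["qas"]:
                yield qa["id"], qa["question"], paragraph["context"], qa["answers"]

examples = list(iter_examples(squad))
print(len(examples))  # 1
```

The real dev-v1.1.json can be loaded with `json.load(open(path))` and fed to the same helper.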

Run

1. Quantization

Dynamic quantization:

# input_model is the *.onnx model path
bash run_quant.sh --input_model=/path/to/model \
                  --output_model=/path/to/model_tune
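run_quant.sh drives dynamic quantization through Neural Compressor. Conceptually, dynamic quantization stores weights as int8 with a floating-point scale and computes activation scales at runtime. A minimal numpy sketch of the symmetric per-tensor weight step (illustrative only, not Neural Compressor's implementation):

```python
import numpy as np

def quantize_dynamic_weight(w: np.ndarray):
    """Symmetric per-tensor int8 quantization, as applied to weights
    in dynamic quantization. Returns (int8 weights, float scale)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 values and scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_dynamic_weight(w)
# Reconstruction error is bounded by half a quantization step.
err = np.abs(dequantize(q, scale) - w).max()
print(q.dtype, err)
```

Storing int8 weights shrinks the model roughly 4x versus float32 and lets int8 kernels run the matmuls, which is where the speedup measured in the next step comes from.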

2. Benchmark

# input_model is the *.onnx model path; mode is performance or accuracy
bash run_benchmark.sh --input_model=/path/to/model \
                      --batch_size=batch_size \
                      --mode=performance
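In performance mode the script reports latency and throughput; in accuracy mode it reports SQuAD metrics. A minimal timing loop in the spirit of the performance mode, with a stand-in inference function (hypothetical; the real script runs the ONNX model through ONNX Runtime):

```python
import time

def run_inference(batch):
    # Stand-in for an ONNX Runtime session.run() call.
    return [x * 2 for x in batch]

def benchmark(fn, batch, warmup=3, iters=20):
    """Return (mean latency in seconds, throughput in samples/sec)."""
    for _ in range(warmup):  # warm-up iterations are excluded from timing
        fn(batch)
    start = time.perf_counter()
    for _ in range(iters):
        fn(batch)
    elapsed = time.perf_counter() - start
    latency = elapsed / iters
    return latency, len(batch) / latency

latency, throughput = benchmark(run_inference, list(range(8)))
print(f"latency={latency:.6f}s throughput={throughput:.1f} samples/s")
```

Comparing the numbers for the original and quantized models (same batch_size) shows the speedup from dynamic quantization; accuracy mode confirms the F1/exact-match cost is acceptable.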