# BERT Large BFloat16 inference

## Description

This document has instructions for running BERT Large BFloat16 inference using Intel-optimized TensorFlow.

## Datasets

### BERT Large Data

Download and unzip the BERT Large uncased (whole word masking) model from the [Google BERT repo](https://github.com/google-research/bert). Then download the Stanford Question Answering Dataset (SQuAD) dev set file `dev-v1.1.json` into the `wwm_uncased_L-24_H-1024_A-16` directory that was just unzipped.

```
wget https://storage.googleapis.com/bert_models/2019_05_30/wwm_uncased_L-24_H-1024_A-16.zip
unzip wwm_uncased_L-24_H-1024_A-16.zip

wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json -P wwm_uncased_L-24_H-1024_A-16
```

Set the `DATASET_DIR` environment variable to point to that directory when running BERT Large inference using the SQuAD data.
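
For example, a minimal sketch of setting `DATASET_DIR` and sanity-checking the download (the file names `vocab.txt` and `bert_config.json` are assumed to match the standard contents of the Google BERT archive):

```
export DATASET_DIR=$(pwd)/wwm_uncased_L-24_H-1024_A-16

# Check that the vocab, config, and SQuAD dev files are in place;
# file names assume the standard Google BERT archive layout.
for f in vocab.txt bert_config.json dev-v1.1.json; do
  [ -f "$DATASET_DIR/$f" ] || echo "missing: $DATASET_DIR/$f"
done
```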

## Quick Start Scripts

| Script name | Description |
|-------------|-------------|
| `bfloat16_benchmark.sh` | This script runs BERT Large BFloat16 inference. |
| `bfloat16_profile.sh` | This script runs BFloat16 inference in profile mode. |
| `bfloat16_accuracy.sh` | This script runs BERT Large BFloat16 inference in accuracy mode. |

## Run the model

Set up your environment using the instructions below, depending on whether you are using AI Kit:

To run using AI Kit you will need:

- numactl
- unzip
- wget
- The `tensorflow` conda environment activated (`conda activate tensorflow`)
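
As a sketch, the AI Kit prerequisites could be installed and the environment activated as follows (the `apt-get` package names are an assumption for Debian/Ubuntu hosts; adjust for your package manager):

```
# Assumes a Debian/Ubuntu host; adjust for your package manager.
sudo apt-get update && sudo apt-get install -y numactl unzip wget

# Activate the TensorFlow conda environment provided by AI Kit.
conda activate tensorflow
```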

To run without AI Kit you will need:

- Python 3
- intel-tensorflow>=2.5.0
- git
- numactl
- unzip
- wget
- A clone of the Model Zoo repo (`git clone https://github.com/IntelAI/models.git`)
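
A comparable sketch for the non-AI Kit path, using a Python virtual environment (the environment name `venv` and the `apt-get` package names are illustrative assumptions):

```
# Assumes a Debian/Ubuntu host; adjust for your package manager.
sudo apt-get update && sudo apt-get install -y git numactl unzip wget

# Create an isolated Python 3 environment and install Intel-optimized TensorFlow.
python3 -m venv venv
source venv/bin/activate
pip install "intel-tensorflow>=2.5.0"

# Clone the Model Zoo repo.
git clone https://github.com/IntelAI/models.git
```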

After your setup is done, download and unzip the pretrained model. The path to the unzipped directory should be set as `CHECKPOINT_DIR` before running the quickstart scripts.

```
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_8/bert_large_checkpoints.zip
unzip bert_large_checkpoints.zip
export CHECKPOINT_DIR=$(pwd)/bert_large_checkpoints
```
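
As a quick sanity check before moving on, the sketch below verifies that `CHECKPOINT_DIR` is set and non-empty (it does not validate the checkpoint contents themselves):

```
if [ -z "$CHECKPOINT_DIR" ] || [ -z "$(ls -A "$CHECKPOINT_DIR" 2>/dev/null)" ]; then
  echo "CHECKPOINT_DIR is unset or empty" >&2
else
  ls -lh "$CHECKPOINT_DIR"
fi
```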

Next, set environment variables with paths to the dataset, checkpoint files, and an output directory, then run a quickstart script. See the list of quickstart scripts for details on the different options.

The snippet below shows how to run a quickstart script:

```
# cd to your model zoo directory
cd models

export DATASET_DIR=<path to the dataset being used>
export CHECKPOINT_DIR=<path to the unzipped checkpoints>
export OUTPUT_DIR=<directory where log files will be saved>

# Run a script for your desired usage
./quickstart/language_modeling/tensorflow/bert_large/inference/cpu/bfloat16/<script name>.sh
```
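
Putting the pieces together, a hypothetical end-to-end run of the benchmark script from the table above might look like this (the paths are illustrative placeholders, not required locations):

```
cd models

# Illustrative paths; substitute your own locations.
export DATASET_DIR=$HOME/wwm_uncased_L-24_H-1024_A-16
export CHECKPOINT_DIR=$HOME/bert_large_checkpoints
export OUTPUT_DIR=$HOME/bert_output
mkdir -p "$OUTPUT_DIR"

./quickstart/language_modeling/tensorflow/bert_large/inference/cpu/bfloat16/bfloat16_benchmark.sh
```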

## Additional Resources