
MobileNet V1 Int8 inference

Description

This document has instructions for running MobileNet V1 Int8 inference using Intel-optimized TensorFlow.

Datasets

The dataset is required only for measuring accuracy; the benchmarking scripts do not require it.

Download and preprocess the ImageNet dataset using the instructions here. After running the conversion script, you should have a directory containing the ImageNet dataset in TF records format.

Set the DATASET_DIR environment variable to point to this directory when running MobileNet V1.
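For example, assuming the conversion script wrote the records to a hypothetical directory under your home directory:

# Hypothetical path; point this at wherever your conversion script wrote the records
export DATASET_DIR=$HOME/datasets/imagenet-tfrecords
ls "$DATASET_DIR"   # should list the generated TF record files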

Quick Start Scripts

| Script name | Description |
|-------------|-------------|
| int8_online_inference.sh | Runs online inference (batch_size=1). |
| int8_batch_inference.sh | Runs batch inference (batch_size=240). |
| int8_accuracy.sh | Measures the model accuracy (batch_size=100). |
| multi_instance_batch_inference.sh | A multi-instance run that uses all the cores of each socket for each instance, with a batch size of 56. Uses synthetic data if DATASET_DIR is not set. |
| multi_instance_online_inference.sh | A multi-instance run that uses 4 cores per instance, with a batch size of 1. Uses synthetic data if DATASET_DIR is not set. |
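For instance, the multi-instance scripts can be used for benchmarking without a dataset. This sketch assumes PRETRAINED_MODEL and OUTPUT_DIR are already set as described under "Run the model" below, and that you are in your model zoo directory:

# Leaving DATASET_DIR unset makes the script fall back to synthetic data
unset DATASET_DIR
./quickstart/image_recognition/tensorflow/mobilenet_v1/inference/cpu/int8/multi_instance_batch_inference.sh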

Run the model

Set up your environment using the instructions below, depending on whether you are using AI Kit:


To run using AI Kit on Linux, you will need the following (a setup sketch follows the list):

  • numactl
  • wget
  • Activate the tensorflow conda environment
    conda activate tensorflow
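A minimal sketch of this setup, assuming a Debian/Ubuntu system (use your distro's package manager otherwise):

# Install the system prerequisites, then activate the AI Kit conda environment
sudo apt-get install -y numactl wget
conda activate tensorflow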

To run without AI Kit on Linux, you will need the following (a setup sketch follows the list):

  • Python 3
  • intel-tensorflow>=2.5.0
  • git
  • numactl
  • wget
  • A clone of the Model Zoo repo
    git clone https://github.com/IntelAI/models.git
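A minimal sketch of this setup, assuming pip inside an optional Python 3 virtual environment (numactl and wget come from your system package manager, as in the AI Kit sketch above):

# Create and activate a virtual environment (optional), then install TensorFlow
python3 -m venv mobilenet_v1_env
source mobilenet_v1_env/bin/activate
pip install "intel-tensorflow>=2.5.0"

# Clone the Model Zoo repo
git clone https://github.com/IntelAI/models.git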

To run without AI Kit on Windows, you will need:

  • The prerequisites covered in the Model Zoo's Windows setup documentation
  • A clone of the Model Zoo repo (see the git clone command above)

After finishing the setup above, download the pretrained model and set the PRETRAINED_MODEL environment variable to the path of the frozen graph. On Windows, use a browser to download the pretrained model from the link below. On Linux, run:

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_8/mobilenetv1_int8_pretrained_model.pb
export PRETRAINED_MODEL=$(pwd)/mobilenetv1_int8_pretrained_model.pb
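As an optional sanity check, confirm that the variable points at the frozen graph:

# The downloaded frozen graph should exist and be non-empty
ls -lh "$PRETRAINED_MODEL"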

Set environment variables for the path to your DATASET_DIR for ImageNet and an OUTPUT_DIR where log files will be written. Navigate to your model zoo directory and then run a quickstart script on either Linux or Windows.

Run on Linux:

# cd to your model zoo directory
cd models

export PRETRAINED_MODEL=<path to the frozen graph downloaded above>
export DATASET_DIR=<path to the ImageNet TF records>
export OUTPUT_DIR=<directory where log files will be written>

./quickstart/image_recognition/tensorflow/mobilenet_v1/inference/cpu/int8/<script name>.sh
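For example, to measure accuracy, substitute int8_accuracy.sh for <script name>:

./quickstart/image_recognition/tensorflow/mobilenet_v1/inference/cpu/int8/int8_accuracy.sh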

Run on Windows:

Using cmd.exe, run:

# cd to your model zoo directory
cd models

set PRETRAINED_MODEL=<path to the frozen graph downloaded above>
set DATASET_DIR=<path to the ImageNet TF records>
set OUTPUT_DIR=<directory where log files will be written>

bash quickstart\image_recognition\tensorflow\mobilenet_v1\inference\cpu\int8\<script name>.sh

Note: You may use cygpath to convert Windows paths to Unix paths before setting the environment variables. For example, if the dataset is located at D:\user\ImageNet on Windows, convert the path as shown:

cygpath D:\user\ImageNet
/d/user/ImageNet

Then, set the DATASET_DIR environment variable: set DATASET_DIR=/d/user/ImageNet.
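If you prefer to stay in a bash shell (the quickstart scripts are run through bash in any case), the conversion and the assignment can be combined; the path below is illustrative:

# Convert the Windows path and export it in one step
export DATASET_DIR=$(cygpath 'D:\user\ImageNet')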

Additional Resources