CNN Quantization research.
Requirements:
keras==2.1.0
tensorflow==1.8.0
opencv==3.2.0
pycocotools, BeautifulSoup4, lxml, tqdm, h5py
Quantization scheme: map the maximum absolute value |max| of a tensor to 127, following TensorRT's symmetric linear quantization, so that

Tensor values ≈ FP32 scale factor × int8 array

This is applied, for example, to each convolution layer. Quantization details can be found in layer.py.
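A minimal NumPy sketch of this max-abs scheme (illustrative only, not the repository's exact code; the real per-layer logic lives in layer.py):

```python
import numpy as np

def quantize_int8(tensor):
    """Symmetric linear quantization: map |max| to 127.
    Returns an int8 array and the FP32 scale factor such that
    tensor ~= scale * int8_array."""
    scale = np.abs(tensor).max() / 127.0
    q = np.clip(np.round(tensor / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Round-trip a weight tensor and check the worst-case error,
# which is bounded by scale / 2.
w = np.random.randn(3, 3, 64, 64).astype(np.float32)
w_q, scale = quantize_int8(w)
print(np.abs(dequantize(w_q, scale) - w).max())
```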
In this repository, all weight files were trained with Keras and stored in HDF5 format. I parse these weight files with h5py and then import them into TensorFlow models.
For example, pre-trained weights can be imported into a convolution layer (the first convolution layer of VGG16) built with TensorFlow as follows.
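A hedged sketch of that import (the file name is illustrative, and dataset key names vary across Keras versions; list the keys in your file with `f.visit(print)` if these do not match):

```python
import h5py
import tensorflow as tf

# Open the Keras-trained weight file (file name is illustrative).
f = h5py.File('vgg16_weights_tf_dim_ordering_tf_kernels.h5', 'r')

# Keys below follow one common layout of the Keras VGG16 weight file;
# confirm them against your own file with f.visit(print).
kernel = f['block1_conv1']['block1_conv1_W_1:0'][...]  # (3, 3, 3, 64)
bias = f['block1_conv1']['block1_conv1_b_1:0'][...]    # (64,)

# First convolution layer of VGG16 rebuilt in TensorFlow.
x = tf.placeholder(tf.float32, [None, 224, 224, 3])
conv = tf.nn.conv2d(x, tf.constant(kernel), strides=[1, 1, 1, 1], padding='SAME')
out = tf.nn.relu(tf.nn.bias_add(conv, tf.constant(bias)))
```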
I have built VGG16, ResNet50, InceptionV3, Xception, MobileNet, and SqueezeNet; all of them were tested successfully. For details, see the models directory.
An example of testing ResNet50:

`python eval_image_classification.py --model='resnet'`
An example of testing MobileNet with width multiplier 1.0:

`python eval_image_classification.py --model='mobilenet' --alpha=1.0`
The ImageNet val data was provided by aaron-xichen; sincere thanks to aaron-xichen for sharing this preprocessed ImageNet val data.
Note: the MobileNets suffer a significant accuracy loss from quantization.
| Model | float32 Top-1 acc | float32 Top-5 acc | int8 Top-1 acc | int8 Top-5 acc | Top-1 diff | Top-5 diff |
|----------------|---------|---------|---------|---------|----------|----------|
| VGG16          | 0.70786 | 0.89794 | 0.7066  | 0.89714 | -0.00126 | -0.0008  |
| ResNet50       | 0.74366 | 0.91806 | 0.74004 | 0.91574 | -0.00362 | -0.00232 |
| InceptionV3    | 0.76518 | 0.92854 | 0.75982 | 0.92658 | -0.00536 | -0.00196 |
| Xception       | 0.77446 | 0.93618 | 0.7672  | 0.93204 | -0.00726 | -0.00414 |
| SqueezeNet     | 0.52294 | 0.76312 | 0.519   | 0.76032 | -0.00394 | -0.0028  |
| MobileNet-1.0  | 0.69856 | 0.89174 | 0.64294 | 0.85656 | -0.05562 | -0.03518 |
| MobileNet-0.75 | 0.67726 | 0.87838 | 0.6367  | 0.84952 | -0.04056 | -0.02886 |
| MobileNet-0.50 | 0.6352  | 0.85006 | 0.5723  | 0.80522 | -0.0629  | -0.04484 |
| MobileNet-0.25 | 0.5134  | 0.75546 | 0.34848 | 0.58956 | -0.16492 | -0.1659  |
Results of quantizing only the pointwise convolutions in MobileNet (see the sketch after the table)
| Model | float32 Top-1 acc | float32 Top-5 acc | int8 Top-1 acc | int8 Top-5 acc | Top-1 diff | Top-5 diff |
|----------------|---------|---------|---------|---------|----------|----------|
| MobileNet-1.0  | 0.69856 | 0.89174 | 0.65254 | 0.86164 | -0.04602 | -0.0301  |
| MobileNet-0.75 | 0.67726 | 0.87838 | 0.64654 | 0.85646 | -0.03072 | -0.02192 |
| MobileNet-0.50 | 0.6352  | 0.85006 | 0.59438 | 0.8217  | -0.04082 | -0.02836 |
| MobileNet-0.25 | 0.5134  | 0.75546 | 0.46506 | 0.71176 | -0.04834 | -0.0437  |
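Since the depthwise kernels are what lose the most accuracy under a single per-tensor scale, this selective scheme amounts to a layer filter. A hypothetical sketch (the `conv_dw`/`conv_pw` names follow Keras' MobileNet layer naming; this helper is illustrative, not the repository's actual API):

```python
def should_quantize(layer_name, kernel_shape):
    """Quantize pointwise (1x1) convolutions; keep depthwise ones in
    float32. Hypothetical helper for illustration."""
    is_depthwise = 'conv_dw' in layer_name   # Keras MobileNet naming
    is_pointwise = 'conv_pw' in layer_name or tuple(kernel_shape[:2]) == (1, 1)
    return is_pointwise and not is_depthwise
```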
First, download the VOC2007 test set, the COCO2017 val set, and the COCO2017 val annotations, then extract them and modify the dataset paths in the evaluation script.
Second, download the SSD pre-trained weights and put them in the 'weights' directory:
SSD300 VOC weights, SSD300 COCO weights, SSD512 VOC weights, SSD512 COCO weights
An example of evaluating SSD300 on the VOC2007 test set:

`python eval_object_detection.py --model='ssd300' --eval-dataset='voc2007'`
SSD results on VOC2007 test set

| Model  | float32 mAP | int8 mAP | diff   |
|--------|-------------|----------|--------|
| SSD300 | 0.782       | 0.783    | +0.001 |
| SSD512 | 0.91        | 0.909    | -0.001 |
The AP for each category can be found in this doc.
SSD and YOLOv3 results on COCO val2017

| Model      | float32 mAP | int8 mAP | diff   |
|------------|-------------|----------|--------|
| SSD300     | 0.424       | 0.423    | -0.001 |
| SSD512     | 0.481       | 0.478    | -0.003 |
| YOLOv3-320 | to do       | to do    | to do  |
| YOLOv3-416 | to do       | to do    | to do  |
| YOLOv3-608 | to do       | to do    | to do  |
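The COCO numbers are presumably produced with pycocotools, which the requirements list includes. A minimal sketch of that evaluation, assuming the detections have been dumped to a standard COCO-format results JSON (both file names below are illustrative):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations from the extracted COCO2017 val set,
# and a results file written by the evaluation script.
coco_gt = COCO('annotations/instances_val2017.json')
coco_dt = coco_gt.loadRes('ssd300_detections.json')

ev = COCOeval(coco_gt, coco_dt, iouType='bbox')
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints the AP summary table
```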
In this part, I evaluate semantic segmentation with U-Net.
The HumanParsing-Dataset is adopted for this test.
The tested models were trained by myself; training details can be found in this repo: Person-Segmentation-Keras.
For the person segmentation (binary classification) task:

`python eval_segmentation.py --model='unet' --nClasses=2`

For the human parsing (multi-class classification) task:

`python eval_segmentation.py --model='unet' --nClasses=5`
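The reported metric is mIoU, the per-class intersection-over-union averaged over classes. A minimal NumPy sketch of that computation (illustrative; the repository presumably computes its own variant inside eval_segmentation.py):

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean IoU over classes that appear in prediction or ground truth."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```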
Person segmentation
| Model | float32 mIoU | int8 mIoU | diff    |
|-------|--------------|-----------|---------|
| Unet  | 0.8920       | 0.8868    | -0.0052 |
Human parsing
| Model | Part       | float32 mIoU | int8 mIoU | diff     |
|-------|------------|--------------|-----------|----------|
| Unet  | head       | 0.66476      | 0.66409   | -0.00067 |
|       | upper body | 0.48639      | 0.48618   | -0.00021 |
|       | both hands | 0.27016      | 0.26903   | -0.00113 |
|       | lower body | 0.66536      | 0.66497   | -0.00039 |
|       | mean       | 0.52167      | 0.52107   | -0.0006  |
PointNet