
This repository accompanies our paper "Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression" (link).

It provides a method for distributing blockwise compression of models across many workers using TensorFlow and MPI.

*Overview image*
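To give a feel for the pattern, here is a minimal sketch, not the code from this repo: each MPI rank independently distills one block of a teacher network into a smaller student block, so all blocks are compressed in parallel. The toy conv blocks, the per-block widths, the MSE distillation loss, and the random stand-in activations are all assumptions for illustration.

```python
import numpy as np
import tensorflow as tf
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical per-block widths; the real blocks would come from VGG16/ResNet50.
filters = [32, 64, 128, 256]
assert size <= len(filters), "launch with at most one rank per block"

# Teacher block: two conv layers. Student block: a single narrower replacement.
teacher = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters[rank], 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(filters[rank], 3, padding="same", activation="relu"),
])
student = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters[rank], 3, padding="same", activation="relu"),
])

# Random tensors stand in for the intermediate activations feeding this block;
# the student is trained to reproduce its teacher block's outputs.
x = np.random.rand(64, 16, 16, 16).astype("float32")
y = teacher.predict(x, verbose=0)

student.compile(optimizer="adam", loss="mse")
student.fit(x, y, epochs=3, batch_size=16, verbose=0)

# Gather the distilled blocks on rank 0 to assemble the compressed model.
blocks = comm.gather(student.get_weights(), root=0)
if rank == 0:
    print(f"collected {len(blocks)} distilled block(s)")
```

Launched as, e.g., `mpirun -n 4 python distill_blocks.py`, each rank trains its block with no communication until the final gather, which is what makes the blockwise approach embarrassingly parallel.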

Many of the notebooks depend on pretrained VGG16 and ResNet50 models fine-tuned on upscaled CIFAR-10. For size reasons, the .h5 files are not tracked in this repo. After cloning, download the .h5 files from Google Drive at the following links.
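Once downloaded, loading the models in a notebook would look roughly like the sketch below; the filenames are hypothetical placeholders, not the actual names of the Drive files.

```python
import tensorflow as tf

# Hypothetical filenames; substitute the actual .h5 files from Google Drive.
vgg16 = tf.keras.models.load_model("vgg16_cifar10.h5")
resnet50 = tf.keras.models.load_model("resnet50_cifar10.h5")
vgg16.summary()
```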

Bash scripts are included in this repo to build the Docker image, start the Docker container, and launch the JupyterLab instance needed for this project.

The bash scripts need to be run with sudo permissions.