Implement the Guided-ReLU visualization used in the paper "Striving for Simplicity: The All Convolutional Net",
and the class activation mapping (CAM) visualization proposed in the paper "Learning Deep Features for Discriminative Localization".
saliency-maps.py
Takes an image and produces its saliency map by running ResNet-50 and backpropagating its maximum
activation back to the input image space.
Similar techniques can be used to visualize the concept learned by each filter in the network.
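The core idea is to override the gradient of every ReLU during backpropagation so that only positive gradients flow through positive activations (guided backpropagation), then take the gradient of the maximum logit with respect to the input. Below is a minimal TF1-style sketch of that idea; the `resnet_v1_50` model-building call is a placeholder, not the script's actual code:

```python
import tensorflow as tf

# Guided ReLU: during backprop, pass a gradient only where both the
# incoming gradient and the forward activation are positive.
@tf.RegisterGradient("GuidedReLU")
def _guided_relu_grad(op, grad):
    return tf.where(tf.logical_and(grad > 0., op.outputs[0] > 0.),
                    grad, tf.zeros_like(grad))

def saliency_map(logits, image):
    # Gradient of the maximum logit w.r.t. the input image.
    max_logit = tf.reduce_max(logits, axis=1)
    return tf.gradients(max_logit, image)[0]

# Build the model under the override so every ReLU uses the guided gradient:
#   with tf.get_default_graph().gradient_override_map({'Relu': 'GuidedReLU'}):
#       logits = resnet_v1_50(image)   # placeholder for the actual model
#   saliency = saliency_map(logits, image)
```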
Usage:
wget http://download.tensorflow.org/models/resnet_v1_50_2016_08_28.tar.gz
tar -xzvf resnet_v1_50_2016_08_28.tar.gz
./saliency-maps.py cat.jpg
Left to right:
- the original cat image
- the magnitude of the saliency map
- the magnitude blended with the original image
- positively correlated pixels (original color kept)
- negatively correlated pixels (original color kept)
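A plausible way to compose these panels with NumPy and OpenCV (a sketch only; `make_panels` is a hypothetical helper and the script's actual rendering may differ):

```python
import cv2
import numpy as np

def make_panels(image, saliency):
    # image: HxWx3 uint8 (BGR); saliency: HxWx3 float gradient map.
    mag = np.abs(saliency).max(axis=-1)                        # per-pixel magnitude
    mag = (mag / (mag.max() + 1e-8) * 255).astype(np.uint8)
    mag_bgr = cv2.cvtColor(mag, cv2.COLOR_GRAY2BGR)
    blended = cv2.addWeighted(image, 0.5, mag_bgr, 0.5, 0)
    # Keep original colors only where the saliency is positive / negative.
    pos = image * (saliency.max(axis=-1, keepdims=True) > 0)
    neg = image * (saliency.min(axis=-1, keepdims=True) < 0)
    return np.concatenate(
        [image, mag_bgr, blended, pos.astype(np.uint8), neg.astype(np.uint8)],
        axis=1)
```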
CAM-resnet.py
Fine-tunes a variant of ResNet to use 2x larger last-layer feature maps, then produces CAM visualizations.
Usage:
- Fine-tune or retrain the ResNet:
./CAM-resnet.py --data /path/to/imagenet [--load ImageNet-ResNet18.npy] [--gpu 0,1,2,3]
The pretrained and fine-tuned ResNet models can be downloaded here and here.
- Generate CAM on ImageNet validation set:
./CAM-resnet.py --data /path/to/imagenet --load ImageNet-ResNet18-2xGAP.npy --cam
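For reference, the CAM itself is just the last conv layer's feature maps weighted by the final linear layer's weights for the target class, upsampled to image size. A minimal NumPy sketch (`compute_cam` is a hypothetical helper; the variable shapes are assumptions):

```python
import cv2
import numpy as np

def compute_cam(conv_maps, fc_weights, class_id, image_shape):
    # conv_maps: HxWxC output of the last conv layer (the input to GAP).
    # fc_weights: CxNUM_CLASSES weights of the final linear layer.
    w = fc_weights[:, class_id]                                # (C,)
    cam = conv_maps.reshape(-1, conv_maps.shape[-1]).dot(w)    # (H*W,)
    cam = np.maximum(cam.reshape(conv_maps.shape[:2]), 0)      # keep positive evidence
    cam = cam / (cam.max() + 1e-8)                             # normalize to [0, 1]
    cam = cv2.resize(cam.astype(np.float32),
                     (image_shape[1], image_shape[0]))         # upsample to image size
    return (cam * 255).astype(np.uint8)

# Typical overlay on the original image:
#   heat = cv2.applyColorMap(compute_cam(fmaps, W, pred, img.shape), cv2.COLORMAP_JET)
#   viz = cv2.addWeighted(img, 0.5, heat, 0.5, 0)
```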