Official code for "Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks" by N. Garau, N. Bisagno, Z. Sambugaro, and N. Conci (accepted at CVPR 2022) [pdf]
From the main directory, run:

```bash
pipenv install
```

to install all the required dependencies.
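Pipenv creates a dedicated virtual environment for the project. You can either enter it or run single commands through it:

```bash
pipenv shell    # spawn a shell inside the virtual environment
# or run a single command without entering the shell:
pipenv run python src/main.py --flagfile config/config_CIFAR10.cfg
```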
The code comes with separate configuration files for each dataset, with multiple flags for running training, validation, and testing.
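A flagfile is simply a plain-text list of command-line flags, one per line. As a hypothetical sketch (only the flag names that appear in the commands in this README are confirmed; the values here are illustrative):

```
# config/config_CIFAR10.cfg (hypothetical excerpt)
--mode=train
--patch_size=4
--patch_dim=64
```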
As an example, to run contrastive pre-training on CIFAR-10 on a single GPU, execute:

```bash
CUDA_VISIBLE_DEVICES=0 python src/main.py --flagfile config/config_CIFAR10.cfg
```
After pre-training has finished, you can run the supervised training phase with:

```bash
CUDA_VISIBLE_DEVICES=0 python src/main.py --flagfile config/config_CIFAR10.cfg --resume_training --supervise --load_checkpoint_dir <path_to_checkpoint.ckpt>
```
To run testing or to freeze the network weights, set the `mode` flag (e.g. `--mode test` or `--mode freeze`).
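For instance, a test run might look like the following (the exact flag combination is an assumption pieced together from the commands above, not taken verbatim from the repository):

```bash
CUDA_VISIBLE_DEVICES=0 python src/main.py --flagfile config/config_CIFAR10.cfg --mode test --load_checkpoint_dir <path_to_checkpoint.ckpt>
```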
Refer to this page for additional info about each flag.
We provide pre-trained models that can be used to plot islands of agreement or to fine-tune for image classification. To fine-tune a pre-trained model, run:

```bash
CUDA_VISIBLE_DEVICES=0 python src/main.py --flagfile config/config_CIFAR10.cfg --patch_size 1 --patch_dim 128 --resume_training --supervise --load_checkpoint_dir <path_to_pretrained_model.ckpt>
```
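Note that with absl-style flag parsing (which the `--flagfile` mechanism suggests), flags passed on the command line after `--flagfile` override the corresponding values from the configuration file; this is presumably how `--patch_size 1` and `--patch_dim 128` replace the dataset defaults here.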
To enable live visualization of the islands of agreement during training/validation/testing, set the flag `--plot_islands`.
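For example, appended to the supervised training command from above (the combination is an assumption, not a command from the repository):

```bash
CUDA_VISIBLE_DEVICES=0 python src/main.py --flagfile config/config_CIFAR10.cfg --resume_training --supervise --load_checkpoint_dir <path_to_checkpoint.ckpt> --plot_islands
```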
If you find our work useful, please cite:

```bibtex
@inproceedings{garau2022interpretable,
  title={Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks},
  author={Garau, Nicola and Bisagno, Niccol{\`o} and Sambugaro, Zeno and Conci, Nicola},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={13689--13698},
  year={2022}
}
```
- Theoretical idea (GLOM) by Geoffrey Hinton
- Base GLOM network structure inspired by lucidrains' implementation
- Convolutional tokenizer inspired by isaaccorley's ConvMLP implementation
- Various implementation ideas inspired by Yannic Kilcher's GLOM explanation