This repository contains the official PyTorch implementation of the training & evaluation code for LACFormer.
- Create a virtual environment in the terminal:

```
conda create -n LACFormer python=3.8.16
```

- Install CUDA 11.3 and PyTorch 1.8.1:

```
conda install pytorch==1.8.1 torchvision==0.9.1 torchaudio==0.8.1 cudatoolkit=11.3 -c pytorch -c conda-forge
```

- Install the other requirements:

```
pip install -r requirements.txt
```
Downloading necessary data:

For the experiments in our paper:

- Download the testing dataset from this download link (Google Drive), move it into `Dataset/`, and extract the zip file.
- Download the training dataset from this download link (Google Drive), move it into `Dataset/`, and extract the zip file.
- Download MiT's pretrained weights on ImageNet-1K and put them in a `pretrained/` folder.
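Before launching a run, it can help to confirm the folders above are in place. The following is a hypothetical convenience helper, not part of the repository; only the folder names `Dataset/` and `pretrained/` come from this README:

```python
import os
import tempfile

def check_layout(root):
    """Return the expected data folders (relative to root) that are missing."""
    expected = ["Dataset", "pretrained"]  # folder names from this README
    return [d for d in expected if not os.path.isdir(os.path.join(root, d))]

# Demonstration on a fresh temporary directory:
root = tempfile.mkdtemp()
print(check_layout(root))   # -> ['Dataset', 'pretrained'] (nothing created yet)
os.mkdir(os.path.join(root, "Dataset"))
os.mkdir(os.path.join(root, "pretrained"))
print(check_layout(root))   # -> []
```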
Configure the hyper-parameters in `mcode/config.py` and run `train.py` for training:

```
python train.py
```
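The actual hyper-parameters are defined in `mcode/config.py`. As a sketch only, a training config for a segmentation model of this kind typically groups values like the ones below; all names and defaults here are illustrative assumptions, not the repository's real settings:

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    # Illustrative defaults only -- the real values live in mcode/config.py
    epochs: int = 50
    batch_size: int = 8
    init_lr: float = 1e-4
    image_size: int = 352
    train_path: str = "Dataset/TrainDataset"   # assumed layout
    pretrained_dir: str = "pretrained/"

cfg = TrainConfig()
print(cfg.epochs, cfg.init_lr)  # -> 50 0.0001
```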
Here is an example in Google Colab.

After training, evaluation is run automatically.

The checkpoint for LACFormer-L can be downloaded from here.
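Polyp-segmentation evaluations of this kind are commonly reported as mean Dice/IoU over the test sets. As a minimal sketch of the Dice coefficient on flat binary masks (the function name and epsilon smoothing are assumptions, not LACFormer's exact evaluation code):

```python
def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two flat binary masks (lists of 0/1)."""
    inter = sum(p * t for p, t in zip(pred, target))  # overlapping foreground
    total = sum(pred) + sum(target)                   # foreground in each mask
    return (2 * inter + eps) / (total + eps)          # eps avoids 0/0 on empty masks

pred   = [1, 1, 0, 0, 1]
target = [1, 0, 0, 0, 1]
print(round(dice_score(pred, target), 3))  # 2*2 / (3+2) -> 0.8
```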