Please follow the instructions in INSTALL.md to install and use this repo.
For installation problems, please consult the issues in maskrcnn-benchmark.
This code is tested under Debian 11 with Python 3.9 and PyTorch 1.12.0.
The datasets used in the repository can be downloaded from the following links:
The datasets should be organized in the following structure:

```
datasets/
├── cityscapes
│   ├── annotations
│   ├── gtFine
│   └── leftImg8bit
└── foggy_cityscapes
    ├── annotations
    ├── gtFine
    └── leftImg8bit_foggy
```
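As a quick sanity check before training, a small helper like the following (hypothetical, not part of the repo) can verify that the expected sub-directories are in place:

```python
import os

# Expected layout from the tree above. The "annotations" directories are
# produced later by the conversion scripts, so they may still be missing
# at this stage.
EXPECTED = {
    "cityscapes": ["annotations", "gtFine", "leftImg8bit"],
    "foggy_cityscapes": ["annotations", "gtFine", "leftImg8bit_foggy"],
}

def check_layout(root="datasets"):
    """Return the list of expected sub-directories missing under `root`."""
    missing = []
    for dataset, subdirs in EXPECTED.items():
        for sub in subdirs:
            path = os.path.join(root, dataset, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing
```

Running `check_layout()` after downloading should return an empty list (or only the `annotations` paths, if conversion has not been run yet).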
The annotations should be converted into COCO format with convert_cityscapes_to_coco.py and convert_foggy_cityscapes_to_coco.py.
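For reference, COCO-format detection annotations are a single JSON file per split with top-level `images`, `annotations`, and `categories` lists. A minimal illustrative example (file name and values are placeholders, not output of the conversion scripts):

```python
import json

# Minimal COCO-style detection annotation file (illustrative values only).
coco = {
    "images": [
        {"id": 1, "file_name": "frankfurt_000000_000294_leftImg8bit.png",
         "width": 2048, "height": 1024},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels; "area" and "iscrowd"
        # are required by most COCO loaders.
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100.0, 200.0, 50.0, 80.0], "area": 4000.0, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "person"},
    ],
}

serialized = json.dumps(coco)
```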
For the experiments in our paper, we use the following script to run the Cityscapes→Foggy Cityscapes adaptation task:

```
python tools/train_net.py --config-file configs/da_faster_rcnn/e2e_da_faster_rcnn_R_50_FPN_masking_cs.yaml
```
The trained model can be evaluated with the following script:

```
python tools/test_net.py --config-file "configs/da_faster_rcnn/e2e_da_faster_rcnn_R_50_FPN_masking_cs.yaml" MODEL.WEIGHT <path_to_store_weight>/model_final.pth
```
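The trailing MODEL.WEIGHT pair follows the yacs convention used by maskrcnn-benchmark: any config key can be overridden on the command line as alternating KEY VALUE tokens, where dots in the key select nested config nodes. A simplified stand-in for that merge logic (not the actual yacs implementation):

```python
def merge_from_list(cfg, opts):
    """Apply alternating KEY VALUE overrides to a nested dict config."""
    assert len(opts) % 2 == 0, "opts must come in KEY VALUE pairs"
    for key, value in zip(opts[0::2], opts[1::2]):
        *parents, leaf = key.split(".")
        node = cfg
        for parent in parents:          # walk/create the nested nodes
            node = node.setdefault(parent, {})
        node[leaf] = value              # set the leaf value
    return cfg

cfg = {"MODEL": {"WEIGHT": ""}}
merge_from_list(cfg, ["MODEL.WEIGHT", "output/model_final.pth"])
```

This is why the evaluation command above needs no dedicated --weights flag: the override mechanism covers any key defined in the YAML config.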
Below, we provide the checkpoint of MIC(SADA) for Cityscapes→Foggy Cityscapes, which is used in the paper.
The most relevant files for MIC are:
- configs/da_faster_rcnn/e2e_da_faster_rcnn_R_50_FPN_masking_cs.yaml: Definition of the experiment configurations in our paper.
- tools/train_net.py: Training script for UDA with MIC(sa-da-faster).
- maskrcnn_benchmark/engine/trainer.py: Training process for UDA with MIC(sa-da-faster).
- maskrcnn_benchmark/modeling/masking.py: Implementation of MIC.
- maskrcnn_benchmark/modeling/teacher.py: Implementation of the EMA teacher.
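Conceptually, the masking step in MIC divides the target image into square patches and zeroes out a random subset, so the student must match the EMA teacher's predictions on the full image from the masked view. A simplified sketch of the patch masking (illustrative only, operating on a plain 2-D list instead of image tensors; see maskrcnn_benchmark/modeling/masking.py for the actual implementation):

```python
import random

def mask_patches(image, patch_size, mask_ratio, seed=None):
    """Zero out a random subset of patch_size x patch_size blocks.

    `image` is a 2-D list (H x W). Each patch is dropped independently
    with probability `mask_ratio`.
    """
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    masked = [row[:] for row in image]  # copy so the input stays intact
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            if rng.random() < mask_ratio:
                for yy in range(y, min(y + patch_size, h)):
                    for xx in range(x, min(x + patch_size, w)):
                        masked[yy][xx] = 0
    return masked
```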
MIC for object detection is based on the following open-source projects. We thank their authors for making the source code publicly available.