
MIC for Domain-Adaptive Object Detection

Getting started

Installation

Please follow the instructions in INSTALL.md to install and use this repo.

For installation problems, please consult the issues in maskrcnn-benchmark.

This code is tested under Debian 11 with Python 3.9 and PyTorch 1.12.0.
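Before installing the repo-specific dependencies, it can help to confirm that the environment matches the tested setup. The snippet below is only a minimal sanity check (standard library plus PyTorch); it does not replace the steps in INSTALL.md.

# check_env.py -- minimal sanity check for the tested setup (Python 3.9, PyTorch 1.12.0)
import platform

import torch

print(f"Python:  {platform.python_version()}")       # tested with 3.9
print(f"PyTorch: {torch.__version__}")                # tested with 1.12.0
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")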

Datasets

The Cityscapes and Foggy Cityscapes datasets used in this repository can be downloaded from the official Cityscapes website (https://www.cityscapes-dataset.com/).

The datasets should be organized in the following structure:

datasets/
├── cityscapes
│   ├── annotations
│   ├── gtFine
│   └── leftImg8bit
└── foggy_cityscapes
    ├── annotations
    ├── gtFine
    └── leftImg8bit_foggy

The annotations should be converted into COCO format with convert_cityscapes_to_coco.py and convert_foggy_cityscapes_to_coco.py.
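If the conversion succeeded, the resulting JSON files should load cleanly with pycocotools (a dependency of maskrcnn-benchmark). The sketch below is only a sanity check; the annotation file name is an assumption and should be replaced with whatever the conversion scripts actually produce.

# verify_coco.py -- sanity-check a converted annotation file with pycocotools
# NOTE: the file name below is an assumption; use the name produced by the conversion script.
from pycocotools.coco import COCO

ann_file = "datasets/cityscapes/annotations/instancesonly_filtered_gtFine_train.json"
coco = COCO(ann_file)

print(f"images:      {len(coco.getImgIds())}")
print(f"categories:  {[cat['name'] for cat in coco.loadCats(coco.getCatIds())]}")
print(f"annotations: {len(coco.getAnnIds())}")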

Training

For the experiments in our paper, we use the following script to run the Cityscapes to Foggy Cityscapes adaptation task:

python tools/train_net.py --config-file configs/da_faster_rcnn/e2e_da_faster_rcnn_R_50_FPN_masking_cs.yaml
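As in standard maskrcnn-benchmark, additional config options can be appended to the command as KEY VALUE pairs (the test command below uses this for MODEL.WEIGHT). Assuming this repo keeps the default OUTPUT_DIR key, the directory for checkpoints and logs can be redirected, for example:

python tools/train_net.py --config-file configs/da_faster_rcnn/e2e_da_faster_rcnn_R_50_FPN_masking_cs.yaml OUTPUT_DIR <path_to_store_weight>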

Testing

The trained model can be evaluated with the following script:

python tools/test_net.py --config-file "configs/da_faster_rcnn/e2e_da_faster_rcnn_R_50_FPN_masking_cs.yaml" MODEL.WEIGHT <path_to_store_weight>/model_final.pth

Checkpoints

Below, we provide the checkpoint of MIC (SADA) for Cityscapes→Foggy Cityscapes, which is used in the paper.
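Once downloaded, the checkpoint can be inspected with plain PyTorch before passing it to the test command. The sketch below assumes the usual maskrcnn-benchmark layout, where the network weights sit under a "model" key; adjust the key if the layout differs.

# inspect_checkpoint.py -- peek into a downloaded checkpoint before evaluation
# NOTE: the "model" key is an assumption based on the usual maskrcnn-benchmark checkpoint layout.
import torch

ckpt = torch.load("model_final.pth", map_location="cpu")
print(f"top-level keys: {list(ckpt.keys())}")

state_dict = ckpt.get("model", ckpt)  # fall back to treating the file as a raw state dict
print(f"parameter tensors: {len(state_dict)}")
for name, value in list(state_dict.items())[:5]:  # show a few entries
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(name, shape)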

Where to find MIC in the code?

The most relevant files for MIC are:

Acknowledgements

MIC for object detection is based on the following open-source projects. We thank their authors for making the source code publicly available.