Instance-Warp: Saliency Guided Image Warping for Unsupervised Domain Adaptation
Shen Zheng, Anurag Ghosh, Srinivasa Narasimhan
In WACV 2025
See Night-Object-Detection/README.md
Clone this repo recursively:

```shell
git clone --recurse-submodules https://github.com/ShenZheng2000/Instance-Warp
```
To train semantic segmentation models with our instance-level warping technique, you need to download the JSON file that contains the bounding boxes. The download link is [here].
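The schema of the bounding-box JSON is not documented in this section; as an illustration only, assuming it maps image filenames to lists of `[x1, y1, x2, y2]` boxes (filename and layout below are hypothetical), it could be loaded like this:

```python
import json
import os
import tempfile

# Hypothetical schema: image filename -> list of [x1, y1, x2, y2] boxes.
# The actual JSON shipped with the repo may be structured differently.
sample = {
    "aachen_000000_000019_leftImg8bit.png": [[10, 20, 110, 220], [300, 40, 360, 90]],
}
path = os.path.join(tempfile.mkdtemp(), "bboxes.json")
with open(path, "w") as f:
    json.dump(sample, f)

# Load the boxes back, as a training pipeline would.
with open(path) as f:
    boxes = json.load(f)

first = boxes["aachen_000000_000019_leftImg8bit.png"][0]
box_w = first[2] - first[0]  # 110 - 10
box_h = first[3] - first[1]  # 220 - 20
```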
Folder Preparation

Run the following in the terminal to create a directory where logs from `nohup` can be saved:

```shell
mkdir outs
```
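A typical background launch that writes its log into `outs/` might look like the following; `echo` stands in for the actual training command, which depends on the experiment you run:

```shell
# Create the log directory (idempotent).
mkdir -p outs

# Run a job under nohup so it survives the terminal closing;
# stdout/stderr are redirected into outs/. Replace the echo with
# your real training command.
nohup echo "training started" > outs/run.log 2>&1

cat outs/run.log
```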
Symbolic Link

Assuming your current folder's ABSOLUTE path is XXX, run the following to create symbolic links to the warping code, which is required by the semantic segmentation models (DAFormer and MIC).

NOTE: Before moving on to semantic segmentation, make sure the detection code runs correctly.

```shell
ln -s XXX/Night-Object-Detection/twophase/data/transforms XXX/MIC/seg/mmseg/transforms
ln -s XXX/Night-Object-Detection/twophase/data/transforms XXX/DAFormer/mmseg/transforms
```
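To sanity-check the links before training, you can reproduce the layout with a throwaway root (here `/tmp/instancewarp_demo` stands in for XXX) and confirm each link resolves:

```shell
# Illustration only: a temporary root stands in for your absolute path XXX.
ROOT=/tmp/instancewarp_demo
rm -rf "$ROOT"
mkdir -p "$ROOT/Night-Object-Detection/twophase/data/transforms"
mkdir -p "$ROOT/MIC/seg/mmseg" "$ROOT/DAFormer/mmseg"

# Same link commands as above, against the demo root.
ln -s "$ROOT/Night-Object-Detection/twophase/data/transforms" "$ROOT/MIC/seg/mmseg/transforms"
ln -s "$ROOT/Night-Object-Detection/twophase/data/transforms" "$ROOT/DAFormer/mmseg/transforms"

# Each link should print the path of the warping code.
readlink "$ROOT/MIC/seg/mmseg/transforms"
readlink "$ROOT/DAFormer/mmseg/transforms"
```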
Build Environment

This project uses Python 3.7. We recommend setting up a new virtual environment:

```shell
conda create -n CMDA python=3.7
conda activate CMDA
```
Install Packages

In that environment, the requirements can be installed with:

```shell
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
pip install kornia==0.5.8
pip install -U openmim
mim install mmcv-full==1.3.7
```
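The pinned versions above (`kornia==0.5.8`, `mmcv-full==1.3.7`) matter for reproducibility. As a small illustration (helper names are ours, not part of the repo), version strings can be compared as integer tuples rather than as strings, which avoids pitfalls like `"1.2.10" < "1.3.7"` failing lexicographically:

```python
def parse_version(v):
    """Turn a dotted version string like '1.3.7' into (1, 3, 7)."""
    return tuple(int(part) for part in v.split("."))

def meets_minimum(installed, minimum):
    """True if the installed version is at least the required minimum."""
    return parse_version(installed) >= parse_version(minimum)
```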
Download Weights
Further, please download the MiT weights from here.
All experiments were executed on an NVIDIA RTX 4090 Ti.
Dataset Preparations
Cityscapes: Please download leftImg8bit_trainvaltest.zip and gt_trainvaltest.zip from here and extract them to $data_path/cityscapes.
Foggy Cityscapes: Please download leftImg8bit_trainvaltest_foggy.zip and gt_trainvaltest.zip from here and extract them to $data_path/foggy_cityscapes.
GTA: Please download all image and label packages from here and extract them to $data_path/gta.
Synthia: Please download SYNTHIA-RAND-CITYSCAPES from here and extract it to $data_path/synthia.
ACDC: Please download rgb_anon_trainvaltest.zip and gt_trainval.zip from here and extract them to $data_path/acdc. Further, please restructure the folders from condition/split/sequence/ to split/ using the following commands:

```shell
cd $data_path
rsync -a acdc/rgb_anon/*/train/*/* acdc/rgb_anon/train/
rsync -a acdc/rgb_anon/*/val/*/* acdc/rgb_anon/val/
rsync -a acdc/gt/*/train/*/*_labelTrainIds.png acdc/gt/train/
rsync -a acdc/gt/*/val/*/*_labelTrainIds.png acdc/gt/val/
```
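The rsync commands above flatten the condition/split/sequence/ hierarchy into a single split/ directory. If rsync is unavailable, the same restructuring can be sketched in Python (the function name is ours; this mirrors, not replaces, the repo's instructions):

```python
import glob
import os
import shutil
import tempfile

def flatten_acdc(data_path, subdir="rgb_anon", split="train", pattern="*"):
    """Copy acdc/<subdir>/<condition>/<split>/<sequence>/<file> into a flat
    acdc/<subdir>/<split>/ directory, like `rsync -a .../*/train/*/*`."""
    dest = os.path.join(data_path, "acdc", subdir, split)
    os.makedirs(dest, exist_ok=True)
    src_glob = os.path.join(data_path, "acdc", subdir, "*", split, "*", pattern)
    for src in glob.glob(src_glob):
        shutil.copy2(src, dest)
    return sorted(os.listdir(dest))

# Demo on a throwaway tree with one condition/sequence and one image.
root = tempfile.mkdtemp()
seq = os.path.join(root, "acdc", "rgb_anon", "fog", "train", "seq1")
os.makedirs(seq)
open(os.path.join(seq, "img1.png"), "w").close()
flat = flatten_acdc(root)
```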
Dark Zurich: Please download Dark_Zurich_train_anon.zip and Dark_Zurich_val_anon.zip from here and extract them to $data_path/dark_zurich.
Dataset Folder Structures
```
DAFormer
├── ...
├── $data_path
│   ├── acdc
│   │   ├── gt
│   │   │   ├── train
│   │   │   ├── val
│   │   ├── rgb_anon
│   │   │   ├── train
│   │   │   ├── val
│   ├── cityscapes
│   │   ├── leftImg8bit
│   │   │   ├── train
│   │   │   ├── val
│   │   ├── gtFine
│   │   │   ├── train
│   │   │   ├── val
│   ├── dark_zurich
│   │   ├── gt
│   │   │   ├── val
│   │   ├── rgb_anon
│   │   │   ├── train
│   │   │   ├── val
│   ├── gta
│   │   ├── images
│   │   ├── labels
│   ├── synthia
│   │   ├── RGB
│   │   ├── GT
│   │   │   ├── LABELS
│   ├── foggy_cityscapes
│   │   ├── leftImg8bit_foggy
│   │   │   ├── train
│   │   │   ├── val
│   │   ├── gtFine
│   │   │   ├── train
│   │   │   ├── val
```
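Before launching training, it can save time to check that the expected directories exist under $data_path. A minimal sketch (helper and the subset of required paths are ours; extend `REQUIRED` for the datasets you actually use):

```python
import os
import tempfile

# Subset of the layout shown above; add acdc/gta/synthia/... as needed.
REQUIRED = [
    "cityscapes/leftImg8bit/train",
    "cityscapes/leftImg8bit/val",
    "cityscapes/gtFine/train",
    "cityscapes/gtFine/val",
]

def missing_dirs(data_path, required=REQUIRED):
    """Return the required sub-directories that are absent under data_path."""
    return [d for d in required if not os.path.isdir(os.path.join(data_path, d))]

# Demo: a fully populated root reports nothing missing; an empty one
# reports every required directory.
root = tempfile.mkdtemp()
for d in REQUIRED:
    os.makedirs(os.path.join(root, d))
ok = missing_dirs(root)
missing = missing_dirs(tempfile.mkdtemp())
```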
Data Preprocessing
Finally, please run the following scripts to convert the label IDs to train IDs and to generate the class index for RCS (Rare Class Sampling):

```shell
python tools/convert_datasets/gta.py $data_path/gta --nproc 8
python tools/convert_datasets/cityscapes.py $data_path/cityscapes --nproc 8
python tools/convert_datasets/synthia.py $data_path/synthia/ --nproc 8
```
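The conversion scripts remap raw Cityscapes label IDs onto the 19 train IDs, with everything else sent to the ignore index 255. A few well-known entries of the standard mapping illustrate the idea (this is a hand-picked subset for clarity, not the full table the scripts use):

```python
# Illustrative subset of the standard Cityscapes labelId -> trainId mapping.
ID_TO_TRAINID = {
    7: 0,    # road
    8: 1,    # sidewalk
    11: 2,   # building
    19: 6,   # traffic light
    24: 11,  # person
    26: 13,  # car
}
IGNORE_INDEX = 255

def to_train_id(label_id):
    # Classes outside the 19 evaluated ones map to the ignore index.
    return ID_TO_TRAINID.get(label_id, IGNORE_INDEX)
```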
Models can be tested after training has finished:

```shell
sh test.sh path/to/checkpoint_directory
```
The results for Cityscapes→ACDC and Cityscapes→DarkZurich are reported on the test split of the target dataset. To obtain scores on the test set:

- Generate test set predictions:

  ```shell
  bash test_test.sh path/to/checkpoint_directory
  ```

- Submit the predictions to the ACDC or DarkZurich public evaluation server to obtain the scores.
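Evaluation servers typically expect the prediction PNGs bundled into a single archive; the exact layout each server requires is not specified here, so check its submission page. A hedged sketch of packing a directory of PNGs (function name and file names are ours):

```python
import os
import tempfile
import zipfile

def pack_predictions(pred_dir, zip_path):
    """Bundle every PNG in pred_dir into one zip for upload.
    The archive layout a given server expects may differ; verify first."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(pred_dir)):
            if name.endswith(".png"):
                zf.write(os.path.join(pred_dir, name), arcname=name)
    with zipfile.ZipFile(zip_path) as zf:
        return zf.namelist()

# Demo with dummy prediction files; non-PNG files are skipped.
pred_dir = tempfile.mkdtemp()
for name in ["frame_0001.png", "frame_0002.png", "notes.txt"]:
    open(os.path.join(pred_dir, name), "w").close()
names = pack_predictions(pred_dir, os.path.join(pred_dir, "submission.zip"))
```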
See the utility scripts [here].