
D4: Self-augmented Unpaired Image Dehazing via Density and Depth Decomposition

This is the PyTorch implementation of the paper 'Self-augmented Unpaired Image Dehazing via Density and Depth Decomposition', accepted at CVPR 2022.

Introduction

In this paper, we propose a self-augmented image dehazing framework, termed D4 (Dehazing via Decomposing transmission map into Density and Depth), for haze generation and removal. Instead of merely estimating transmission maps or clean content, the proposed framework focuses on exploring the scattering coefficient and depth information contained in hazy and clean images. With the estimated scene depth, our method can re-render hazy images of different thicknesses, which further benefits the training of the dehazing network.

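The key physical relation behind D4 is the atmospheric scattering model, in which the transmission map factors into a scattering coefficient (haze density) and scene depth. The snippet below is a minimal NumPy sketch of that model only, not code from this repository; it illustrates how, given depth, hazy images of different thicknesses can be re-rendered from a clean image by varying the scattering coefficient beta.

import numpy as np

def render_haze(clean, depth, beta, A=1.0):
    """Synthesize a hazy image via the atmospheric scattering model:
    I = J * t + A * (1 - t), with transmission t = exp(-beta * d).

    clean: HxWx3 float array in [0, 1] (haze-free image J)
    depth: HxW float array of scene depth d
    beta:  scalar scattering coefficient (haze density)
    A:     global atmospheric light
    """
    t = np.exp(-beta * depth)[..., None]  # transmission map t(x), shape HxWx1
    return clean * t + A * (1.0 - t)

# Re-rendering the same scene with different haze densities:
# hazy_thin  = render_haze(J, d, beta=0.5)
# hazy_thick = render_haze(J, d, beta=1.5)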

Prerequisites

  • Python 3.7
  • PyTorch 1.10
  • NVIDIA GPU + CUDA cuDNN

Installation

  • Clone this repo:

    git clone https://github.com/YaN9-Y/D4
    cd D4-master

  • Install PyTorch
  • Install Python requirements:

    pip install -r requirements.txt

Datasets

1. Testing

We used SOTS-indoor, SOTS-outdoor and I-HAZE for testing.

After downloading a dataset, use scripts/flist.py to generate the file lists. For example, to generate the hazy-image file list for the SOTS-indoor test set, run:

python scripts/flist.py --path path_to_SOTS_indoor_hazy_path --output ./datasets/sots_test_hazy_indoor.flist

Then fill in the path of the ground-truth images in the config file.

Note that the ground-truth images of SOTS-indoor have an additional white border; crop it off before testing.
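If you need to do this in bulk, the sketch below uses Pillow to crop a fixed-width border from every ground-truth image. The 10-pixel border width is an assumption, not something specified by this repository, so verify it against your copy of SOTS-indoor first.

from pathlib import Path
from PIL import Image

BORDER = 10  # assumed border width in pixels; verify against your data

def crop_border(src_dir, dst_dir, border=BORDER):
    """Crop a fixed-width border from every PNG in src_dir into dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for p in Path(src_dir).glob("*.png"):
        img = Image.open(p)
        w, h = img.size
        img.crop((border, border, w - border, h - border)).save(dst / p.name)

# Example: crop_border("SOTS/indoor/gt", "SOTS/indoor/gt_cropped")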

2. Training

For training, we used the ITS dataset; follow the operations above to generate the training file lists:

python scripts/flist.py --path ITS_train_hazy_path --output ./datasets/its_train_hazy.flist
python scripts/flist.py --path ITS_train_gt_path --output ./datasets/its_train_gt.flist
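For reference, a file list is simply a text file with one image path per line. The following is a minimal stand-in, not the repository's actual scripts/flist.py, showing how such a .flist can be produced under the assumption that the directory contains .png/.jpg images:

import argparse
from pathlib import Path

# Minimal stand-in: write one absolute image path per line to a .flist file.
parser = argparse.ArgumentParser()
parser.add_argument("--path", required=True, help="directory of images")
parser.add_argument("--output", required=True, help="output .flist file")
args = parser.parse_args()

paths = sorted(
    str(p.resolve())
    for p in Path(args.path).rglob("*")
    if p.suffix.lower() in {".png", ".jpg", ".jpeg"}
)
Path(args.output).write_text("\n".join(paths) + "\n")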

Getting Started

To use the pre-trained models, download them from the link below, then copy them into the corresponding checkpoints folder, e.g. ./checkpoints/quick_test:

Pretrained model

0. Quick Testing

For a quick test of our dehazing model, download our pre-trained model, put it into checkpoints/quick_test, and run:

python3 test.py --model 1 --checkpoints ./checkpoints/quick_test

then check the results in checkpoints/quick_test/results.

If you want to see the depth estimation and haze generation results, change the TEST_MODE option in the config file from pair_test to clean, then run the same command.

1. Training

1) Prepare the training dataset following the operations in the Datasets section.
2) Add a config file config.yml to your checkpoints folder. We provide an example checkpoints folder and config file in ./checkpoints/train_example.
3) Train the model, for example:

python train.py --model 1 --checkpoints ./checkpoints/train_example

2. Testing

1) Prepare the testing datasets following the operations in the Datasets section.
2) Put the trained weights into the checkpoints folder.
3) Add a config file config.yml to your checkpoints folder. We provide an example checkpoints folder and config file in ./checkpoints/test_example.
4) Test the model, for example:

python test.py --model 1 --checkpoints ./checkpoints/test_example

Limitation

We found that our model is sensitive to the training data. Training may be unstable on images with wide variation in depth.

Citation

If you find our work useful, please cite:

@inproceedings{yang2022self,
  title={Self-augmented Unpaired Image Dehazing via Density and Depth Decomposition},
  author={Yang, Yang and Wang, Chaoyue and Liu, Risheng and Zhang, Lin and Guo, Xiaojie and Tao, Dacheng},
  booktitle={CVPR},
  year={2022}
}
