Teaching Tailored to Talent: Adverse Weather Restoration via Prompt Pool and Depth-Anything Constraint (ECCV'2024)

Sixiang Chen1    Tian Ye1    Kai Zhang1    Zhaohu Xing1    Yunlong Lin2    Lei Zhu1,3 ✉️   

1The Hong Kong University of Science and Technology (Guangzhou)    2Xiamen University   
3The Hong Kong University of Science and Technology   

European Conference on Computer Vision (ECCV), 2024, MiCo Milano



🔥 News

  • 2024.09.25: ✅ Released our code, pretrained models, and visual results; welcome to test the performance.
  • 2024.09.24: ✅ Released our manuscript.
  • 2024.07.26: This repo was created.

Abstract

Recent advancements in adverse weather restoration have shown potential, yet the unpredictable and varied combinations of weather degradations in the real world pose significant challenges. Previous methods typically struggle to dynamically handle intricate degradation combinations and to reconstruct the background precisely, leading to performance and generalization limitations. Drawing inspiration from prompt learning and the "Teaching Tailored to Talent" concept, we introduce a novel pipeline, T3-DiffWeather. Specifically, we employ a prompt pool that allows the network to autonomously combine sub-prompts into weather-prompts, harnessing the necessary attributes to adaptively tackle unforeseen weather inputs. Moreover, from a scene-modeling perspective, we incorporate general prompts constrained by Depth-Anything features to provide scene-specific conditions for the diffusion process. Furthermore, by incorporating a contrastive prompt loss, we ensure distinctive representations for both types of prompts through a mutual pushing strategy. Experimental results demonstrate that our method achieves state-of-the-art performance across various synthetic and real-world datasets, markedly outperforming existing diffusion techniques in computational efficiency.

Overview


Figure 3. Overview of the proposed method. (a) showcases our pipeline, which adopts an innovative strategy focused on learning the degradation residual and employs an information-rich condition to guide the diffusion process. (b) illustrates the use of our prompt pool, which empowers the network to autonomously select the attributes needed to construct adaptive weather-prompts. (c) depicts the general prompts directed by the Depth-Anything constraint to supply scene information that aids in reconstructing residuals. (d) shows the contrastive prompt loss, which constrains the two types of prompts according to their two distinct motivations and enhances their representations.
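For intuition, the snippet below is a minimal PyTorch sketch of the two prompt ideas in (b) and (d): a pool of learnable sub-prompts that are softly combined into a weather-prompt according to the degraded-input feature, and a simple margin loss that pushes the two prompt types apart. All names and dimensions (`PromptPool`, `contrastive_prompt_push`, etc.) are illustrative assumptions, not the code released in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    """Illustrative prompt pool: learnable sub-prompts are matched against
    the degraded-input feature and softly combined into a weather-prompt."""

    def __init__(self, num_prompts=16, prompt_dim=64, feat_dim=64):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, prompt_dim))  # sub-prompt attributes
        self.keys = nn.Parameter(torch.randn(num_prompts, feat_dim))       # matching keys

    def forward(self, feat):
        # feat: (B, C, H, W) feature of the degraded input, with C == feat_dim.
        query = feat.mean(dim=(2, 3))                       # (B, feat_dim)
        weights = F.softmax(query @ self.keys.t(), dim=-1)  # (B, num_prompts)
        weather_prompt = weights @ self.prompts             # (B, prompt_dim)
        return weather_prompt, weights

def contrastive_prompt_push(weather_prompt, general_prompt, margin=1.0):
    """Toy 'mutual pushing' objective: penalize the two prompt types
    when their representations come closer than a margin."""
    dist = F.pairwise_distance(weather_prompt, general_prompt)
    return F.relu(margin - dist).mean()

if __name__ == "__main__":
    pool = PromptPool()
    feat = torch.randn(2, 64, 32, 32)
    weather_prompt, weights = pool(feat)
    general_prompt = torch.randn(2, 64)  # stand-in for the Depth-Anything-guided general prompt
    print(weather_prompt.shape, contrastive_prompt_push(weather_prompt, general_prompt))
```

In the full method, the weather-prompt and the Depth-Anything-constrained general prompt together form the condition that guides the diffusion process toward the degradation residual.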

Visual Comparisons

Synthetic (click to expand)
Real (click to expand)

Results

Adverse Weather Restoration (click to expand)
Other Real Datasets (click to expand)
Parameters and GFLOPs (click to expand)

Installation

😆 Our T3-DiffWeather is built with PyTorch 2.0.1; we train and test it in an Ubuntu 20.04 environment (Python 3.8+, CUDA 11.6).

To install, please follow these instructions:

```
conda create -n py38 python=3.8.16
conda activate py38
pip3 install torch torchvision torchaudio
pip3 install -r requirements.txt
```
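Optionally, a quick sanity check that the environment works; note that the unpinned pip command above installs the latest PyTorch, so the printed version may differ from 2.0.1 unless you pin `torch==2.0.1`:

```python
import torch

print(torch.__version__)          # the experiments in this repo use 2.0.1
print(torch.cuda.is_available())  # should be True on a CUDA-enabled machine
```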

Dataset

📂 We train our model on the mixed adverse weather data and evaluate it on Raindrop, Rainhaze (Test1), and Snow100K. The download links of the datasets are provided below.

|      | Adverse Weather | Raindrop | Test1 | Snow100K |
| ---- | --------------- | -------- | ----- | -------- |
| Link | Download        | Download | Download | Download |
| Code | nxpx            | ju5a     | ifbm  | sacv     |

Visual Results

|              | Raindrop | Test1    | Snow100K-S | Snow100K-L |
| ------------ | -------- | -------- | ---------- | ---------- |
| Google Drive | Download | Download | Download   | Download   |

Quick Run

🙌 To test the demo on your own images or real samples, first set the test data path in val_data_dir under configs/allweather_demo.yml, and put the pretrained model into the pretrained folder. Then run the following command:

```
python test_diffuser_demo.py
```

The results will be written to the save path 'save_images_test_Demo'.

Benchmark Test

🙌 We provide a test script to evaluate the released weights on the benchmarks. First set the test data path in val_data_dir under configs/allweather_{benchmark}.yml, and put the pretrained model into the pretrained folder. Then run the following command to measure PSNR and SSIM:

```
python test_diffuser_paired.py --config configs/allweather_{benchmark}.yaml
```

The results will be written to the save path 'save_images_test_{benchmark}'.
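For reference, PSNR and SSIM are standard full-reference image-quality metrics; the sketch below shows how they are commonly computed with scikit-image (>= 0.19) on saved result/ground-truth pairs. The function and paths are placeholders and not necessarily how test_diffuser_paired.py evaluates internally (some restoration papers compute the metrics on the Y channel rather than RGB).

```python
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(restored_path, gt_path):
    """Compute PSNR/SSIM between a restored image and its ground truth."""
    restored = np.array(Image.open(restored_path).convert("RGB"))
    gt = np.array(Image.open(gt_path).convert("RGB"))
    psnr = peak_signal_noise_ratio(gt, restored, data_range=255)
    ssim = structural_similarity(gt, restored, data_range=255, channel_axis=-1)
    return psnr, ssim
```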

Training Stage

😋 Our training process is built on pytorch_lightning rather than a conventional torch training loop. To train T3-DiffWeather on the Allweather benchmarks, first set the training data path in data_dir and the test data path in val_data_dir under configs/allweather_{benchmark}.yml. For example, to train our model with Test1 as the testing benchmark, run:

```
python train_diffusion_pl.py --config configs/allweather_{benchmark}.yaml
```

The logs and checkpoints are saved in 'logs'.
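Since training is organized with pytorch_lightning, the script follows the usual LightningModule/Trainer pattern sketched below. The class, loss, and hyperparameters here are illustrative placeholders, not the repository's actual implementation.

```python
import pytorch_lightning as pl
import torch
import torch.nn.functional as F

class LitRestoration(pl.LightningModule):
    """Skeleton of the pytorch_lightning training pattern (illustrative only)."""

    def __init__(self, model, lr=2e-4):
        super().__init__()
        self.model = model
        self.lr = lr

    def training_step(self, batch, batch_idx):
        degraded, clean = batch
        loss = F.l1_loss(self.model(degraded), clean)  # stand-in for the actual diffusion objective
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

# Typical launch (the model and dataloaders come from the config-driven code):
# trainer = pl.Trainer(accelerator="gpu", devices=1, max_epochs=100, default_root_dir="logs")
# trainer.fit(LitRestoration(model), train_dataloader, val_dataloader)
```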

Citation

```
@InProceedings{chen2024teaching,
    title        = {Teaching Tailored to Talent: Adverse Weather Restoration via Prompt Pool and Depth-Anything Constraint},
    author       = {Chen, Sixiang and Ye, Tian and Zhang, Kai and Xing, Zhaohu and Lin, Yunlong and Zhu, Lei},
    booktitle    = {European Conference on Computer Vision},
    year         = {2024},
    organization = {Springer}
}
```

Contact

If you have any questions, please contact us at [email protected] or [email protected].