This repository is the official PyTorch implementation of MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment.
---

> *No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual quality of images in accordance with human subjective perception. Unfortunately, existing NR-IQA methods are far from meeting the need for accurate quality predictions on GAN-based distorted images. To this end, we propose the Multi-dimension Attention Network for no-reference Image Quality Assessment (MANIQA) to improve performance on GAN-based distortions. We first extract features via ViT; then, to strengthen global and local interactions, we propose the Transposed Attention Block (TAB) and the Scale Swin Transformer Block (SSTB). These two modules apply attention mechanisms across the channel and spatial dimensions, respectively. In this multi-dimensional manner, the modules cooperatively increase the interaction among different regions of the image, both globally and locally. Finally, a dual-branch structure for patch-weighted quality prediction is applied, predicting the final score from each patch's score weighted by its learned importance. Experimental results demonstrate that MANIQA outperforms state-of-the-art methods on four standard datasets (LIVE, TID2013, CSIQ, and KADID-10K) by a large margin. Moreover, our method took first place in the final testing phase of the NTIRE 2022 Perceptual Image Quality Assessment Challenge Track 2: No-Reference.*
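To make the last step concrete: the dual branch emits a score and a weight for each patch, and the image score is their weighted average. Below is a minimal sketch of this aggregation, not the repository's exact prediction head (the sigmoid normalization is an assumption):
```
import torch

def patch_weighted_score(scores: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # scores, weights: (B, N) outputs of the score and weight branches
    # for N patches. Final score = sum_i(w_i * s_i) / sum_i(w_i).
    weights = torch.sigmoid(weights)  # keep weights positive (assumption)
    return (weights * scores).sum(-1) / weights.sum(-1).clamp(min=1e-8)
```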


[![pretrained model](https://img.shields.io/badge/Model-PIPAL22_checkpoint-yellow.svg)](https://github.com/IIGROUP/MANIQA/releases/tag/PIPAL22-VALID-CKPT)

---

## Network Architecture
![image.png](image/pipeline.png)
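As a rough illustration of the transposed (channel-wise) attention idea behind the TAB, here is a simplified PyTorch sketch under our own naming, not the repository's actual block:
```
import torch
import torch.nn as nn

class TransposedAttention(nn.Module):
    # Simplified sketch: attention is computed over channels (C x C map)
    # instead of over tokens (N x N map), so every channel aggregates
    # information from all spatial positions.
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (B, N, C) ViT tokens
        q, k, v = self.qkv(x).chunk(3, dim=-1)   # each (B, N, C)
        attn = (q.transpose(1, 2) @ k) / x.shape[1] ** 0.5  # (B, C, C)
        attn = attn.softmax(dim=-1)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)    # back to (B, N, C)
        return self.proj(out)
```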

## Dataset
The [PIPAL22](https://codalab.lisn.upsaclay.fr/competitions/1568#participate-get_data) dataset, used in the NTIRE 2022 competition, serves as our training set, and we validate our model on [PIPAL21](https://competitions.codalab.org/competitions/28050#participate).
We also conducted experiments on the [LIVE](https://live.ece.utexas.edu/research/Quality/subjective.htm), [CSIQ](https://qualinet.github.io/databases/image/categorical_image_quality_csiq_database/), [TID2013](https://qualinet.github.io/databases/image/tampere_image_database_tid2013/), and [KADID-10K](http://database.mmsp-kn.de/kadid-10k-database.html) datasets.

**Attention:**
- Put the MOS labels and the data Python files into the **data** folder.
- The validation dataset comes from NTIRE 2021. To reproduce the results on the NTIRE 2022 NR-IQA validation or test set, register for the competition and upload submission.zip following the instructions on the [website](https://codalab.lisn.upsaclay.fr/competitions/1568#participate).

## Checkpoints
| Training Set | Testing Set | Pretrained Model of MANIQA |
| :---: | :---: | :---: |
| [PIPAL2022](https://codalab.lisn.upsaclay.fr/competitions/1568#participate-get_data) dataset (200 reference images, 23,200 distorted images, an MOS score for each distorted image) | [PIPAL2022](https://codalab.lisn.upsaclay.fr/competitions/1568#participate-get_data) validation set (1,650 distorted images) | [![pretrained model](https://img.shields.io/badge/Model-PIPAL22_checkpoint-yellow.svg)](https://github.com/IIGROUP/MANIQA/releases/tag/PIPAL22-VALID-CKPT) |
| [KADID-10K](http://database.mmsp-kn.de/kadid-10k-database.html) dataset (81 reference images, 10,125 distorted images; 8,000 distorted images for training) | [KADID-10K](http://database.mmsp-kn.de/kadid-10k-database.html) dataset (2,125 distorted images for testing) | [![pretrained model](https://img.shields.io/badge/Model-KADID--10K_checkpoint-yellow.svg)](https://github.com/IIGROUP/MANIQA/releases/tag/PIPAL22-VALID-CKPT) |

## Usage
### Training MANIQA model
- Modify "dataset_name" in the config.
- Modify the training dataset path: "train_dis_path" (the PIPAL21 training set is the same as PIPAL22's).
- Modify the validation dataset path: "val_dis_path" (the PIPAL21 validation set).
```
python train_maniqa.py
```
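For orientation, the fields named above might look like the following; the values are placeholders, and the real layout of the config may differ:
```
# Hypothetical config values; the field names follow the bullets above.
dataset_name = "pipal"
train_dis_path = "/path/to/PIPAL22/Train_Dis/"  # PIPAL21/22 training images
val_dis_path = "/path/to/PIPAL21/Val_Dis/"      # PIPAL21 validation images
```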
### Predicting one image quality score
- Modify the image path: "image_path"
- Modify the checkpoint path: "ckpt_path"
```
python predict_one_image.py
```
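Internally this follows the usual PyTorch inference pattern; a rough sketch is below, where the checkpoint format and preprocessing are assumptions (see predict_one_image.py for the real details):
```
import torch
from PIL import Image
from torchvision import transforms

# Assumed loading scheme; the script may instead build the model and
# call load_state_dict on the checkpoint.
model = torch.load("ckpt_path.pt", map_location="cpu")
model.eval()

img = Image.open("image_path.png").convert("RGB")
x = transforms.ToTensor()(img).unsqueeze(0)  # (1, 3, H, W)

with torch.no_grad():
    score = model(x)
print(f"Predicted quality score: {score.item():.4f}")
```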
### Inference for [PIPAL22](https://codalab.lisn.upsaclay.fr/competitions/1568#participate-get_data) validation and testing
Generating the output file:
- Modify the dataset path: "test_dis_path"
- Modify the trained model path: "model_path"
```
python inference.py
```
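The output file typically holds one line per distorted image. A hedged sketch of the core loop (the loader contract and the "filename,score" line format are assumptions):
```
import torch

def write_submission(model, test_loader, out_path="output.txt"):
    # test_loader is assumed to yield (filename, image_tensor) pairs.
    model.eval()
    with open(out_path, "w") as f, torch.no_grad():
        for name, img in test_loader:
            score = model(img.unsqueeze(0)).item()
            f.write(f"{name},{score:.4f}\n")
```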

## Results
![image.png](image/results.png)

```
pip install -r requirements.txt
```

## Acknowledgment
Our code is partially borrowed from [anse3832](https://github.com/anse3832/MUSIQ) and [timm](https://github.com/rwightman/pytorch-image-models). Thanks also to the [SwinIR](https://github.com/JingyunLiang/SwinIR) README, after which this file is modeled.

## Related Work
### NTIRE2021 IQA Full-Reference Competition
