To retrain the DPM (distortion perception model), either follow the steps below or download the author's pre-trained weights directly.
- Generate the distorted images; the relevant instructions are given below.
- Train the distortion perception model: make sure the dataset path is set correctly, then run:
bash train_dpm.sh
- After training completes, move the best pre-trained weights to the 'pretrained_model' folder; a helper sketch for this step is given below.
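A small helper like the following can locate the best checkpoint and copy it into place. This is a hypothetical sketch: the 'checkpoints' directory and the 'dpm_epoch*_srcc*.pth' naming pattern are assumptions, not necessarily how this repository saves its outputs.

```python
# Hypothetical helper: copy the best DPM checkpoint into 'pretrained_model'.
# The checkpoint directory and file-naming pattern below are assumptions.
import glob
import os
import re
import shutil

def collect_best_checkpoint(ckpt_dir="checkpoints", dst_dir="pretrained_model"):
    best_path, best_srcc = None, -1.0
    for path in glob.glob(os.path.join(ckpt_dir, "dpm_epoch*_srcc*.pth")):
        match = re.search(r"srcc([0-9.]+)\.pth$", path)
        if match and float(match.group(1)) > best_srcc:
            best_srcc, best_path = float(match.group(1)), path
    if best_path is None:
        raise FileNotFoundError(f"no DPM checkpoints found in {ckpt_dir}")
    os.makedirs(dst_dir, exist_ok=True)
    shutil.copy(best_path, os.path.join(dst_dir, "dpm_best.pth"))
    print(f"copied {best_path} (SRCC={best_srcc:.4f}) to {dst_dir}/")

if __name__ == "__main__":
    collect_best_checkpoint()
```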
Next, evaluate the proposed model on IQA datasets using the following steps:
- For a single-dataset test, please refer to the configs.py file for additional parameters. Then, execute the following command:
python train.py
- For a cross-dataset test, please refer to the configs.py file for additional parameters. Then, execute the following command:
bash train_cross_dataset.sh
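Both tests are conventionally scored with the Spearman (SRCC) and Pearson (PLCC) correlations between predicted scores and ground-truth MOS values. The snippet below is a minimal, repository-independent sketch of that computation; the function name iqa_metrics is illustrative.

```python
# Minimal sketch of the standard IQA evaluation metrics (SRCC and PLCC).
# Generic and independent of this repository's evaluation code.
import numpy as np
from scipy import stats

def iqa_metrics(pred, mos):
    """Return (SRCC, PLCC) between predicted scores and ground-truth MOS."""
    pred, mos = np.asarray(pred), np.asarray(mos)
    srcc, _ = stats.spearmanr(pred, mos)
    plcc, _ = stats.pearsonr(pred, mos)
    return srcc, plcc

# Example with dummy scores:
srcc, plcc = iqa_metrics([3.1, 4.2, 1.8, 2.5], [3.0, 4.5, 2.0, 2.4])
print(f"SRCC={srcc:.4f}, PLCC={plcc:.4f}")
```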
- The study uses 30 types of distorted images. The 25 types that are identical to those in the KADID-10k dataset can be generated by running 'dataset_generator.m' in Matlab.
- The four additional distortion types, namely pink noise, contrast change, underexposure, and overexposure, can be generated by running 'additional_dataset_generator.m'.
- The lossy compression distorted images can be downloaded from this link.
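For readers without Matlab, the sketch below illustrates roughly how one of the four additional distortions (pink noise) can be synthesized in Python. It is an illustrative approximation only; the exact noise scaling and color handling are assumptions, and the Matlab scripts above remain the reference implementation.

```python
# Illustrative approximation of the pink-noise distortion; the Matlab
# script 'additional_dataset_generator.m' is the reference implementation.
import numpy as np

def add_pink_noise(img, sigma=20.0, seed=0):
    """Add 1/f ('pink') noise to a grayscale uint8 image in [0, 255]."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    white = rng.standard_normal((h, w))   # white Gaussian noise
    fy = np.fft.fftfreq(h)[:, None]       # per-row frequencies
    fx = np.fft.fftfreq(w)[None, :]       # per-column frequencies
    f = np.sqrt(fy ** 2 + fx ** 2)
    f[0, 0] = 1.0                         # avoid division by zero at DC
    pink = np.real(np.fft.ifft2(np.fft.fft2(white) / f))
    pink *= sigma / pink.std()            # scale to the requested strength
    return np.clip(img.astype(np.float64) + pink, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    demo = add_pink_noise(np.full((256, 256), 128, dtype=np.uint8))
    print(demo.mean(), demo.std())
```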
The full dataset contains 6 million images and requires 3 TB of storage. If generating it is impractical, the pre-trained DPM models can be downloaded from Google Drive or Baidu Cloud and saved to the 'pretrained_model' folder.
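A quick way to verify that downloaded weights load correctly is sketched below; the file name 'dpm_best.pth' and the checkpoint layout are assumptions rather than repository facts.

```python
# Hypothetical sanity check that downloaded DPM weights are readable.
import torch

ckpt_path = "pretrained_model/dpm_best.pth"  # file name is an assumption
state = torch.load(ckpt_path, map_location="cpu")
# Checkpoints are commonly either a bare state_dict or a dict wrapping one.
state_dict = state.get("state_dict", state) if isinstance(state, dict) else state
print(f"loaded {ckpt_path}: {len(state_dict)} entries")
```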
If our research has been helpful to you, please consider citing our paper in your work.
@ARTICLE{vipnet2023_wang,
  author={Wang, Xiaoqi and Xiong, Jian and Lin, Weisi},
  journal={IEEE Transactions on Multimedia},
  title={Visual Interaction Perceptual Network for Blind Image Quality Assessment},
  year={2023},
  volume={25},
  number={},
  pages={8958-8971},
  doi={10.1109/TMM.2023.3243683}}
Thanks to the contributors of the HyperIQA and BoTNet GitHub repositories, parts of whose code I referenced while developing this project.