This is the official implementation of our paper "Improving Perceptual Quality by Phone-Fortified Perceptual Loss Using Wasserstein Distance for Speech Enhancement".
- pytorch 1.6
- torchcontrib 0.0.2
- torchaudio 0.6.0
- pesq 0.0.1
- colorama 0.4.3
- fairseq 0.9.0
- geomloss 0.2.3
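One way to install these, assuming the standard PyPI distributions (pytorch is published on PyPI as `torch`):

```bash
pip install torch==1.6.0 torchcontrib==0.0.2 torchaudio==0.6.0 \
    pesq==0.0.1 colorama==0.4.3 fairseq==0.9.0 geomloss==0.2.3
```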
Please download the model weights from here, and put the weight files into the PFPL-W and PFPL folders, respectively.
The wav2vec pre-trained model can be found in the official repo.
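Assuming the fairseq 0.9.0 checkpoint format, the pre-trained model can be loaded following the usage documented in that repo; the checkpoint filename below is a placeholder:

```python
import torch
from fairseq.models.wav2vec import Wav2VecModel

# Placeholder path to the downloaded wav2vec checkpoint.
cp = torch.load('wav2vec_large.pt', map_location='cpu')
model = Wav2VecModel.build_model(cp['args'], task=None)
model.load_state_dict(cp['model'])
model.eval()

# Extract representations from one second of 16 kHz audio (dummy input).
wav_input_16khz = torch.randn(1, 16000)
z = model.feature_extractor(wav_input_16khz)  # local latent features
c = model.feature_aggregator(z)               # context representations
```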
The Voice Bank-DEMAND dataset is not provided by this repository. Please download the dataset from here and build your own PyTorch dataloader; a minimal sketch follows.
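The repository does not ship a loader, so the following is only a sketch: the `noisy/`/`clean/` directory layout, the filename pairing, and the `VoiceBankDemand` class are assumptions for illustration, not the project's actual code.

```python
import os
import torchaudio
from torch.utils.data import Dataset, DataLoader

class VoiceBankDemand(Dataset):
    """Hypothetical paired dataset: noisy/ and clean/ share filenames."""

    def __init__(self, root):
        self.noisy_dir = os.path.join(root, 'noisy')
        self.clean_dir = os.path.join(root, 'clean')
        self.files = sorted(os.listdir(self.noisy_dir))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        name = self.files[idx]
        noisy, _ = torchaudio.load(os.path.join(self.noisy_dir, name))
        clean, _ = torchaudio.load(os.path.join(self.clean_dir, name))
        # Assume mono 16 kHz wavs; drop the channel dimension.
        return noisy.squeeze(0), clean.squeeze(0)

# batch_size=1 because clips vary in length; use a collate_fn or
# fixed-length cropping to batch more than one clip at a time.
loader = DataLoader(VoiceBankDemand('<root/dir/of/dataset>'), batch_size=1)
```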
For each .wav file, first convert it to 16 kHz with any audio converter (e.g., sox):
```bash
sox <48K.wav> -r 16000 -c 1 -b 16 <16k.wav>
```
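Alternatively, since torchaudio is already a dependency, the conversion can be scripted in Python. A small sketch with placeholder filenames:

```python
import torchaudio

# Placeholder filenames; load the original 48 kHz recording.
waveform, sr = torchaudio.load('48k.wav')

# Resample to 16 kHz and keep a single channel, mirroring the sox command.
resample = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16000)
torchaudio.save('16k.wav', resample(waveform)[:1], sample_rate=16000)
```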
To train the model, please run the following script. The full training process consumes approximately 19 GB of GPU VRAM; reduce the batch size if needed.
```bash
python main.py \
    --exp_dir <root/dir/of/experiment> \
    --exp_name <name_of_the_experiment> \
    --data_dir <root/dir/of/dataset> \
    --num_workers 16 \
    --cuda \
    --log_interval 100 \
    --batch_size 28 \
    --learning_rate 0.0001 \
    --num_epochs 100 \
    --clip_grad_norm_val 0 \
    --grad_accumulate_batches 1 \
    --n_fft 512 \
    --hop_length 128 \
    --model_type wav2vec \
    --log_grad_norm
```
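If memory is tight, and assuming `--grad_accumulate_batches` multiplies micro-batches as its name suggests, you can trade batch size for accumulation steps, e.g. `--batch_size 14 --grad_accumulate_batches 2` to keep the effective batch size at 28.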
To generate the enhanced sound files, please run:
```bash
python generate.py <path/to/PFPL_or_PFPL-W> <path/to/output/dir>
```
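With pesq already among the dependencies, a quick wide-band PESQ check of an enhanced file against its clean reference might look like this (file paths are placeholders):

```python
import torchaudio
from pesq import pesq

# Placeholder paths: clean reference and the matching enhanced output.
ref, sr = torchaudio.load('clean.wav')
deg, _ = torchaudio.load('enhanced.wav')

# Wide-band PESQ expects 16 kHz mono signals as numpy arrays.
score = pesq(sr, ref.squeeze(0).numpy(), deg.squeeze(0).numpy(), 'wb')
print(f'PESQ: {score:.3f}')
```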
This project is licensed under the MIT License - see the LICENSE file for details.
- Bio-ASP Lab, CITI, Academia Sinica, Taipei, Taiwan