diff --git a/README.md b/README.md
index 429b4d8b..69a6d82b 100644
--- a/README.md
+++ b/README.md
@@ -181,7 +181,9 @@ python run_training_pipeline.py
 ```
 
 You can supply any of the following arguments, but don't have to (although for training you should definitely specify at
-least a GPU ID).
+least a GPU ID). With this version of the toolkit, we recommend downloading the meta-checkpoint of the model you want
+to train and fine-tuning from it: this significantly reduces the amount of data and training time required, and increases
+the chance of success if your data is not of very high quality.
 
 ```
 --gpu_id
@@ -279,15 +281,6 @@ Here are a few points that were brought up by users:
 
 ---
 
-## Example Pipelines available
-
-| Dataset                                                                              | Language | Single or Multi | TransformerTTS | Tacotron 2 | FastSpeech 2 |
-| -------------------------------------------------------------------------------------|----------|-----------------|:--------------:|:----------:|:------------:|
-| [LJSpeech](https://keithito.com/LJ-Speech-Dataset/)                                  | English  | Single Speaker  |       ✅       |     ✅     |      ✅      |
-| [Nancy Krebs](https://www.cstr.ed.ac.uk/projects/blizzard/2011/lessac_blizzard2011/) | English  | Single Speaker  |       ✅       |     ✅     |      ✅      |
-
----
-
 This toolkit has been written by Florian Lux (except for the pytorch modules taken
 from [ESPnet](https://github.com/espnet/espnet) and [ParallelWaveGAN](https://github.com/kan-bayashi/ParallelWaveGAN), as
 mentioned above), so if you come across problems