Replies: 4 comments
>>> abdullah.tayyab
[January 10, 2021, 9:15pm]
Hello,
I am training a model on an Urdu dataset (native script) and have
successfully used transfer learning from the English pretrained model to
reach a validation loss of 36.299953 after 80 epochs on this data. I
want to improve on this further by adjusting the parameters and applying
some augmentation through DeepSpeech.
The one big question I have: if we are 'continuing' training, why is a
new best-validating checkpoint saved even when it is not better than the
one from the previous run?
The other question is what techniques we can use to reduce this loss
further. This is the command I am using to 'continue' training:
python3 DeepSpeech.py \
    $HOME/DeepSpeech/dataset/trained_load_checkpoint \
    $HOME/DeepSpeech/dataset/trained_load_checkpoint \
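For reference, a minimal sketch of what I understand the full flag set
to look like in DeepSpeech 0.9 (flag names are from the training docs;
the CSV paths and hyperparameter values below are placeholders, not my
actual settings):

    # continue training from an existing checkpoint, saving back to the same directory
    python3 DeepSpeech.py \
        --train_files data/urdu_train.csv \
        --dev_files data/urdu_dev.csv \
        --load_checkpoint_dir $HOME/DeepSpeech/dataset/trained_load_checkpoint \
        --save_checkpoint_dir $HOME/DeepSpeech/dataset/trained_load_checkpoint \
        --epochs 80 \
        --learning_rate 0.0001 \
        --dropout_rate 0.25

Since --load_checkpoint_dir and --save_checkpoint_dir point at the same
directory here, each run's best-on-dev checkpoint overwrites the
previous one. I wonder whether the best-dev tracking simply resets each
run, so the first validation pass of a continued run always counts as a
new best.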
I now want to adjust the parameters to continue and try to improve the
loss.
How much difference would one form of augmentation alone make to our
data? Or would it be more useful to use multiple augmentations together
in the same run?
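For concreteness: my understanding from the DeepSpeech 0.9 training docs
is that --augment can be passed several times in one run, so a
combined-augmentation run would look roughly like this (the augmentation
names and parameter syntax follow the docs' examples; the probabilities
and ranges are guesses on my part):

    # same base flags as above, with several augmentations applied together
    python3 DeepSpeech.py \
        --train_files data/urdu_train.csv \
        --dev_files data/urdu_dev.csv \
        --load_checkpoint_dir $HOME/DeepSpeech/dataset/trained_load_checkpoint \
        --save_checkpoint_dir $HOME/DeepSpeech/dataset/trained_load_checkpoint \
        --augment "volume[p=0.1,dbfs=-10:-40]" \
        --augment "pitch[p=0.1,pitch=1~0.2]" \
        --augment "tempo[p=0.1,factor=1~0.5]"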
I know you can't 'think' for me, but I am looking for pointers to try
to improve this. Will running the same dataset (around 60 hours) produce
a better loss with different augmentation combinations?
The WER at 80 epochs is around 58%, with a validation loss of 36.3; the
training loss is at 32. Both continue to decrease, so I know it is not
overfitting, and continuing training should reduce this a bit.
On other datasets, the training loss continues to decrease but the
validation loss starts increasing. Based on other forum questions, that
is overfitting; is my understanding correct?
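If that is overfitting, I assume the early-stopping flags are the
relevant knob. A sketch, assuming the 0.9 flag names --early_stop,
--es_epochs, and --es_min_delta (which I have not verified on my setup):

    # stop once dev loss has not improved by at least es_min_delta for es_epochs epochs
    python3 DeepSpeech.py \
        --train_files data/urdu_train.csv \
        --dev_files data/urdu_dev.csv \
        --load_checkpoint_dir $HOME/DeepSpeech/dataset/trained_load_checkpoint \
        --save_checkpoint_dir $HOME/DeepSpeech/dataset/trained_load_checkpoint \
        --early_stop=true \
        --es_epochs 10 \
        --es_min_delta 0.05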
[This is an archived TTS discussion thread from discourse.mozilla.org/t/continuing-training-saving-new-best-checkpoint]