This repository has been archived by the owner on Nov 29, 2023. It is now read-only.
This is where, with some random probability, we feed an RNN such as the ConvLSTM the ground-truth label while it is generating sequences during training. This can help with convergence, especially early on: if the model makes a mistake at the second timestep, all later timesteps become useless, but if we give it the GT image to continue the sequence, it may still learn better representations.
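A minimal sketch of that per-timestep swap (scheduled sampling). `step_fn` is a hypothetical stand-in for one ConvLSTM step that maps the previous frame to the next predicted frame; the function and parameter names are illustrative, not from the actual repo:

```python
import random

def unroll_with_scheduled_sampling(step_fn, x0, targets, teacher_forcing_prob):
    """Unroll a recurrent model, sometimes feeding it the ground-truth frame.

    step_fn: one recurrent step, prev frame -> next predicted frame
             (hypothetical stand-in for a ConvLSTM step).
    x0: initial input frame.
    targets: ground-truth frames, one per timestep.
    teacher_forcing_prob: probability of continuing from the GT frame.
    """
    outputs = []
    prev = x0
    for gt in targets:
        pred = step_fn(prev)
        outputs.append(pred)
        # With probability p, continue the sequence from the GT frame
        # instead of the model's own (possibly bad) prediction.
        prev = gt if random.random() < teacher_forcing_prob else pred
    return outputs
```

With `teacher_forcing_prob=1.0` every step is conditioned on ground truth (plain teacher forcing); with `0.0` the model always feeds on its own predictions, as it must at inference time.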
The probability that a single RNN output is swapped with the GT label decays fairly quickly: it starts near 1 at the beginning of training and drops to 0 after 5 or so epochs.
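That schedule could look like the sketch below. The linear shape and the exact cutoff are assumptions on my part (the issue only says "near 1" to "0 after 5 or so epochs"; an inverse-sigmoid decay is another common choice):

```python
def teacher_forcing_prob(epoch, decay_epochs=5):
    """Probability of swapping a prediction for the GT label at a given epoch.

    Linear decay from 1.0 at epoch 0 to 0.0 at `decay_epochs`, then
    clamped at 0.0 for the rest of training. `decay_epochs=5` follows
    the "after 5 or so epochs" note above; both the linear shape and
    the parameter name are illustrative assumptions.
    """
    return max(0.0, 1.0 - epoch / decay_epochs)
```

The per-timestep swap then uses `teacher_forcing_prob(epoch)` as its probability, so early epochs are mostly teacher-forced and later epochs run entirely on the model's own outputs.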