As mentioned in Section 3, the pre-trained models are always trained on the same data as the full model (though I could not find the pre-training details), and in particular the pre-trained VSR model has exactly the same architecture as the full one. So I wonder why the supervised signals (e.g., intermediate representations) from the pre-trained VSR model are still meaningful as training targets. Could you give a more in-depth explanation?
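For context, supervising a student with a teacher's "intermediate representations" usually means a feature-distillation setup: on top of the task loss, the student is penalized for deviating from the teacher's hidden features. A minimal sketch of that loss, with hypothetical names and plain Python lists standing in for feature tensors:

```python
def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_loss(student_feats, teacher_feats, task_loss, weight=0.1):
    """Total loss = task loss + weighted feature-matching terms.

    student_feats / teacher_feats: lists of intermediate feature vectors,
    one per supervised layer; the teacher's features are fixed targets
    (no gradient flows into the teacher).
    """
    feat_loss = sum(mse(s, t) for s, t in zip(student_feats, teacher_feats))
    return task_loss + weight * feat_loss
```

Note this only illustrates the generic mechanism being asked about; it is not the paper's exact training objective.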