Hi, the default batch size is 1. Did you try a larger batch size? In my experiments, when the batch size is >= 4, the loss cannot converge to a satisfactory result despite tuning various hyper-parameters.
Hi, I didn't try that in our experiments. I think this VQ stuff is unstable to train, so a proper combination of these hyper-parameters is a must to make it work.
I ran into the same problem! I trained the VAE with batch size 1, but the results seem to be wrong when testing with batch size 8. This does not make sense, because batch size should not affect test performance. I suspect there might be something wrong with the view() operation in VectorQuantizer.
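To illustrate the kind of bug being suspected here (this is a hypothetical sketch, not code from the repo): a VQ layer typically flattens the encoder output of shape (B, C, H, W) into (B*H*W, C) rows before computing distances to the codebook. If view()/reshape is applied without first moving the channel axis to the end, the rows silently interleave values across channels and batch items, which can go unnoticed at batch size 1 but corrupts results at larger batch sizes. NumPy is used below to keep the demonstration self-contained; the same logic applies to PyTorch's permute().contiguous().view().

```python
import numpy as np

# Hypothetical encoder output: batch 2, 3 channels, 2x2 spatial grid.
B, C, H, W = 2, 3, 2, 2
z = np.arange(B * C * H * W).reshape(B, C, H, W)

# WRONG: flattening (B, C, H, W) directly to (-1, C) mixes channel
# and spatial values within each row.
flat_wrong = z.reshape(-1, C)

# RIGHT: move channels last, then flatten batch and spatial dims,
# so each row is the C-dim vector at one spatial position.
flat_right = z.transpose(0, 2, 3, 1).reshape(-1, C)

# The first row should be the channel vector at position (0, 0) of
# the first batch item, i.e. z[0, :, 0, 0].
print((flat_right[0] == z[0, :, 0, 0]).all())  # True
print((flat_wrong[0] == z[0, :, 0, 0]).all())  # False
```

If the quantizer in question flattens this way, it would explain results that look fine at batch size 1 but degrade at batch size 8, consistent with the behavior reported above.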