Replies: 2 comments
-
(For reference, the gt is a one-minute dataset, and the pred seems to have leaked timbre from a larger dataset.)
-
Sorry! It turns out I accidentally set residual_channels to 384, which explains the issue.
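A mismatch like this is easy to catch by diffing the hyperparameters saved in a checkpoint against the current config before training resumes. The sketch below is a minimal illustration, not code from the repo: the dicts, key names, and the 512 value are made-up examples standing in for whatever the checkpoint and config actually contain.

```python
# Minimal sketch: compare hypothetical checkpoint hparams against the
# current training config and report any keys whose values differ.

def find_mismatches(ckpt_hparams: dict, config: dict) -> dict:
    """Return {key: (checkpoint_value, config_value)} for shared keys that differ."""
    return {
        k: (ckpt_hparams[k], config[k])
        for k in ckpt_hparams.keys() & config.keys()
        if ckpt_hparams[k] != config[k]
    }

# Example values only; 512 is an assumed original setting for illustration.
ckpt_hparams = {"residual_channels": 512, "residual_layers": 20}
config = {"residual_channels": 384, "residual_layers": 20}

print(find_mismatches(ckpt_hparams, config))
# -> {'residual_channels': (512, 384)}
```

Running a check like this at startup and aborting on any mismatch would have flagged the accidental 384 immediately instead of surfacing as degraded samples.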
-
Hello! I am currently training a multispeaker model using the refactor-v2 repo and the code that was previously given, and it seems that in the tensorboard samples, timbre leakage has occurred: the datasets with the least amount of data tend to leak timbre from the larger datasets. This behaviour did not happen in the refactor repo, however.
Here are the samples from tensorboard:
2023-05-05_23-18-51.mp4