About PCA-GAN #1

Open
douhaoexia opened this issue Feb 26, 2019 · 2 comments
@douhaoexia

Hello~
Could you explain what role PCA plays in your PCA-GAN model? Does it operate after the generator?
Any related papers?

Thank you very much!

@mysterefrank (Owner) commented Aug 6, 2019

The idea behind this project was that GAN training is finicky and that adding a curriculum (as in progressive GAN) could help stabilize it. You can imagine two ways to add a curriculum to GAN training: one is to start with a simplified dataset and complexify it through training; the other is to start with a simple discriminator and add capacity to it as training progresses.
PCA-GAN takes the first approach because imo it's easier to spec dataset complexity than model capacity. You can again imagine two ways to complexify the dataset through training. One is simply to add a bunch of gaussian noise to the data and shrink the variance that the noise is sampled from as training progresses (a rough sketch of that schedule is below). The other is to somehow add axes of variation as training progresses, as in: train the GAN to understand color first, then shapes, then textures, etc. You can sort of do this using PCA.
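A minimal numpy sketch of that noise schedule, assuming flattened images in an (N, D) array; `sigma0` and the linear decay are arbitrary illustrative choices, not necessarily what anyone shipped:

```python
import numpy as np

def noisy_images(images, epoch, n_epochs, sigma0=0.5):
    """Gaussian-noise curriculum: early epochs see a heavily noised
    dataset, later epochs see (nearly) the real one."""
    sigma = sigma0 * max(1.0 - epoch / n_epochs, 0.0)  # shrink std linearly
    return images + np.random.normal(0.0, sigma, size=images.shape)
```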
The PCA procedure is: run PCA on the full image dataset. At epoch one, train the generator to fit only the first principal component of the dataset. Second epoch, fit the first two; third, the first three; and so on until the generator can fit the full complexity of the dataset.
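In rough numpy/scikit-learn terms it looks like this (a minimal sketch of the idea, not this repo's actual code; `train_one_epoch` is a stand-in for whatever GAN update you run):

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_curriculum(images, n_epochs, train_one_epoch):
    """images: (N, D) array of flattened images.
    train_one_epoch: caller-supplied GAN update (hypothetical)."""
    pca = PCA().fit(images)
    mean, axes = pca.mean_, pca.components_   # rows of `axes` are components

    for epoch in range(1, n_epochs + 1):
        k = min(epoch, axes.shape[0])          # epoch 1 -> 1 component, ...
        coeffs = (images - mean) @ axes[:k].T  # project onto first k axes
        simplified = coeffs @ axes[:k] + mean  # reconstruct in pixel space
        train_one_epoch(simplified)            # fit the GAN to simplified data
```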
It didn't work very well.
Others have since done similar things better: https://www.inference.vc/instance-noise-a-trick-for-stabilising-gan-training/
The Variational Discriminator Bottleneck tweaks the capacity of D through training: https://arxiv.org/abs/1810.00821

@douhaoexia (Author)

Thank you for your detailed reply!
Actually, I had a similar idea before and was looking for existing work when I left this message. I applied it to the image super-resolution task, but training is unstable and the results I obtained are not good enough. For several months I have tried to find an extra loss or other tricks to regularize it. I hope I can find a breakthrough.
