
Issues getting stylegan2 results with synface dataset #26

Open
Minsoo2022 opened this issue Jul 9, 2021 · 11 comments
@Minsoo2022

[image: the image generated from pt_file['latent'] next to the provided pt_file['img'], visibly different]

Hi, I appreciate your nice work and thank you for sharing the code.
I am having trouble reproducing the StyleGAN2 results with the synface dataset and the pre-trained weights.
As shown in the figure, the image generated from the synface latent vector (pt_file['latent']) differs from the image (pt_file['img']) provided in the same .pt file.
Can you give me some idea of why this happens?
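
For reference, this is roughly how I generate the image from the latent (a minimal sketch; the paths and image size are placeholders, and I am assuming the rosinality/stylegan2-pytorch Generator that this repo builds on):

```python
import torch

from model import Generator  # assuming rosinality/stylegan2-pytorch

device = 'cuda'
size = 128          # placeholder: set to the synface generator resolution
ckpt_path = '...'   # placeholder: the pre-trained StyleGAN2 checkpoint
pt_path = '...'     # placeholder: the provided .pt file

g_ema = Generator(size, 512, 8).to(device)
g_ema.load_state_dict(torch.load(ckpt_path)['g_ema'], strict=False)
g_ema.eval()

pt_file = torch.load(pt_path)
latent = pt_file['latent'].to(device)
if latent.ndim == 2:
    latent = latent.unsqueeze(0)  # (1, n_latent, 512) if it is a W+ code

with torch.no_grad():
    img, _ = g_ema([latent], input_is_latent=True, randomize_noise=False)
# this img differs from pt_file['img']
```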

Best regards.

@Minsoo2022
Author

[image: 000075_image_1_1 — the original image alongside the two reconstructions]

Also, both of the above images (pt_file['img'] and the image generated from pt_file['latent']) differ from the original image.

@XingangPan
Owner

@Minsoo2022 Hi, thanks for your interest. The provided latent code is obtained by performing GAN inversion on the test images, so there can still be some notable differences between the reconstruction and the original image. This means there is room for improvement in the GAN inversion process itself.

@Minsoo2022
Author

Thanks for your answer.
However, as far as I understand, pt_file['img'] is the result of the GAN inversion, and it is slightly different from the original image above. I would therefore expect pt_file['img'] and the image generated from pt_file['latent'] to be identical. Is that wrong?
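
To make the comparison concrete, this is the check I would expect to pass (a sketch, reusing g_ema and latent from my first comment, and assuming pt_file['img'] is stored in the same value range as the generator output):

```python
with torch.no_grad():
    img, _ = g_ema([latent], input_is_latent=True, randomize_noise=False)
# I would expect this difference to be near zero, but it is not:
print((img.cpu() - pt_file['img']).abs().max())
```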

@Minsoo2022
Author

[image: the real image, the provided GAN inversion image, and the image generated from the provided latent vector, all different]

What I find confusing is that the real image, the GAN inversion image you provide, and the image generated from the GAN inversion latent vector you provide are all different. As far as I understand, the GAN inversion image should be identical to the image generated from its latent vector, which is why I am confused.

Also, may I ask what arguments you used for the GAN inversion? I tried the inversion myself with rosinality/stylegan2-pytorch/projector.py, roughly as sketched below, but the quality is lower than what you provided.
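
For reference, the core of what I ran looks roughly like this (a minimal sketch of LPIPS-guided latent optimization in W+ space; the lpips package, learning rate, step count, and loss weights are my own choices, not taken from your code):

```python
import torch
import lpips  # pip install lpips

# g_ema: the generator loaded as above
# target: real image in [-1, 1], shape (1, 3, size, size)
percept = lpips.LPIPS(net='vgg').to(device)

with torch.no_grad():
    mean_latent = g_ema.mean_latent(4096)  # average W code, shape (1, 512)
latent = mean_latent.unsqueeze(1).repeat(1, g_ema.n_latent, 1)  # W+ initialization
latent.requires_grad_(True)
optimizer = torch.optim.Adam([latent], lr=0.01)

for step in range(1000):
    img, _ = g_ema([latent], input_is_latent=True, randomize_noise=False)
    loss = percept(img, target).mean() + 0.1 * (img - target).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```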
Thanks for your kind reply.

Best regards.

@XingangPan
Owner

@Minsoo2022 Yes, you are right: pt_file['img'] and the image generated from pt_file['latent'] should be the same, so it seems there is some inconsistency. Could you please try truncation=0.7 and see if that works?
Besides, since we perform instance-specific training, it is fine if the latent code is not perfect: the subsequent training (especially the latent offset predicted by the latent encoder) can learn to fill this gap.
I am too busy to check the details carefully right now; I should be able to reply in 3-4 days. Sorry about this.
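
With the rosinality-style Generator, applying the truncation would look something like this (a sketch; truncation_latent comes from the generator's own mean latent):

```python
with torch.no_grad():
    mean_latent = g_ema.mean_latent(4096)
    img, _ = g_ema(
        [latent],
        input_is_latent=True,
        truncation=0.7,
        truncation_latent=mean_latent,
        randomize_noise=False,
    )
```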

@Minsoo2022
Author

Thank you for replying even though you are busy.

[image: the result with truncation=0.7, still different from pt_file['img']]
Unfortunately, the result still differs with truncation=0.7.

I look forward to your reply, and I'll let you know if there's any progress.

Thank you.

@Minsoo2022
Author

Sorry to bother you, but I am still waiting for your answer.

@XingangPan
Owner

Sorry about the delay. I will reply to you tomorrow.

@Minsoo2022
Author

I really appreciate your effort.

@XingangPan
Owner

Hi, I have uploaded my GAN inversion code here: https://drive.google.com/file/d/1pCfnDiHZNnRoEVZ4RhcLfZyPrJgUWEYk/view?usp=sharing
You may check it out and perform the inversion to get the latent code :-)

@Minsoo2022
Author

Thank you for your answer; it helped me a lot.

Best regards.
