Encountered some difficulties in reproducing #20

Open
zjumsj opened this issue Oct 16, 2024 · 3 comments

zjumsj commented Oct 16, 2024

Dear Author,

I would like to express my sincere admiration and gratitude for your outstanding work. The relighting results from the pretrained head model of subject AXE977 are impressive. I would like to know the specific settings under which you trained this pretrained model.

I noticed that the pretrained model comes with a config.yml file, and based on its contents, I speculate that the parameters you used might be batch=4 and iters=600,000. However, I am having difficulty reproducing the results and am unsure where I might have gone wrong. Here are my training details:

GPU: One A800
CUDA Version: 11.8
Batch Size: 4
Iterations: 600,000

[Attached image: merge_ (reproduced rendering results)]
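
As an aside, one way to double-check which batch/iteration settings the shipped config actually contains is to dump any matching keys with a small generic script. This is only a sketch (plain Python + PyYAML, nothing specific to this repo): the `config.yml` path is a placeholder for wherever the pretrained model's config lives, and no particular key names are assumed — it just reports anything that looks batch- or iteration-related.

```python
# Sketch: scan a YAML config for batch-size / iteration-count settings.
# Requires PyYAML (pip install pyyaml). Adjust the path to the config.yml
# that ships with the pretrained model.
import yaml

with open("config.yml") as f:
    cfg = yaml.safe_load(f)

def find_keys(node, needles, path=""):
    """Recursively walk dicts/lists and print keys whose name contains
    any of the given substrings, together with their values."""
    if isinstance(node, dict):
        for k, v in node.items():
            sub = f"{path}.{k}" if path else str(k)
            if any(n in str(k).lower() for n in needles):
                print(f"{sub} = {v}")
            find_keys(v, needles, sub)
    elif isinstance(node, list):
        for i, v in enumerate(node):
            find_keys(v, needles, f"{path}[{i}]")

# Report anything that looks like a batch-size or iteration setting.
find_keys(cfg, needles=("batch", "iter"))
```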

Additionally, I noticed many warnings like the ones below. Is that expected behavior?

[Attached screenshot of the warnings: 屏幕截图 2024-10-16 134846 ("Screenshot 2024-10-16 134846")]

Thank you for your assistance.

zjumsj changed the title from "How did you get the pretrained model?" to "Encountered some difficulties in reproducing" on Oct 21, 2024
@una-dinosauria (Contributor)

The logger errors for missing data are expected. Unfortunately, I am unsure of what went wrong with the rest of the training. These kinds of artifacts aren't expected at all, and I have recently re-trained all the models myself with results different from what you show.

Does this error reproduce if you train the model again? It could be a bad run with poor convergence -- these happen once in a while.

zjumsj (Author) commented Oct 21, 2024

Thank you for your reply! I haven't made multiple attempts, as training takes several days. I am going to run it a few more times as suggested to see if the issue persists. Thank you again for your assistance, and for your outstanding work!

@una-dinosauria (Contributor)

The quality seems very degraded. You may consider training for 100k iterations and checking whether these artifacts are still present (they shouldn't be).

At 100k iterations things should look a bit blurry, but otherwise not too far off the final result.
