There is no rendered video when run locally. #30
Comments
Yes, eval.py does not render video at all. If you want to render a video, you should go to Colab and find the Jupyter notebook. There were some difficulties when I tried to run it on Colab online, so I downloaded it and rewrote it as a Python file myself, and then rendered my video successfully on my local machine. If you have any trouble running HyperNeRF, feel free to reach out.
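A minimal sketch of what such a local rendering script might look like, assuming the model-loading and per-frame rendering steps from the Colab notebook have already been copied into helper functions (`load_model`, `make_camera_path`, and `render_frame` are placeholder names, not the repo's actual API); the rendered frames are simply written to an mp4 with imageio:

```python
# Sketch of a local video-rendering loop. `load_model`, `make_camera_path`,
# and `render_frame` are hypothetical helpers standing in for the notebook's
# model-loading and rendering cells.
import numpy as np
import imageio


def render_video(checkpoint_dir, out_path="render.mp4", num_frames=60, fps=30):
    model, state = load_model(checkpoint_dir)      # hypothetical helper
    cameras = make_camera_path(num_frames)         # hypothetical helper

    frames = []
    for i, camera in enumerate(cameras):
        # render_frame is assumed to return an HxWx3 float image in [0, 1].
        rgb = render_frame(model, state, camera)
        frames.append((np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8))
        print(f"rendered frame {i + 1}/{num_frames}")

    # Assemble the frames into an mp4 (requires the imageio-ffmpeg backend).
    imageio.mimwrite(out_path, frames, fps=fps, quality=8)


if __name__ == "__main__":
    render_video("checkpoints/my_scene")
```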
@wangrun20 I somehow rendered a video, but the output doesn't seem able to represent my face. Could you recommend how I can fix it? Is there a specific guideline on how to capture the input video? download.mp4
In my experiment, although I was unable to match the reconstruction quality claimed by the paper's authors, I did gain some insights. It is best for the input video to have a clean, uncluttered background, because the HyperNeRF code first preprocesses the video with COLMAP, matching feature points across video frames. If the background is too cluttered, the camera poses may be estimated incorrectly, and consequently the reconstruction will also be wrong.
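If you want to sanity-check the COLMAP step before training, a rough sketch along these lines (assuming the `colmap` command-line tool is installed and the extracted frames live in `images/`; all paths are placeholders) runs feature extraction, matching, and sparse reconstruction so you can see how many frames actually register:

```python
# Rough sketch for sanity-checking COLMAP registration on the extracted frames.
# Assumes the `colmap` CLI is on PATH; directory names are placeholders.
import os
import subprocess


def run_colmap(image_dir="images", workspace="colmap"):
    os.makedirs(workspace, exist_ok=True)
    db = os.path.join(workspace, "database.db")
    sparse = os.path.join(workspace, "sparse")
    os.makedirs(sparse, exist_ok=True)

    # Detect local features in every frame.
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", db,
                    "--image_path", image_dir], check=True)

    # Match features between all pairs of frames.
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", db], check=True)

    # Sparse reconstruction; if many frames fail to register here,
    # the background is probably too cluttered or too texture-poor.
    subprocess.run(["colmap", "mapper",
                    "--database_path", db,
                    "--image_path", image_dir,
                    "--output_path", sparse], check=True)


if __name__ == "__main__":
    run_colmap()
```

Checking how many images register in the resulting sparse model is a quick way to tell whether pose estimation went wrong before spending time on training.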
Hello everyone,
Can someone help me run this code locally on my computer? I ran train.py and eval.py on my own dataset, and my render folder is empty. In fact, there is no code for rendering video in eval.py!