Measure NeRF performance through Python #968
mcourteaux asked this question in Q&A (unanswered).

I know the title sounds horrible, but that's what I'm trying to do.

I would like to extract timings for rendering a frame, but I can't seem to get consistent results. When I open the testbed with a window and resize it to 512x512, my model renders at 30 ms per frame (1 spp). When I run a Python script that calls testbed.render(view, width=512, height=512), I always get results around 45 ms. I even tried rendering 200 samples, to be sure the GPU is fully occupied and no Python overhead is slowing anything down, and divided the measured time by 200; that consistently gives me 45 ms per frame. Any clues what might be going on? Why does it perform better (according to the ImGui interface) when the testbed runs in windowed mode than headless? Is there a bug in my timing, or a subtle setting I'm overlooking?
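For reference, the measurement loop I have in mind looks roughly like the sketch below. The render call is the one quoted above; the Testbed construction, load_snapshot, and shall_train lines are sketched from memory of scripts/run.py and may differ in your version, and the snapshot path and the view object are placeholders:

```python
import time

import pyngp as ngp  # instant-ngp Python bindings

# Model setup, sketched from scripts/run.py; adjust to your own loading code.
testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_snapshot("path/to/model.msgpack")  # placeholder path
testbed.shall_train = False

view = ...  # placeholder for whatever camera/view object my script passes in

# One warm-up render so CUDA initialization is not part of the measurement.
testbed.render(view, width=512, height=512)

n_frames = 200
start = time.perf_counter()
for _ in range(n_frames):
    frame = testbed.render(view, width=512, height=512)  # returns pixels, so it blocks
elapsed = time.perf_counter() - start

print(f"{1000.0 * elapsed / n_frames:.2f} ms per frame")
```

Whether I loop over frames like this or crank spp up to 200 and divide, I land at about 45 ms.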
As a second question, could you (as an expert) propose a config JSON that aims to render NeRF as fast as possible while still being somewhat decent in quality? I can fiddle with the numbers, and I have managed to cut the times in half, which is nice, but I don't have the technical expertise to make smart decisions about which slow parts give the best speedup for the least quality penalty.
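Concretely, the kind of fiddling I have been doing looks roughly like the sketch below. The key names and the quoted defaults are from my reading of configs/nerf/base.json (the tiny-cuda-nn HashGrid and FullyFusedMLP options), the specific values are guesses rather than tuned recommendations, and reload_network_from_file is the call scripts/run.py uses to load a network config:

```python
import json

# Start from the stock NeRF config and shrink the parts that dominate cost:
# the hash-grid encoding and the two fully fused MLPs.
with open("configs/nerf/base.json") as f:
    config = json.load(f)

config["encoding"]["n_levels"] = 12            # fewer hash-grid levels (default 16)
config["encoding"]["log2_hashmap_size"] = 17   # smaller hash table (default 19)
config["network"]["n_neurons"] = 32            # narrower density MLP (default 64)
config["rgb_network"]["n_neurons"] = 32        # narrower color MLP (default 64)
config["rgb_network"]["n_hidden_layers"] = 1   # shallower color MLP (default 2)

with open("configs/nerf/fast.json", "w") as f:
    json.dump(config, f, indent=2)

# The smaller model then has to be trained with this config, e.g.
# testbed.reload_network_from_file("configs/nerf/fast.json") before training.
```

Changing the encoding or network sizes changes the model itself, so it needs retraining and will not match an existing snapshot.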
Replies: 2 comments
-
Hi, a significant portion of [...]

As for the second part of your question: my points of attack would be [...]
-
On a similar note, what would be a more efficient way to render an image in C++ and convert it to an OpenCV Mat?
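On the Python side the conversion is straightforward, since an OpenCV image in Python is just a NumPy array; assuming testbed.render returns an HxWx4 float RGBA array in [0, 1] (as in the timing snippet above), something like this works:

```python
import cv2
import numpy as np

# testbed and view set up as in the timing snippet above.
frame = testbed.render(view, width=512, height=512)   # assumed HxWx4 float RGBA

rgb = np.clip(frame[:, :, :3], 0.0, 1.0)              # drop alpha, clamp to [0, 1]
bgr = cv2.cvtColor((rgb * 255.0).astype(np.uint8), cv2.COLOR_RGB2BGR)
# If the output is linear radiance, apply a linear-to-sRGB step before quantizing.

cv2.imwrite("frame.png", bgr)
```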