I have an application that produces new data in a thread and then triggers a re-render via `ctx.request_repaint()`. Fresh data is produced at about 100 Hz. I'm using eframe with the wgpu backend.
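The setup looks roughly like this (a std-only sketch; the channel and the closing `try_iter` drain are illustrative, and the repaint signal stands in for the real `egui::Context::request_repaint()` call):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Worker thread: in the real app this produces fresh data at ~100 Hz,
    // sends it to the UI thread, then calls ctx.request_repaint().
    let producer = thread::spawn(move || {
        for sample in 0..10 {
            tx.send(sample).unwrap();
            // ctx.request_repaint(); // real eframe call goes here
        }
    });
    producer.join().unwrap();

    // UI thread (inside update()): drain everything that arrived since the
    // last frame and render only the newest state, so the repaint never
    // lags behind the data even if several samples landed between frames.
    let latest = rx.try_iter().last();
    println!("latest sample: {:?}", latest); // latest sample: Some(9)
}
```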
The main thread is using a lot of CPU for rendering even when the UI is blank: about 30% on macOS, and 100% on Linux when running with the display at 60 Hz and vsync on. A perf profile shows it's mostly spinning in some NVIDIA shared objects, probably busy-waiting until the next vsync.
The exact behavior changes based on many factors. If I call `ctx.request_repaint_after(Duration::from_micros(1))` instead, the CPU usage drops by ~95% but the frame rate is only 45 fps. If I set the display to 120 Hz (so the display refreshes more often than request_repaint is called), I observe the same drop.
When I turn off vsync, CPU usage also goes down, but the frame rate is unbounded and there is the potential for tearing, which isn't ideal either.
If I had to guess, the main thread starts spinning when the presentation queue of the Vulkan backend (in the Linux case) is full. Is there a portable way to avoid this that doesn't involve a huge latency penalty?
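One mitigation I've been considering (an assumption on my part, not a tested eframe recipe): coalesce repaint requests on the producer side so that at most one repaint is issued per display frame period, rather than one per 100 Hz sample. A minimal sketch of the counting logic, with `coalesced_repaints` being a hypothetical helper name:

```rust
/// Counts how many repaints survive coalescing when `sample_hz` data events
/// are throttled to at most one repaint per `1 / display_hz` seconds.
fn coalesced_repaints(sample_hz: u64, display_hz: u64, seconds: u64) -> u64 {
    let frame_period_us = 1_000_000 / display_hz; // e.g. 16_666 µs at 60 Hz
    let sample_period_us = 1_000_000 / sample_hz; // e.g. 10_000 µs at 100 Hz
    let mut last_repaint: Option<u64> = None;
    let mut count = 0;
    for i in 0..sample_hz * seconds {
        let now = i * sample_period_us;
        let due = match last_repaint {
            None => true,
            Some(last) => now - last >= frame_period_us,
        };
        if due {
            // In the real app, ctx.request_repaint() would go here.
            count += 1;
            last_repaint = Some(now);
        }
    }
    count
}

fn main() {
    // 100 Hz data against a 60 Hz display: after the first sample, a repaint
    // is due only every other sample, i.e. 50 per second instead of 100.
    println!("{}", coalesced_repaints(100, 60, 1)); // 50
    // At 120 Hz the frame period is shorter than the sample period, so
    // every sample gets its own repaint.
    println!("{}", coalesced_repaints(100, 120, 1)); // 100
}
```

This only reduces how often a present is queued; it doesn't change how the driver waits on vsync, so it may or may not stop the spinning in the NVIDIA libraries.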