Replies: 1 comment 1 reply
-
From what you described, there is nothing wrong with what you're doing. I would double-check with something like RenderDoc (using a custom capture).
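A minimal sketch of such a custom capture, using the third-party `renderdoc` crate (one option for driving captures from code, not necessarily what was meant here). It assumes the app is launched from the RenderDoc UI so the capture layer is injected into the process:

```rust
use renderdoc::{RenderDoc, V110};

fn main() {
    // Fails unless RenderDoc is injected, e.g. by launching the app from the RenderDoc UI.
    let mut rd: RenderDoc<V110> = RenderDoc::new().expect("RenderDoc not injected");

    // Wrap exactly the work you want to inspect in a custom capture.
    rd.start_frame_capture(std::ptr::null(), std::ptr::null());
    // ... encode and submit one compute step + render step here ...
    rd.end_frame_capture(std::ptr::null(), std::ptr::null());
}
```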
-
For brief context: I'm trying to make a simulation of the heat equation using wgpu, where every step is calculated using a compute shader in a double-buffered scheme. The result is then rendered in real-time.
Initially I wanted the render and compute steps to be performed in separate threads, but that seems to be impossible (or at least too difficult) in the current state of wgpu. So for now the implementation is single-threaded again, and everything happens sequentially within a single event loop: one compute step, then a render step, every time there's an `Event::RedrawRequested`. I do not poll on either step to wait for the command buffers to finish executing. The surface is configured with `PresentMode::Fifo` for VSync.

To give an overall idea, this is what is currently implemented (a code sketch follows the list below):

- Textures created with the `TEXTURE_BINDING` and `STORAGE_BINDING` usages enabled; A is initialized with some noise.
- `iteration` is incremented after every compute step, and it determines the direction in which both steps are performed (forward/backward).

Here's a diagram which explains it more succinctly:
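To make the scheme concrete, here is a minimal sketch of what one `RedrawRequested` body could look like under the assumptions above. The names (`step_and_render`, the ping-pong bind-group arrays, the fullscreen-triangle draw) are illustrative, not the actual code, and the descriptor fields target a recent wgpu release (field names differ slightly across versions):

```rust
use wgpu::{
    Color, CommandEncoderDescriptor, ComputePassDescriptor, LoadOp, Operations,
    RenderPassColorAttachment, RenderPassDescriptor, StoreOp,
};

/// One simulation step followed by one render step, recorded into a single
/// command encoder and submitted together.
fn step_and_render(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    surface: &wgpu::Surface<'_>,
    compute_pipeline: &wgpu::ComputePipeline,
    render_pipeline: &wgpu::RenderPipeline,
    // Ping-pong resources: index 0 reads A and writes B, index 1 the reverse.
    compute_bind_groups: &[wgpu::BindGroup; 2],
    render_bind_groups: &[wgpu::BindGroup; 2],
    iteration: &mut u64,
    workgroups: (u32, u32),
) -> Result<(), wgpu::SurfaceError> {
    let frame = surface.get_current_texture()?;
    let view = frame.texture.create_view(&Default::default());
    let dir = (*iteration % 2) as usize; // forward/backward direction

    let mut encoder =
        device.create_command_encoder(&CommandEncoderDescriptor { label: Some("step") });

    // Compute step: advance the heat equation by one iteration.
    {
        let mut cpass = encoder.begin_compute_pass(&ComputePassDescriptor {
            label: Some("heat step"),
            timestamp_writes: None,
        });
        cpass.set_pipeline(compute_pipeline);
        cpass.set_bind_group(0, &compute_bind_groups[dir], &[]);
        cpass.dispatch_workgroups(workgroups.0, workgroups.1, 1);
    }

    // Render step: sample the texture that the compute pass just wrote.
    {
        let mut rpass = encoder.begin_render_pass(&RenderPassDescriptor {
            label: Some("render"),
            color_attachments: &[Some(RenderPassColorAttachment {
                view: &view,
                resolve_target: None,
                ops: Operations {
                    load: LoadOp::Clear(Color::BLACK),
                    store: StoreOp::Store,
                },
            })],
            depth_stencil_attachment: None,
            timestamp_writes: None,
            occlusion_query_set: None,
        });
        rpass.set_pipeline(render_pipeline);
        rpass.set_bind_group(0, &render_bind_groups[dir], &[]);
        rpass.draw(0..3, 0..1); // fullscreen triangle
    }

    queue.submit(Some(encoder.finish()));
    frame.present(); // with PresentMode::Fifo, presentation happens at VSync
    *iteration += 1;
    Ok(())
}
```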
The current issue is that many of the compute steps are being "dropped" between render steps: their command buffers are enqueued and submitted, but they are not executed before their respective render step. As a result, most of the GPU work is just rendering, not computing. This leads me to believe that I am not using the wgpu API the way it was intended. Is there a better way to do this?
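For reference, the polling that the loop currently skips would look roughly like the sketch below. This is only a diagnostic to check whether a compute submission really finishes before the following render step is recorded; it serializes the CPU against the GPU, so it is not a fix, and the exact name of the wait variant differs between wgpu releases:

```rust
// Diagnostic only: block until everything submitted so far has executed on the
// GPU before recording the next step. Defeats pipelining, so not a fix.
fn submit_and_wait(device: &wgpu::Device, queue: &wgpu::Queue, compute_cb: wgpu::CommandBuffer) {
    queue.submit(Some(compute_cb));
    // Named `wgpu::Maintain::Wait` in many releases; later versions rename it.
    let _ = device.poll(wgpu::Maintain::Wait);
}
```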
For reference, here's a profiling trace captured with wgpu-profiler, visualized in Firefox: