Multiple pipelines within a subpass #12
-
If different materials can have different shaders and pipelines, how do you switch between them in a …
Replies: 2 comments 3 replies
-
Screen 13 doesn't put any restrictions on these techniques beyond what Vulkan itself requires, so you are free to choose whatever pattern you want here:
For multiple pipelines, you might keep the same color attachments around and render on top of the previous results: draw with one pipeline, complete that pass, begin a new one, bind the second pipeline, and re-do the `attach(0, my_img)` builder functions so that the new pipeline draws into the existing textures. The key thing behind your question is probably the LOAD/STORE ops between passes, which let you accumulate rendering across them. Does this clarify things? I can get on a Zoom call and help - send me an email [email protected] See docs:
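To make the LOAD/STORE idea concrete, here is a minimal sketch in plain Rust. The `Image`, `Pass`, and `LoadOp` types are hypothetical toys, not the Screen 13 API; they only illustrate how beginning the second pass with a LOAD op (rather than a CLEAR) preserves the first pipeline's results so the second pipeline draws on top of them.

```rust
// Hypothetical sketch (NOT the Screen 13 API): accumulating the output of
// two pipelines into one image by using LOAD instead of CLEAR between passes.

#[derive(PartialEq)]
enum LoadOp {
    Clear, // wipe the attachment at the start of the pass
    Load,  // keep whatever the previous pass stored
}

struct Image {
    pixels: Vec<u32>,
}

struct Pass<'a> {
    image: &'a mut Image,
}

impl<'a> Pass<'a> {
    fn begin(image: &'a mut Image, load_op: LoadOp) -> Self {
        if load_op == LoadOp::Clear {
            image.pixels.iter_mut().for_each(|p| *p = 0);
        }
        Pass { image }
    }

    // Stand-in for "bind a pipeline and draw": each pipeline just writes a color.
    fn draw(&mut self, pipeline_color: u32, range: std::ops::Range<usize>) {
        for i in range {
            self.image.pixels[i] = pipeline_color;
        }
    }
}

fn main() {
    let mut target = Image { pixels: vec![0; 8] };

    // Pass 1: CLEAR the target, then draw with "pipeline A".
    let mut pass1 = Pass::begin(&mut target, LoadOp::Clear);
    pass1.draw(0xAA, 0..4);
    drop(pass1); // complete the pass

    // Pass 2: LOAD the previous contents, then draw with "pipeline B" on top.
    let mut pass2 = Pass::begin(&mut target, LoadOp::Load);
    pass2.draw(0xBB, 2..6);
    drop(pass2);

    // Pipeline A's results survive wherever pipeline B didn't overwrite them.
    assert_eq!(target.pixels, vec![0xAA, 0xAA, 0xBB, 0xBB, 0xBB, 0xBB, 0, 0]);
    println!("{:?}", target.pixels);
}
```

Had pass 2 used `LoadOp::Clear`, pipeline A's pixels would have been wiped before pipeline B drew, which is exactly the accumulation-vs-clear distinction the LOAD/STORE ops control.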
-
This is the expected behavior; the two passes ("Triangle Example" and "Triangle Example 2") are merged into one `vk::RenderPass` as shown, and when submitted Screen 13 executes the first pass followed by each subsequent pass.

More generally, the "efficiency" problem is, I think, a general problem in the industry; lately I have been very excited about GPU-driven rendering and those approaches, but other people have gone into bindless and using shaders to look up data dynamically. Screen 13 supports all of these techniques and does not require that hardware support bindless, because the provided descriptors are automatically pooled and allocated per call.

Two ideas to approach the efficiency…

Also, I think most people who are not Unity or Unreal just assign a target GPU usage and consider that okay - although with everyone writing a metaverse lately, this seems to be a common problem.
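To illustrate the merging behavior described above, here is a small self-contained sketch. The `merge_compatible` function and `RenderPass` struct are hypothetical toys (not Screen 13 or ash API): they only model the idea that adjacent passes targeting the same attachment can be folded into one render pass and then executed in the order they were recorded.

```rust
// Hypothetical sketch (NOT the Screen 13 API): adjacent recorded passes that
// share an attachment are merged into a single render pass; execution order
// within the merged pass follows recording order.

#[derive(Debug, PartialEq)]
struct RenderPass {
    subpasses: Vec<String>,
}

/// `passes` is a list of (pass name, attachment name) in recorded order.
/// Adjacent passes with the same attachment are merged into one RenderPass.
fn merge_compatible(passes: Vec<(&str, &str)>) -> Vec<RenderPass> {
    let mut out: Vec<(String, RenderPass)> = Vec::new();
    for (name, attachment) in passes {
        let mergeable = matches!(out.last(), Some((att, _)) if att.as_str() == attachment);
        if mergeable {
            out.last_mut().unwrap().1.subpasses.push(name.to_string());
        } else {
            out.push((
                attachment.to_string(),
                RenderPass { subpasses: vec![name.to_string()] },
            ));
        }
    }
    out.into_iter().map(|(_, rp)| rp).collect()
}

fn main() {
    let merged = merge_compatible(vec![
        ("Triangle Example", "swapchain"),
        ("Triangle Example 2", "swapchain"),
    ]);

    // Both passes become subpasses of one render pass, in recorded order.
    assert_eq!(merged.len(), 1);
    assert_eq!(merged[0].subpasses, vec!["Triangle Example", "Triangle Example 2"]);
    println!("{} merged render pass(es)", merged.len());
}
```

Real render-pass merging in Vulkan is subject to attachment-compatibility rules, but the submission-order guarantee shown here matches the behavior described in this reply.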