Ray tracing support? #639

Open
NickelCoating opened this issue Oct 7, 2020 · 22 comments
Labels
A-Rendering: Drawing game state to the screen
C-Feature: A new feature, making something new possible
D-Complex: Quite challenging from either a design or technical perspective. Ask for help!
S-Needs-Design: This issue requires design work to think about how it would best be accomplished

Comments

@NickelCoating

I figure it's not a priority. But by the time the engine itself is usable and anybody uses it to make a commercial game, ray tracing is going to be far more widespread.

Perhaps, with ray tracing being all the rage in gaming these days, you could also draw more attention to this engine and grow the community a bit faster?

The Death of Rasterization Is Not Only a Gain for AAA Games

I'm not under the impression that ray tracing is meant to co-exist with rasterization for long, not even in indie games. Rasterization is on its way out altogether. Nobody will make a AAA game with this engine any time soon, but the same goes for Unity, which already has some ray tracing support. Well, it's coming. Bevy is left behind here.

Ray tracing is absolutely fantastic for indie games that don't use AAA-grade assets, and with no additional effort on the part of the indie dev. The power of such a renderer compensates for the lack of detail from high-res textures and models. Think of Minecraft RTX (lower than low poly, lower than low texture res) and Quake 2 RTX (low poly).

Indie devs could make a Wolfenstein 3D clone that doesn't look like ass. An indie game like Stormworks, which is low poly, solid-color in style, and very dynamic in nature (construction of vehicles), could make excellent use of perfect soft shadows and GI. Shadow maps tend to run into limits here, and there are indoor/outdoor transitions to handle.

Stormworks: https://store.steampowered.com/app/573090/Stormworks_Build_and_Rescue/

The Competition

Also, there is Unreal Engine 5. Some indie devs are willing to put up with UE4, which already has ray tracing, and the programming hardships of UE4 may all disappear with UE5. To make things worse, UE5 can fully scale its high-res models, so it may no longer be necessary to make low-res models for every high-res model. This speeds up the asset creation workflow a lot for indies and AAA alike. Unity may actually lose its indie crown to UE5. Bevy is left far behind here again, not even having RT.

Graphics Card Availability

According to someone at Nvidia ~3 years ago, every AAA game would require ray tracing in ~5 years. So that's ~2 years left from now. I can't find that Twitter post now, but in my opinion it's not too far off. The RTX 50 gen cards could make that happen for AAA games in ~4 years.

Upcoming consoles have ray tracing support; they use AMD APUs, and AMD GPUs will follow very soon.

The only GTX successor to the 10 gen is the 16 gen, a GTX variant of the RTX 20 gen. And those variant cards only come in 50s and 60s tiers; there is no 70s or 80s tier because GTX is already being phased out. With this 30 gen (sadly not confirmed), or perhaps the next 40 gen (likely), GTX may go extinct entirely, and all Nvidia cards will support ray tracing, being RTX only.

The Nintendo Switch is the only console that doesn't do RT.

Gamers Owning Ray Tracing Cards

Gamer ownership of particular cards of course always lags behind actual availability on the market.

As a German saying I once came across goes (paraphrasing): "Those declared dead live longer" ("Totgesagte leben länger"). Not everybody is after perfect eye candy; some may just stick to what they already have. Though they usually play on consoles too, and consoles have a new generation with RT support coming up, which is also in high demand now.

I still have my 1080 Ti, and many skipped the RTX 20 gen because it was very expensive and very slow at ray tracing in particular. That was the first generation, and the first gen is always slow and expensive. Now the RTX 30 gen is out and widely desired. And consoles are much more affordable: those who don't care about the eye candy may still get it with the new Xbox and PS5 at a very low price. Though, disclaimer, I don't know how fast the RT cores on those AMD APUs are. Unlike Nvidia, AMD lags behind with its upscaling technology. And actually everything...

I think the 40 gen release will come in ~2 years; that's about the time one may actually release a game made with this engine!?

The 30 gen, because of its low price and high RT performance, will saturate ownership of ray-tracing-capable cards to a very large extent over the course of 2 years. Games like Minecraft RTX, which unlike Quake 2 RTX is a full conversion and is played by an extremely large number of gamers, not only drive RT card sales but also increase the appetite for proper light/shadows/reflections even in non-AAA-grade games. World of Warcraft got an RT update recently.

@Cupnfish
Contributor

Cupnfish commented Oct 7, 2020

Ray tracing on wgpu seems like a big problem right now. They would need something that supports all of the DXR / Vulkan ray tracing / Metal stuff.

@karroffel added the C-Feature and A-Rendering labels on Oct 7, 2020
@memoryruins
Contributor

Ray tracing on wgpu seems like a big problem right now. They would need something that supports all of the DXR / Vulkan ray tracing / Metal stuff.

Relevant wgpu feature request gfx-rs/wgpu-rs#247

@cart
Member

cart commented Oct 7, 2020

First: I think raytracing is both very interesting and very much a big part of the future of graphics tech. I want Bevy to support it ... eventually.

That being said, it isn't the present of graphics tech, especially if you account for platforms like mobile and web, but also if you consider that something like 99% of the computers on the planet currently can't raytrace games in real time (I made this number up, feel free to fact check and prove me wrong). And on the platforms/hardware that do support it, it's almost without exception a hybrid of traditional techniques and raytracing. This "middle ground" will exist for a long time. Traditional rasterization isn't dead tech, nor will it be for the foreseeable future.

Additionally, Bevy is intentionally modular by design. Most of the things we're building will apply to the raytracing world as well. There is no rush here to double down on raytracing. We won't get left behind.

Reasons I won't personally start building raytracing right now, or make it an official priority:

  1. We still have a lot of engine groundwork to lay before competing on graphical fidelity becomes our top priority
  2. Raytracing is still a niche market that caters to people that bought expensive graphics cards. I would rather focus on more inclusive / broader markets in the short term.
  3. Right now supporting raytracing would probably mean building a custom vulkan backend, which is a huge investment, especially when it means maintaining both that backend and the wgpu backend. It would also mean maintaining raytracing and non-raytracing render paths, which at this stage in the project would limit our ability to experiment.

Ray tracing on wgpu seems like a big problem right now. They would need something that supports all of the DXR / Vulkan ray tracing / Metal stuff.

This isn't necessarily true. They have an "extensions" api specifically for shimming in non-standard features. I have a feeling that wgpu will eventually get raytracing extension(s) for platform-specific raytracing features (but you'd have to ask @kvark about this).
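
For illustration, that mechanism already exists for other non-standard features. A minimal sketch against a recent wgpu API (exact signatures vary across wgpu versions; `PUSH_CONSTANTS` is an existing native-only feature standing in for a hypothetical raytracing feature flag, which does not exist yet):

```rust
// Sketch: requesting a native-only wgpu feature. Signatures vary across
// wgpu versions; PUSH_CONSTANTS stands in for a hypothetical raytracing flag.
async fn device_with_native_feature() -> Option<(wgpu::Device, wgpu::Queue)> {
    let instance = wgpu::Instance::default();
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await?;

    // Native-only features must be reported by the adapter and then
    // requested explicitly; on web targets they are simply never reported.
    let wanted = wgpu::Features::PUSH_CONSTANTS;
    if !adapter.features().contains(wanted) {
        return None;
    }

    adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: Some("device with native extension"),
                required_features: wanted,
                ..Default::default()
            },
            None,
        )
        .await
        .ok()
}
```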

Even if wgpu never gets raytracing support, I'm still comfortable building on it because it gets us so much platform support "for free". If we ever reach a point where we "need" raytracing, we can always build a direct vulkan bevy_render backend. As small performance wins start becoming more important, it's very possible we will start building custom platform/api-specific backends anyway.

In summary, yes I want Bevy to support raytracing eventually. No I won't prioritize it in the short term, but if anyone else wants to drive it forward, I won't stop them. Ideally if you want to add raytracing support to Bevy, you would first do it by trying to add a raytracing extension to wgpu (provided this is something they want for the project). This limits the amount of investment / maintenance burden we need to take on at such an early stage of the project, and it also helps the ecosystem as a whole.

@NickelCoating
Author

That being said, it isn't the present of graphics tech, especially if you account for platforms like mobile and web, but also if you consider that something like 99% of the computers on the planet currently can't raytrace games in real time (I made this number up, feel free to fact check and prove me wrong).

You are not wrong of course. And mobile should take way longer to adopt RT than desktop/console.

The Steam September 2020 hardware survey shows that 10.96% have an RT-capable card:

NVIDIA GeForce RTX 2060 2.89%
NVIDIA GeForce RTX 2070 SUPER 2.14%
NVIDIA GeForce RTX 2070 1.90%
NVIDIA GeForce RTX 2060 SUPER 1.30%
NVIDIA GeForce RTX 2080 0.97%
NVIDIA GeForce RTX 2080 Ti 0.93%
NVIDIA GeForce RTX 2080 SUPER 0.83%

It took 2 years for the 20 gen to come this far. The 30 gen is not going to be this slow, though. It will be interesting to see how much more it gains in 6 months as gamers finally get their ordered 30 gen cards delivered.

@NickelCoating
Author

First: I think raytracing is both very interesting and very much a big part of the future of graphics tech. I want Bevy to support it ... eventually.

That being said, it isn't the present of graphics tech, especially if you account for platforms like mobile and web, but also if you consider that something like 99% of the computers on the planet currently can't raytrace games in real time (I made this number up, feel free to fact check and prove me wrong). And on the platforms/hardware that do support it, it's almost without exception a hybrid of traditional techniques and raytracing. This "middle ground" will exist for a long time. Traditional rasterization isn't dead tech, nor will it be for the foreseeable future.

Additionally, Bevy is intentionally modular by design. Most of the things we're building will apply to the raytracing world as well. There is no rush here to double down on raytracing. We won't get left behind.

Well, I think Bevy should have two graphics pipelines like Unreal/Unity anyway; I think that's what I'm trying to say here. Mobile isn't the same as desktop, and that's why Unity/Unreal have two graphics pipelines: one somewhat basic, or universal, that can run on mobile/console but can also be beefed up on desktop; and a second one that does PBR, i.e. an HD pipeline for desktop and console. That HD pipeline should be a pure path tracer!

Right now Bevy doesn't even have a basic mobile pipeline. By the time it's there, and beefed up so that console and desktop can make maximum use of it (for a game like Torchlight, perhaps), I don't think it will be worth spending time on an HD rasteriser anymore. The expectations for "HD" today are already in the realm of RT, not rasterisation.

Regarding hybrids, I'm afraid I don't know many details; I'm not a graphics programmer, and an actual graphics programmer should comment on this. But I think it's a transition. We won't stick with them; they are only used because their graphics pipelines already exist and the 20 gen is slow with RT. But RTX is getting fast with the 30 gen: a 3080 can run Quake 2 RTX at 60 FPS at 1440p, with no DLSS. Quake and Minecraft are full path tracers, not hybrids. Granted, it's still an expensive card that costs 700 euros. Anyway.

I don't know how long it will take to get Bevy its universal graphics pipeline, or when HD will at least be worked on. But in ~2 years there should be the 40 gen release, boosting RT performance again. Perhaps that's a point in time to keep in mind.

@cart
Member

cart commented Oct 17, 2020

I agree that long term it probably makes sense to have multiple render pipelines, but short term I would much rather maintain a single pipeline that runs decently everywhere while still looking good when you need it to (ex: google's Filament).

Once the renderer, the platform impls, and the rest of the engine have stabilized a bit, then we can consider splitting pipelines in the interest of optimization and/or render features.

Right now Bevy doesn't even have a basic mobile pipeline

We currently have a pipeline that runs on mobile and doesn't make any major compromises in the interest of super pretty graphics. I'd consider that a "basic mobile pipeline"

@NickelCoating
Author

We currently have a pipeline that runs on mobile and doesn't make any major compromises in the interest of super pretty graphics. I'd consider that a "basic mobile pipeline"

Well, yes. I was using the word "basic" as a direct synonym for Unity's "universal". Basic as in not desktop HD; as in, able to render all three graphics tiers on mobile, too. What Bevy does now is the lowest tier (correct me if I'm wrong, I'm still new to this engine). Highest (e.g. normal maps) would be EVE Echoes or Raid: Shadow Legends; mid (e.g. shadow maps, cube maps, animated foliage) would be Off the Road or Offroad Legends 2; lowest is Lara Croft Go. I certainly agree that Bevy should go first and only for a universal pipeline like Unity's for now. That's what you end up doing naturally as you slowly increase the feature set and at some point go beyond the obvious mobile range into beefed-up desktop stuff like Torchlight, like a Unity-grade universal pipeline could do.

However, my point about the future of a potential HD pipeline for desktop/console (mobile excluded), whether official or custom, is still that it should be a full path tracer and not a rasteriser, or even a hybrid. For the record again, I'm no real graphics programmer, and we really need a graphics programmer's opinion on this. I just know the benchmarks and read some Nvidia whitepapers/books out of curiosity once in a while.

Hybrids aren't going to last, in my opinion; they are only a transition for now. Frankly, the reason Control got RT contact shadows is that this hybrid shadow-map tech came out before RTX was even on the horizon, and the 20 gen is a bit slow. Nvidia wrote a whitepaper on conservative rasterisation and its use for ray-traced shadow maps back in 2015: https://developer.nvidia.com/content/hybrid-ray-traced-shadows

The obvious limitation of that tech is that it requires an actual light entity; it's not emissive lighting. Metro Exodus is the same: it's only a hybrid because they slapped RT in. The Metro devs said that themselves, but take it with a grain of salt; I don't have a source, it's hearsay. Those games are already out; in other words, they are outdated on the dev timeline and only current on the gamer's timeline, for those who can buy them now. Their next games will come in 2-3 years, about the time of the 40 gen release, and they may be fully RT driven, because the 30 gen will have solidified RT over those years with games like Minecraft and their own games starting now.

Also, I don't think the difficulty of running RT at good FPS should distract from a full path tracer, because it's not that hard. I'm afraid I don't have an RTX card yet; close to nobody has a 30 gen yet thanks to low stock. Concrete numbers aside for a moment, the fact is that Minecraft and Quake 2 are full path tracers with GI always on. GI is very expensive FPS-wise. AAA games aside, some indies don't even use any GI, which is good for the scalability of an RT renderer: GI off would boost the FPS a lot for those who need it!

For what it's worth (since I don't have an RTX card yet myself): with Q2RTX's GI forced off via the console, my GTX 1080 Ti goes from 60 up to 100-110 FPS! Mind that the 1080 Ti has zero RT cores, so that's at a resolution of 800p. It will be interesting to see how an actual RTX card scales here. Also keep in mind that, unlike Minecraft, Quake doesn't use DLSS, so FPS is lost in general with that particular game even on RTX cards. DLSS didn't make it in because of licensing issues with the original Quake 2 source.

I haven't found a paper on texture space shading used with RT; it's definitely beneficial for VR. But that's another method that could potentially increase FPS by selectively updating lights only when they change. Using another method, Nvidia could increase the number of lights done in real time: https://news.developer.nvidia.com/turning-up-the-lights-interactive-path-tracing-scenes-from-a-short-film/

I keep regretting that my career choice isn't graphics programming, because of all the potential I can't elaborate on with any solid basis. Like using RT hard shadows and smoothing them out in screen space, similar to how a hard shadow map is turned into a soft shadow map in texture space. That's only an idea a real graphics programmer could investigate. I only know for a fact, from Nvidia's book Ray Tracing Gems (RT Gems 2 will be out ~March 2021), that RT hard shadows have the potential to be as fast as or even faster than cascaded shadow maps on higher 20 gen cards.

But you get the point: there is plenty of optimization that can and must be done with RT alone. One would have to optimize an HD rasteriser too, except by the time all that is done it will already be old, and you're stuck maintaining a rasteriser code base even as a hybrid. Advanced rasterisation methods are harder to implement than ray-tracing-based methods, and on top of that, game content creation also takes longer with rasterisers! This particular fact I know, and it should be kept in mind for a desktop/console HD pipeline, whether custom or official.

@Dev380

Dev380 commented Nov 9, 2020

What about this?
https://clay-rs.github.io/

@NickelCoating
Author

Nice find.

The Clay project says "real time" in its description. It seems to be editor-grade real time that accumulates rays over time, though; not game-engine real time as in 60 FPS?

I'm not sure what to think about OpenCL. Hardware RT is actually possible with it, but seemingly only via intrinsic instructions: https://developer.blender.org/T82557

We would need actual hardware RT to reach game-engine-grade speeds. Even compute on a GPU is too slow.

Personally, I don't think it's ideal to use OpenCL instead of Vulkan/DX12/wgpu. Apple is killing it off, just like OpenGL.

@Dev380

Dev380 commented Nov 10, 2020

Nice find.

The Clay project says "real time" in its description. It seems to be editor-grade real time that accumulates rays over time, though; not game-engine real time as in 60 FPS?

I'm not sure what to think about OpenCL. Hardware RT is actually possible with it, but seemingly only via intrinsic instructions: https://developer.blender.org/T82557

We would need actual hardware RT to reach game-engine-grade speeds. Even compute on a GPU is too slow.

Personally, I don't think it's ideal to use OpenCL instead of Vulkan/DX12/wgpu. Apple is killing it off, just like OpenGL.

Well, the reason I don't care what "real time" means when Clay says it, as long as it's around something like 15 FPS, is this game called Teardown. It's built with a custom game engine, mainly to support the awesome physics, which is the whole point of the game. However, for rendering they used ray tracing, and the game runs as well as a heavy shooter. Basically, the game just uses really scaled-down ray tracing, not full-power ray tracing (so I assume that means smaller sample sizes). And that was CPU ray tracing; Clay uses the GPU for all its calculations. I totally agree with you about Vulkan/DX/wgpu being the best, but Teardown worked with only the CPU, and even though OpenCL isn't as good as the graphics-specialized counterparts, it's still better than CPU-only, I guess.

Let me know what you think. Have a great day!

@NickelCoating
Author

Teardown is not CPU based

I'm not under the impression that Teardown is CPU based. The reviews show that it's heavy on the GPU, not the CPU. And when I run it, my CPU (Ryzen 7 3700x) load is only ~14%: only one of my 16 threads is under full load, two are at ~20%, and the rest are close to nothing. The GPU is at 100% load.

Low FPS in Teardown

I find it odd that Teardown is so slow even on a 1080 Ti. Back in the days of the GTX 970 we had Nvidia's voxel-based global illumination. I even tested that Unity plugin made by Sonic Ether; I forgot its name. The frame rates were about the same. Except Teardown doesn't even seem to have global illumination; it seems to use only camera rays and reflection rays. It seems to be so heavy because it does actual volumetric rendering that is not triangle based like Minecraft (Minecraft doesn't do real voxel rendering). Apparently the sampling in volumetric rendering is just slow?

I'm afraid I'm clueless myself as to why Teardown is so heavy even when nothing breaks. I spotted some voxel edge smoothing; perhaps it's this kind of stuff that trashes the FPS, plus the physics that also runs on the GPU. But I saw rendering glitches like light leaking, which suggests it is indeed volumetric rendering and not triangle based. I'm afraid I can't remember whether the dev mentioned "voxel cone" ray tracing; that would be harder evidence of a real volumetric renderer that doesn't use triangles. Yes, you can sample color info from voxels onto triangles, but that only makes sense when you have a low voxel count and a high triangle count. Teardown doesn't have high-res geometry, and it even uses per-block physics data. So it seems to be quite literally volumetric rendering in the end, just like Nvidia's Flow API, but with solid physics. The smoke in Teardown (pre Steam release) used to look like this, too: https://developer.nvidia.com/nvidia-flow

Though the shadows are smooth, it seems he may interpolate a lot; that could be another reason why the FPS is so low. You don't see blocky shadows, which would be impossible with per-voxel shadows at such a low voxel resolution.

Decoupling from screen space

Basically, the game just uses really scaled-down ray tracing, not full-power ray tracing (so I assume that means smaller sample sizes).

Voxels are decoupled from screen space: fewer voxels, fewer computations, independent of the screen resolution. Hence the use of low-res voxel grids sampled onto high-res triangle geometry in Sonic's/Nvidia's voxel-based GI, and why I can play at full resolution at a good 60 FPS on my 1080 Ti after all, unlike in Quake 2 RTX. There was no texture space shading back then, but such decoupling from screen space can now be done with triangles too, though only on Turing GPUs and higher (I guess AMD will follow). It's called texture space shading: https://developer.nvidia.com/blog/texture-space-shading/

However, decoupling from screen space, whether via voxels or texture space shading, is indeed perhaps a fantastic way to increase performance, as I already mentioned a month ago.
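
To put toy numbers on that decoupling argument (illustrative assumptions only, not measurements): shading cost scales with shaded samples times refresh rate, so a modest shading atlas updated at a lower rate does several times less work than shading every screen pixel every frame.

```rust
// Toy cost model for screen-space vs. decoupled (texture-space) shading.
// All numbers here are illustrative assumptions, not measurements.
fn samples_per_second(samples_per_frame: u64, refresh_hz: u64) -> u64 {
    samples_per_frame * refresh_hz
}

fn main() {
    // Screen-space shading: every pixel, every frame.
    let screen = samples_per_second(1920 * 1080, 60);

    // Texture-space shading: a 1024x1024 shading atlas for visible
    // surfaces, with lighting refreshed at 20 Hz instead of every frame.
    let decoupled = samples_per_second(1024 * 1024, 20);

    println!("screen space: {screen} samples/s");
    println!("decoupled:    {decoupled} samples/s");
    println!("ratio: {:.1}x less shading work", screen as f64 / decoupled as f64);
}
```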

Pixel style is more than nostalgia

As already mentioned, a classic-style Quake-like game, like the modern Wrath: Aeon of Ruin, Prodeus, Delver, or Graven, could do ray tracing extremely fast because of its pixel style, which benefits from screen-space decoupling. It's not about nostalgia anymore, or taste in art; it's a practical performance choice that can be very important for indie devs and gamers who don't own an RTX 3090, if I'm right.

The boost comes from the pixel style, which means a very low texture resolution: fewer texels, fewer computations, but still full native screen resolution. This also reduces memory footprints. While AAA will need the massive texture streaming that comes with the new consoles (you don't want off-screen data recomputed when it comes back on screen; VRAM isn't enough for AAA games), a pixel-style game may not even need that. Decoupled rendering can also refresh at its own slower rate, not just at a lower resolution. Many lights are static in general, so not even a refresh is required. And GI is damn expensive, but we wouldn't have to do it per frame anymore. As already mentioned, think of Quake 1, which samples its offline-rendered, ray-traced light maps, and does so single threaded on an ancient CPU clocked under 1 GHz with zero GPU help. And that in real time! Yes, my old Intel i5 6600 can run Quake at modern resolutions like 1080p with at least 90 FPS in the software renderer.

Today, however, we have a GPU, upscaling, and massive memory via VRAM/SSD to store and update RT as we need it. We wouldn't even suffer the limitations of light maps, which are always static (position/size is static, though light color/intensity isn't). That should mean even a fully dynamic game like Minecraft could make use of this, plus proper GI. If I recall correctly, not even Quake 3 could do proper GI (even a very static one). Many small emissive lights (think of the long thin lights at stair steps) wouldn't wreck the FPS either; they would turn static at a distance.
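
To make the Quake 1 point concrete, here is a minimal lightmap sketch (hypothetical types, not Quake's actual code): all the expensive ray tracing happens offline when the map is baked, and runtime shading reduces to one texture fetch and a multiply.

```rust
/// Baked light intensity per texel; the expensive ray tracing happened
/// offline when this was generated. (Hypothetical types, for illustration.)
struct Lightmap {
    width: usize,
    height: usize,
    texels: Vec<f32>, // 0.0..=1.0
}

impl Lightmap {
    /// Runtime cost is a single fetch, independent of how costly the bake was.
    fn sample(&self, u: f32, v: f32) -> f32 {
        let x = ((u * self.width as f32) as usize).min(self.width - 1);
        let y = ((v * self.height as f32) as usize).min(self.height - 1);
        self.texels[y * self.width + x]
    }
}

/// Shading: modulate surface albedo by the baked light.
fn shade(albedo: [f32; 3], lm: &Lightmap, u: f32, v: f32) -> [f32; 3] {
    let light = lm.sample(u, v);
    [albedo[0] * light, albedo[1] * light, albedo[2] * light]
}
```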

@Dev380

Dev380 commented Nov 11, 2020

Oh, my bad, I thought I read somewhere that Teardown uses CPU ray tracing, but I can't find anyone saying that. We might have to wait a while for ray tracing standards to settle before integrating hardware RT. But we can still use "lite" ray tracing: for example, only one game (Metro Exodus) in the original set of Nvidia RTX games uses full ray tracing; the other games only use it for things like shadows or reflections. Personally, I think PBR is enough to make games look good and non-RT shadows are good enough, but I would really like to see an option to use a ray tracing engine without RT hardware just for rendering reflections.

@NickelCoating
Author

but I would really like to see an option to use a ray tracing engine without RT hardware just for rendering reflections.

Well, by the time anything RT related is implemented, lots of people are going to have RT hardware. Leaks suggest there is going to be an RTX 3050 and no GTX for this gen; with this gen, GTX may go extinct. But yeah, the current problem is that wgpu doesn't support ray tracing. Whatever you do now will have to be done via compute.

For what it's worth, though, in case I sound too pushy about RT: before anyone implements RT, it would be a good idea to wait until Ray Tracing Gems 2 is out. It seems they pushed the book back to August 2021: http://www.realtimerendering.com/raytracinggems/rtg2/index.html

There will be a ton to learn from that book, including an "Efficiency and best practices" topic; perhaps it can demonstrate the use of texture space shading, which Ray Tracing Gems 1 didn't touch. But for the record, I'm not under the impression that those two books teach any compute-based ray tracing. It's all hardware via DXR (DX12) / VKR (Vulkan).

@ZainlessBrombie

Ray Tracing Gems 2 is out, and there has been silence here for over a year.
Does anyone have an idea of how this issue should proceed? This seems like an important thing :)

@bjorn3
Contributor

bjorn3 commented Jan 3, 2022

Wgpu doesn't yet support raytracing. That will need to be implemented first.

@novacrazy

With ReSTIR making it totally feasible to replace rasterization entirely with real-time ray tracing, it would be cool to see progress on this.

@JMS55
Contributor

JMS55 commented Apr 17, 2023

I'm working on a hardware-RT based global illumination renderer for bevy https://github.com/JMS55/bevy/tree/solari/crates/bevy_solari/src.

Currently I have all the foundational stuff necessary for using raytracing (BLAS builds, TLAS build, texture/vertex/index buffer binding arrays, etc).
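
(For anyone unfamiliar with those acronyms: BLAS/TLAS is the two-level acceleration-structure layout shared by DXR and Vulkan ray tracing. A conceptual sketch, with illustrative types rather than wgpu's or bevy_solari's actual API:)

```rust
// Conceptual two-level acceleration structure layout (illustrative types,
// not wgpu's or bevy_solari's actual API).

/// Bottom-level acceleration structure: built once per mesh from its
/// triangle geometry, reused by every instance of that mesh.
struct Blas {
    vertices: Vec<[f32; 3]>,
    indices: Vec<u32>,
}

/// One entry in the top-level structure: a BLAS plus a world transform,
/// mirroring the 3x4 matrix in VkAccelerationStructureInstanceKHR.
struct TlasInstance {
    blas_index: usize,
    transform: [[f32; 4]; 3],
}

/// Top-level acceleration structure: rebuilt (or refit) each frame as
/// objects move; rays are traced against this.
struct Tlas {
    instances: Vec<TlasInstance>,
}
```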

Soon I'll start on the actual GI-rendering stuff. This is a long term/experimental project, so don't expect anything usable soon. But for those interested, it's being worked on :)

@JMS55
Contributor

JMS55 commented Oct 3, 2023

An update a couple months later: Still working on raytraced GI. It's a very difficult and cutting-edge topic. I've opened a draft PR here: #10000 for those interested. Still lots of work to be done, especially in wgpu.

@theHamsta

Since it was not mentioned yet: there is an open request for ray queries for Vulkan in wgpu, gfx-rs/wgpu#3507, outside of the WebGPU standard. A similar path could also be implemented for D3D12 or Metal. But I assume it will take a while until ray tracing is available for wgpu.

@MiniaczQ
Contributor

MiniaczQ commented Dec 14, 2023

It was merged last week

EDIT: the PR in question

@JMS55
Contributor

JMS55 commented Dec 14, 2023

Not quite. What got merged was wgpu-hal's Vulkan support for ray tracing. wgpu-hal is essentially a very thin wrapper over Vulkan/DX12/Metal. What still needs to be done on wgpu's end:

  • Support at the wgpu-core/wgpu level (the actual public APIs that track object lifetimes and perform validation)
  • Ideally support for BLAS compaction
  • Ideally better support for bindless (binding arrays)

We're one step closer now, though :)

@Sorseg
Contributor

Sorseg commented Nov 4, 2024

Relevant wgpu issue gfx-rs/wgpu#1040

@BenjaminBrienen added the D-Complex and S-Needs-Design labels on Nov 7, 2024