Basic benchmark #69

Open · wants to merge 7 commits into main

Conversation

@CarsonBurke (Contributor) commented Aug 30, 2024

Includes changes from #66
resolves #67

I get the following warning. It seems I didn't quite set up the cameras properly. If someone knows what's up, please let me know or fix it.

2024-08-30T22:41:11.607981Z  WARN bevy_render::camera::camera: Camera order ambiguities detected for active cameras with the following priorities: {(1, Some(Window(NormalizedWindowRef(Entity { index: 0, generation: 1 }))))}. To fix this, ensure there is exactly one Camera entity spawned with a given order for a given RenderTarget. Ambiguities should be resolved because either (1) multiple active cameras were spawned accidentally, which will result in rendering multiple instances of the scene or (2) for cases where multiple active cameras is intentional, ambiguities could result in unpredictable render results.
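For reference, this warning usually means two active cameras target the same window with the same order. A minimal sketch of the usual fix, using plain Bevy camera bundles (the actual camera setup in this benchmark may differ):

```rust
use bevy::prelude::*;

// Sketch only: spawn each active camera with a distinct `order` so Bevy
// can sort cameras rendering to the same window deterministically.
fn setup_cameras(mut commands: Commands) {
    // Main scene camera, rendered first.
    commands.spawn(Camera2dBundle {
        camera: Camera { order: 0, ..default() },
        ..default()
    });

    // Any second camera targeting the same window needs its own `order`
    // (and would normally also restrict what it renders, e.g. via RenderLayers).
    commands.spawn(Camera2dBundle {
        camera: Camera { order: 1, ..default() },
        ..default()
    });
}
```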

trunk-io bot commented Aug 30, 2024

Merging to main in this repository is managed by Trunk.

  • To merge this pull request, check the box to the left or comment /trunk merge below.

@tigerplush (Collaborator)

I don't think this is what #67 proposes, so it does not have my approval.

Also, this is just a hardcopy of the krypta example with diagnostics enabled. I think it would be better to add a "benchmark" or "diagnostics" feature and conditionally compile the FrameTimeDiagnosticsPlugin for all examples.
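A minimal sketch of what that could look like, assuming a diagnostics feature is declared in Cargo.toml ([features] diagnostics = []) and using the stock Bevy diagnostic plugins:

```rust
use bevy::prelude::*;
#[cfg(feature = "diagnostics")]
use bevy::diagnostic::{FrameTimeDiagnosticsPlugin, LogDiagnosticsPlugin};

fn main() {
    let mut app = App::new();
    app.add_plugins(DefaultPlugins);

    // Only compiled in when an example is built with `--features diagnostics`,
    // so the default example builds stay untouched.
    #[cfg(feature = "diagnostics")]
    app.add_plugins((FrameTimeDiagnosticsPlugin, LogDiagnosticsPlugin::default()));

    app.run();
}
```

Running an example with diagnostics would then look like: cargo run --example krypta --features diagnostics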

@CarsonBurke (Contributor, Author)

This is not a hardcopy of Krypta with diagnostics enabled; I reused quite a bit of its code to save time developing this benchmark. It does not have moving lights or occluders, but I'm not sure those would have any utility in a benchmark for this library; #67 was mostly spitballing ideas.

The benchmark procedurally generates a 100x100 map filled with randomly placed occluding walls and light-emitting candles. Some candles jitter and some do not. I was partway through working on a system to delete and add candles every few seconds, but I believe that deserves its own benchmark as a test of how fast occluders and lights are created and destroyed, if that is even necessary.
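For context, here is a rough sketch of the kind of generation described above; the tile densities and the rand-based placement are illustrative, not the actual benchmark code:

```rust
use rand::Rng;

// Illustrative tile kinds: walls occlude light, candles emit it.
enum Tile {
    Wall,
    Candle { jitter: bool },
}

const MAP_SIZE: i32 = 100;

// Scatter occluding walls and light-emitting candles over a 100x100 grid.
fn generate_map() -> Vec<((i32, i32), Tile)> {
    let mut rng = rand::thread_rng();
    let mut tiles = Vec::new();
    for x in 0..MAP_SIZE {
        for y in 0..MAP_SIZE {
            match rng.gen_range(0..100u32) {
                // ~10% walls, ~3% candles; the densities here are made up.
                0..=9 => tiles.push(((x, y), Tile::Wall)),
                10..=12 => {
                    // Roughly half of the candles jitter, the rest are static.
                    tiles.push(((x, y), Tile::Candle { jitter: rng.gen_bool(0.5) }));
                }
                _ => {}
            }
        }
    }
    tiles
}
```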
[Three screenshots of the generated benchmark map]

This benchmark provides tangible utility. The #65 regression would have been identified if this benchmark had been run against those changes, since the generated map is large enough to contain many lights and occluders. In my own tests, the regression build runs at around 30 FPS, while the fixed build runs significantly faster, closer to 165 FPS.

Moreover, improvements like JFA can easily be tested for performance with this benchmark.

If you think there should be changes, what would you like them to be @tigerplush?

@CarsonBurke (Contributor, Author)

As for adding a benchmark or diagnostics feature to the examples, I think that is rather pointless. The other examples have too few lights and maps that are too small to pose any real performance difficulty. Krypta and the other two always run at 165 FPS (my monitor's maximum refresh rate) even with the #65 regression, which didn't really affect them, as nearly all, if not all, of their lights are always in view.

@zaycev (Owner) commented Sep 3, 2024

Will try to find time to review this week.

@CarsonBurke (Contributor, Author)

> Will try to find time to review this week.

Bump
