Project: Performant visualization of scenes with large number of entities #8233

Open · 11 tasks
teh-cmc opened this issue Nov 27, 2024 · 1 comment
Labels: 🚀 performance (Optimization, memory use, etc), project (Tracking issues for so-called "Projects")


teh-cmc commented Nov 27, 2024

Context

We want the viewer to be able to scale to scenes with large numbers of entities.
This of course means visualizing these scenes, but also ingesting them in the first place.

This is blocked on a number of specific implementation issues, but put broadly: the work the viewer has to do to lay out a scene more often than not grows linearly with the number of entities present in the entire dataset.

There are only two ways to combat this:

  • Break the linear growth where possible (e.g. only compute the data for the visible part of a list as opposed to the entire list).
  • (Incremental) caching where possible (e.g. incrementally cache and update the transform hierarchy).

Of course in many cases, option 1 isn't even an option: if the user wants to visualize all entities in the scene, then somehow we have to make that fast.

Incremental caching of aggregated data (which is what the visualizers work with) is very hard, but will be a must in order to reach our performance goals.
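To make the "incremental caching" option concrete, here is a minimal sketch of the transform-hierarchy example from the list above. Everything in it is an illustrative stand-in (plain integer ids, translations instead of full affine transforms), not actual viewer code; the point is only the shape of the invalidation logic:

```rust
use std::collections::HashMap;

// Deliberately simplified: entity ids are integers and a "transform" is just a
// translation composed by addition. The real viewer deals with `EntityPath`s
// and full transforms, but the invalidation logic has the same shape.
type EntityId = u32;
type Translation = [f32; 3];

#[derive(Default)]
struct TransformCache {
    parents: HashMap<EntityId, EntityId>,
    children: HashMap<EntityId, Vec<EntityId>>,
    local: HashMap<EntityId, Translation>,
    world: HashMap<EntityId, Translation>,
    dirty: Vec<EntityId>, // entities whose local transform changed since the last frame
}

impl TransformCache {
    /// Record a hierarchy edge (parent -> child).
    fn add_child(&mut self, parent: EntityId, child: EntityId) {
        self.parents.insert(child, parent);
        self.children.entry(parent).or_default().push(child);
        self.dirty.push(child);
    }

    /// Called on ingestion whenever an entity's local transform changes.
    fn set_local(&mut self, entity: EntityId, t: Translation) {
        self.local.insert(entity, t);
        self.dirty.push(entity);
    }

    /// Called once per frame. The cost is proportional to the number of *changed*
    /// subtrees, not to the total number of entities; a static frame does no work.
    fn update(&mut self) {
        for entity in std::mem::take(&mut self.dirty) {
            let parent_world = self
                .parents
                .get(&entity)
                .and_then(|parent| self.world.get(parent))
                .copied()
                .unwrap_or([0.0; 3]);
            self.propagate(entity, parent_world);
        }
    }

    fn propagate(&mut self, entity: EntityId, parent_world: Translation) {
        let local = self.local.get(&entity).copied().unwrap_or([0.0; 3]);
        let world = [
            parent_world[0] + local[0],
            parent_world[1] + local[1],
            parent_world[2] + local[2],
        ];
        self.world.insert(entity, world);
        for child in self.children.get(&entity).cloned().unwrap_or_default() {
            self.propagate(child, world);
        }
    }
}

fn main() {
    let mut cache = TransformCache::default();
    cache.add_child(0, 1);
    cache.set_local(0, [1.0, 0.0, 0.0]);
    cache.set_local(1, [0.0, 2.0, 0.0]);
    cache.update(); // only the touched subtree is recomputed
    assert_eq!(cache.world[&1], [1.0, 2.0, 0.0]);
}
```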

This issue is not about:

  • Making the UX work for large numbers of entities (search, etc.).

Measurable end goals

Air Traffic example (2h dataset)

  • Ingestion directly from the SDK should be bottlenecked by the SDK, always.
  • Ingestion from a file should be an order of magnitude faster, in the single-digit-seconds realm.
  • Should be able to visualize, without a visible time range, at or close to 60 FPS on any decent machine, on the web.
  • Should be able to visualize, with an infinite visible time range, at 60 FPS on high-end machines, on native.

TODO(cmc): What should we do about plotting? Is plotting 10k lines on a single plot really an important use case? If so, do we need to bring egui issues into this?

Revy

Revy was infamously bottlenecked by the viewer's performance with many entities (game scenes have a lot of them).
This is a good opportunity to revive that project, if we can make it happen.

  • Should be able to ingest and visualize Bevy's "Alien Cake Addict" example in real-time, on a decent machine, on the web.

Relevant material

Writings:

PRs:

Areas that need significant improvements

Wherever we aren't already doing something obviously silly, we should strive for a retained/cached approach, in order to scale better and to be robust against per-frame regressions in trivial-looking (== static frame) scenarios.
If this is structurally hard, revisit the structure!
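
As a deliberately tiny illustration of the retained/cached pattern: rebuild a value only when some change indicator moves, so a static frame costs a single comparison. The store generation counter here is a hypothetical stand-in, not an existing viewer API:

```rust
// Minimal sketch, assuming a hypothetical store that bumps a generation counter
// whenever data changes. In a static-frame scenario the per-frame cost collapses
// to one integer comparison.
struct Retained<T> {
    generation: u64,
    value: T,
}

impl<T> Retained<T> {
    fn new(generation: u64, value: T) -> Self {
        Self { generation, value }
    }

    /// Rebuild only if the store's generation moved; otherwise reuse the cached value.
    fn get_or_rebuild(&mut self, store_generation: u64, rebuild: impl FnOnce() -> T) -> &T {
        if self.generation != store_generation {
            self.value = rebuild();
            self.generation = store_generation;
        }
        &self.value
    }
}

fn main() {
    let mut layout = Retained::new(0, vec!["entity_0".to_owned()]);

    // Frame 1: the store changed (generation 1) -> rebuild the retained layout.
    let _ = layout.get_or_rebuild(1, || (0..10).map(|i| format!("entity_{i}")).collect());

    // Frames 2..n: nothing changed -> O(1) check, no per-frame work.
    let cached = layout.get_or_rebuild(1, || unreachable!("no rebuild expected"));
    assert_eq!(cached.len(), 10);
}
```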


teh-cmc commented Dec 2, 2024

What does performance look like when we only do the querying + chunk processing part? I.e. let's pretend we've managed to optimize out everything that visualizers do after receiving the chunks (range-zipping, color/radii/etc splatting, annotation context...).

We need to know this because our upcoming aggregated caching plan still involves running the queries every frame.

I've applied the following patch, which (very broadly) simulates that:

Diff:
diff --git a/crates/viewer/re_space_view/src/results_ext.rs b/crates/viewer/re_space_view/src/results_ext.rs
index 55e52be3e00..01d5cae4e17 100644
--- a/crates/viewer/re_space_view/src/results_ext.rs
+++ b/crates/viewer/re_space_view/src/results_ext.rs
@@ -427,7 +427,7 @@ impl<'a> HybridResultsChunkIter<'a> {
     pub fn component<C: re_types_core::Component>(
         &'a self,
     ) -> impl Iterator<Item = ((TimeInt, RowId), ChunkComponentIterItem<C>)> + 'a {
-        self.chunks.iter().flat_map(move |chunk| {
+        self.chunks.iter().filter(|_| false).flat_map(move |chunk| {
             itertools::izip!(
                 chunk.iter_component_indices(&self.timeline, &self.component_name),
                 chunk.iter_component::<C>(),
@@ -441,7 +441,7 @@ impl<'a> HybridResultsChunkIter<'a> {
     pub fn primitive<T: arrow2::types::NativeType>(
         &'a self,
     ) -> impl Iterator<Item = ((TimeInt, RowId), &'a [T])> + 'a {
-        self.chunks.iter().flat_map(move |chunk| {
+        self.chunks.iter().filter(|_| false).flat_map(move |chunk| {
             itertools::izip!(
                 chunk.iter_component_indices(&self.timeline, &self.component_name),
                 chunk.iter_primitive::<T>(&self.component_name)
@@ -458,7 +458,7 @@ impl<'a> HybridResultsChunkIter<'a> {
     where
         [T; N]: bytemuck::Pod,
     {
-        self.chunks.iter().flat_map(move |chunk| {
+        self.chunks.iter().filter(|_| false).flat_map(move |chunk| {
             itertools::izip!(
                 chunk.iter_component_indices(&self.timeline, &self.component_name),
                 chunk.iter_primitive_array::<N, T>(&self.component_name)
@@ -475,7 +475,7 @@ impl<'a> HybridResultsChunkIter<'a> {
     where
         [T; N]: bytemuck::Pod,
     {
-        self.chunks.iter().flat_map(move |chunk| {
+        self.chunks.iter().filter(|_| false).flat_map(move |chunk| {
             itertools::izip!(
                 chunk.iter_component_indices(&self.timeline, &self.component_name),
                 chunk.iter_primitive_array_list::<N, T>(&self.component_name)
@@ -489,7 +489,7 @@ impl<'a> HybridResultsChunkIter<'a> {
     pub fn string(
         &'a self,
     ) -> impl Iterator<Item = ((TimeInt, RowId), Vec<re_types_core::ArrowString>)> + 'a {
-        self.chunks.iter().flat_map(|chunk| {
+        self.chunks.iter().filter(|_| false).flat_map(|chunk| {
             itertools::izip!(
                 chunk.iter_component_indices(&self.timeline, &self.component_name),
                 chunk.iter_string(&self.component_name)
@@ -503,7 +503,7 @@ impl<'a> HybridResultsChunkIter<'a> {
     pub fn buffer<T: arrow::datatypes::ArrowNativeType + arrow2::types::NativeType>(
         &'a self,
     ) -> impl Iterator<Item = ((TimeInt, RowId), Vec<re_types_core::ArrowBuffer<T>>)> + 'a {
-        self.chunks.iter().flat_map(|chunk| {
+        self.chunks.iter().filter(|_| false).flat_map(|chunk| {
             itertools::izip!(
                 chunk.iter_component_indices(&self.timeline, &self.component_name),
                 chunk.iter_buffer(&self.component_name)
diff --git a/crates/viewer/re_space_view_spatial/src/visualizers/utilities/entity_iterator.rs b/crates/viewer/re_space_view_spatial/src/visualizers/utilities/entity_iterator.rs
index 0e535138677..3ba2cd3a26f 100644
--- a/crates/viewer/re_space_view_spatial/src/visualizers/utilities/entity_iterator.rs
+++ b/crates/viewer/re_space_view_spatial/src/visualizers/utilities/entity_iterator.rs
@@ -141,7 +141,7 @@ pub fn iter_component<'a, C: re_types::Component>(
     timeline: Timeline,
     component_name: ComponentName,
 ) -> impl Iterator<Item = ((TimeInt, RowId), ChunkComponentIterItem<C>)> + 'a {
-    chunks.iter().flat_map(move |chunk| {
+    chunks.iter().filter(|_| false).flat_map(move |chunk| {
         itertools::izip!(
             chunk.iter_component_indices(&timeline, &component_name),
             chunk.iter_component::<C>()
@@ -158,7 +158,7 @@ pub fn iter_primitive<'a, T: arrow2::types::NativeType>(
     timeline: Timeline,
     component_name: ComponentName,
 ) -> impl Iterator<Item = ((TimeInt, RowId), &'a [T])> + 'a {
-    chunks.iter().flat_map(move |chunk| {
+    chunks.iter().filter(|_| false).flat_map(move |chunk| {
         itertools::izip!(
             chunk.iter_component_indices(&timeline, &component_name),
             chunk.iter_primitive::<T>(&component_name)
@@ -178,7 +178,7 @@ pub fn iter_primitive_array<'a, const N: usize, T: arrow2::types::NativeType>(
 where
     [T; N]: bytemuck::Pod,
 {
-    chunks.iter().flat_map(move |chunk| {
+    chunks.iter().filter(|_| false).flat_map(move |chunk| {
         itertools::izip!(
             chunk.iter_component_indices(&timeline, &component_name),
             chunk.iter_primitive_array::<N, T>(&component_name)
@@ -198,7 +198,7 @@ pub fn iter_primitive_array_list<'a, const N: usize, T: arrow2::types::NativeTyp
 where
     [T; N]: bytemuck::Pod,
 {
-    chunks.iter().flat_map(move |chunk| {
+    chunks.iter().filter(|_| false).flat_map(move |chunk| {
         itertools::izip!(
             chunk.iter_component_indices(&timeline, &component_name),
             chunk.iter_primitive_array_list::<N, T>(&component_name)
@@ -215,7 +215,7 @@ pub fn iter_string<'a>(
     timeline: Timeline,
     component_name: ComponentName,
 ) -> impl Iterator<Item = ((TimeInt, RowId), Vec<re_types::ArrowString>)> + 'a {
-    chunks.iter().flat_map(move |chunk| {
+    chunks.iter().filter(|_| false).flat_map(move |chunk| {
         itertools::izip!(
             chunk.iter_component_indices(&timeline, &component_name),
             chunk.iter_string(&component_name)
@@ -232,7 +232,7 @@ pub fn iter_buffer<'a, T: arrow::datatypes::ArrowNativeType + arrow2::types::Nat
     timeline: Timeline,
     component_name: ComponentName,
 ) -> impl Iterator<Item = ((TimeInt, RowId), Vec<re_types::ArrowBuffer<T>>)> + 'a {
-    chunks.iter().flat_map(move |chunk| {
+    chunks.iter().filter(|_| false).flat_map(move |chunk| {
         itertools::izip!(
             chunk.iter_component_indices(&timeline, &component_name),
             chunk.iter_buffer(&component_name)
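
For illustration only (not part of the patch): because Rust iterator adapters are lazy, inserting `.filter(|_| false)` right after `chunks.iter()` keeps the upstream query/chunk-gathering cost while guaranteeing that none of the downstream per-chunk work ever runs. Here is a standalone sketch of that effect, with made-up stand-ins for the expensive parts:

```rust
// Standalone illustration of the `.filter(|_| false)` trick in the patch above.
// Every name here is made up; this is not viewer code.
fn expensive_query() -> Vec<u64> {
    // Stands in for gathering the chunks: this cost is still paid every frame.
    (0..1_000_000).collect()
}

fn expensive_decode(chunk: &u64) -> u64 {
    // Stands in for zipping/iterating the chunk's components: with the filter
    // in place, this is never reached.
    chunk.wrapping_mul(31)
}

fn main() {
    let chunks = expensive_query();

    let decoded: Vec<u64> = chunks
        .iter()
        .filter(|_| false) // reject every chunk: all downstream work is skipped
        .map(expensive_decode)
        .collect();

    assert!(decoded.is_empty());
}
```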

First, let's look at unmodified main (on my machine with a discrete GPU, i.e. hard mode):

main, latest-at, without plot: [screenshot]

main, latest-at, with plot: [screenshot]

main, infinite range for 3D view / latest-at for the rest, without plot: [screenshot]

main, infinite range for 3D view / latest-at for the rest, with plot: [screenshot]


Now here's where it gets interesting: consider what happens with the patch applied, i.e. with only the querying + chunk processing running:

main, latest-at, without plot -- Chunk processing only: [screenshots + flamegraph]
(NOTE: The specific values in the flamegraph are inflated due to probing overhead.)

main, infinite range for 3D view / latest-at for the rest, without plot -- Chunk processing only: [screenshot]


It looks like we can definitely afford to run the queries every frame, as long as we manage to make aggregated caching work.
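
To make that conclusion concrete, here is one possible shape for such an aggregated cache: the query still runs every frame, but the expensive aggregation is only redone when the set of chunks backing an entity changes. This is a hedged sketch with hypothetical types (`ChunkId`, `Aggregated`), not the actual caching plan:

```rust
use std::collections::hash_map::{DefaultHasher, Entry};
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hypothetical stand-ins: a `ChunkId` identifies an immutable chunk returned by
// the per-frame query, and `Aggregated` is whatever a visualizer builds out of
// those chunks (the expensive part we want to avoid redoing every frame).
type ChunkId = u64;
type EntityId = u32;

struct Aggregated {
    point_count: usize, // e.g. zipped positions/colors/radii ready for the renderer
}

#[derive(Default)]
struct AggregationCache {
    per_entity: HashMap<EntityId, (u64 /* fingerprint */, Aggregated)>,
}

impl AggregationCache {
    /// Run every frame. The query itself is re-run unconditionally (cheap, per the
    /// measurements above); the aggregation is only recomputed when the set of
    /// chunks backing this entity has changed.
    fn get_or_update(
        &mut self,
        entity: EntityId,
        queried_chunks: &[ChunkId],
        aggregate: impl FnOnce(&[ChunkId]) -> Aggregated,
    ) -> &Aggregated {
        let mut hasher = DefaultHasher::new();
        queried_chunks.hash(&mut hasher);
        let fingerprint = hasher.finish();

        match self.per_entity.entry(entity) {
            Entry::Occupied(entry) => {
                let slot = entry.into_mut();
                if slot.0 != fingerprint {
                    // The chunks changed: redo the expensive aggregation.
                    *slot = (fingerprint, aggregate(queried_chunks));
                }
                &slot.1
            }
            Entry::Vacant(entry) => &entry.insert((fingerprint, aggregate(queried_chunks))).1,
        }
    }
}

fn main() {
    let mut cache = AggregationCache::default();

    // Frame 1: the query returns chunks [1, 2] -> the aggregation runs.
    let a = cache.get_or_update(7, &[1, 2], |chunks| Aggregated {
        point_count: chunks.len() * 100,
    });
    assert_eq!(a.point_count, 200);

    // Frame 2: same chunks -> the cached aggregation is reused as-is.
    let b = cache.get_or_update(7, &[1, 2], |_| unreachable!("no re-aggregation expected"));
    assert_eq!(b.point_count, 200);
}
```

In practice the fingerprinting would likely be replaced by store/query invalidation events rather than re-hashing chunk ids every frame, but the control flow is the same.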
