Enum re_chunk_store::ChunkStoreError

pub enum ChunkStoreError {
    UnsortedChunk,
    Chunk(ChunkError),
    ParseConfig {
        name: &'static str,
        value: String,
        err: Box<dyn Error + Send + Sync>,
    },
}

Variants§
Trait Implementations§
impl Debug for ChunkStoreError
impl Display for ChunkStoreError
impl Error for ChunkStoreError
    fn source(&self) -> Option<&(dyn Error + 'static)>
        The lower-level source of this error, if any.
    fn description(&self) -> &str
        👎 Deprecated since 1.42.0: use the Display impl or to_string()
impl From<ChunkError> for ChunkStoreError
    fn from(source: ChunkError) -> Self
Auto Trait Implementations§
impl Freeze for ChunkStoreError
impl !RefUnwindSafe for ChunkStoreError
impl Send for ChunkStoreError
impl Sync for ChunkStoreError
impl Unpin for ChunkStoreError
impl !UnwindSafe for ChunkStoreError
Blanket Implementations§
impl<T> Any for T where ...
impl<T> BorrowMut<T> for T where ...
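The enum above follows the standard Rust error pattern: a `From<ChunkError>` impl lets `?` auto-convert lower-level errors, and `Error::source` exposes the underlying cause. A minimal self-contained sketch of that pattern — the `ChunkError` here is a stand-in placeholder, not the real `re_chunk` type:

```rust
use std::error::Error;
use std::fmt;

// Stand-in for re_chunk::ChunkError, for illustration only.
#[derive(Debug)]
struct ChunkError(String);

impl fmt::Display for ChunkError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "chunk error: {}", self.0)
    }
}
impl Error for ChunkError {}

// Mirrors the shape of ChunkStoreError above (a sketch, not the real impl).
#[derive(Debug)]
enum ChunkStoreError {
    UnsortedChunk,
    Chunk(ChunkError),
    ParseConfig {
        name: &'static str,
        value: String,
        err: Box<dyn Error + Send + Sync>,
    },
}

impl fmt::Display for ChunkStoreError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::UnsortedChunk => write!(f, "chunk must be sorted before insertion"),
            Self::Chunk(err) => write!(f, "{err}"),
            Self::ParseConfig { name, value, err } => {
                write!(f, "failed to parse config {name}={value}: {err}")
            }
        }
    }
}

impl Error for ChunkStoreError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            Self::UnsortedChunk => None,
            Self::Chunk(err) => Some(err),
            Self::ParseConfig { err, .. } => {
                // Coerce away the Send + Sync auto traits.
                let err: &(dyn Error + 'static) = err.as_ref();
                Some(err)
            }
        }
    }
}

// The From impl is what lets `?` convert a ChunkError automatically.
impl From<ChunkError> for ChunkStoreError {
    fn from(source: ChunkError) -> Self {
        Self::Chunk(source)
    }
}

fn parse_chunk() -> Result<(), ChunkError> {
    Err(ChunkError("bad schema".to_owned()))
}

fn load() -> Result<(), ChunkStoreError> {
    parse_chunk()?; // ChunkError auto-converted to ChunkStoreError::Chunk
    Ok(())
}

fn main() {
    let err = load().unwrap_err();
    assert!(matches!(err, ChunkStoreError::Chunk(_)));
    assert!(err.source().is_some());
}
```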
Module re_chunk_store::events

Structs§
- Reports which Chunks were merged into a new Chunk during a compaction.
- Describes an atomic change in the Rerun ChunkStore: a chunk has been added or deleted.
- The atomic unit of change in the Rerun ChunkStore.
Enums§
- Is it an addition or a deletion?
Struct re_chunk_store::events::ChunkCompactionReport

pub struct ChunkCompactionReport {
    pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
    pub new_chunk: Arc<Chunk>,
}

Reports which Chunks were merged into a new Chunk during a compaction.

Fields§
srcs: BTreeMap<ChunkId, Arc<Chunk>>
    The chunks that were merged into a new chunk.
new_chunk: Arc<Chunk>
    The new chunk that was created as the result of the compaction.
Trait Implementations§
impl Clone for ChunkCompactionReport
    fn clone(&self) -> ChunkCompactionReport
    fn clone_from(&mut self, source: &Self)
impl Debug for ChunkCompactionReport
impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
impl Freeze for ChunkCompactionReport
impl !RefUnwindSafe for ChunkCompactionReport
impl Send for ChunkCompactionReport
impl Sync for ChunkCompactionReport
impl Unpin for ChunkCompactionReport
impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
impl<T> Any for T where ...
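A compaction report maps every source `ChunkId` to the chunk it came from, alongside the single chunk they were merged into. A self-contained sketch with hypothetical stand-in `ChunkId`/`Chunk` types (the real ones live in `re_chunk` and are far richer):

```rust
use std::collections::BTreeMap;
use std::sync::Arc;

// Hypothetical stand-ins for re_chunk::{ChunkId, Chunk}.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct ChunkId(u64);

#[derive(Debug)]
struct Chunk {
    id: ChunkId,
    num_rows: usize,
}

// Sketch of the shape of ChunkCompactionReport documented above.
struct ChunkCompactionReport {
    srcs: BTreeMap<ChunkId, Arc<Chunk>>,
    new_chunk: Arc<Chunk>,
}

impl ChunkCompactionReport {
    /// Was `chunk_id` one of the chunks merged away by this compaction?
    fn contains_src(&self, chunk_id: ChunkId) -> bool {
        self.srcs.contains_key(&chunk_id)
    }
}

fn main() {
    let a = Arc::new(Chunk { id: ChunkId(1), num_rows: 10 });
    let b = Arc::new(Chunk { id: ChunkId(2), num_rows: 5 });
    let merged = Arc::new(Chunk { id: ChunkId(3), num_rows: 15 });

    let report = ChunkCompactionReport {
        srcs: [(a.id, a.clone()), (b.id, b.clone())].into_iter().collect(),
        new_chunk: merged,
    };

    // The destination chunk's id is fresh: it is never among the sources.
    assert!(!report.contains_src(report.new_chunk.id));
    assert!(report.contains_src(ChunkId(1)));

    // In this sketch, the merged chunk carries all the rows of its sources.
    let total: usize = report.srcs.values().map(|c| c.num_rows).sum();
    assert_eq!(total, report.new_chunk.num_rows);
}
```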
Struct re_chunk_store::events::ChunkStoreDiff

pub struct ChunkStoreDiff {
    pub kind: ChunkStoreDiffKind,
    pub chunk: Arc<Chunk>,
    pub compacted: Option<ChunkCompactionReport>,
}

Describes an atomic change in the Rerun ChunkStore: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of ChunkIds being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go: there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.

Fields§
[…]
That doesn't necessarily mean that the data is actually gone, i.e. don't make assumptions about e.g. the size in bytes of the store based on these events.
They are in "query-model space" and are not an accurate representation of what happens in storage space.
chunk: Arc<Chunk>
    The chunk that was added or removed.
    If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is what will be exposed here.
    This allows subscribers to only process data that is new, as opposed to having to reprocess old rows that appear to have been removed and then reinserted due to compaction.
    To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted field below.
compacted: Option<ChunkCompactionReport>
    Reports which Chunks were merged into a new Chunk during a compaction.
    This is only specified if an addition to the store triggered a compaction.
    When that happens, it is guaranteed that ChunkStoreDiff::chunk will be present in the set of source chunks, since it was compacted on arrival.
    A corollary is that the destination Chunk must never have been seen before, i.e. its ChunkId must never have been seen before.
Implementations§
impl ChunkStoreDiff
    pub fn addition(
        chunk: Arc<Chunk>,
        compacted: Option<ChunkCompactionReport>,
    ) -> Self
    pub fn deletion(chunk: Arc<Chunk>) -> Self
    pub fn is_static(&self) -> bool
Trait Implementations§
impl Clone for ChunkStoreDiff
    fn clone(&self) -> ChunkStoreDiff
    fn clone_from(&mut self, source: &Self)
impl Debug for ChunkStoreDiff
impl PartialEq for ChunkStoreDiff
impl Eq for ChunkStoreDiff
Auto Trait Implementations§
impl Freeze for ChunkStoreDiff
impl !RefUnwindSafe for ChunkStoreDiff
impl Send for ChunkStoreDiff
impl Sync for ChunkStoreDiff
impl Unpin for ChunkStoreDiff
impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
impl<T> Any for T where ...
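The `compacted` guarantee above — the diff's pre-compaction chunk is always among the report's sources, and the destination chunk's id is always fresh — can be sketched with hypothetical stand-in types (the `kind` field is omitted here to keep the sketch small):

```rust
use std::collections::BTreeMap;
use std::sync::Arc;

// Hypothetical stand-ins; the real Chunk/ChunkId live in re_chunk.
type ChunkId = u64;

struct Chunk {
    id: ChunkId,
}

struct ChunkCompactionReport {
    srcs: BTreeMap<ChunkId, Arc<Chunk>>,
    new_chunk: Arc<Chunk>,
}

// Simplified ChunkStoreDiff: only the fields relevant to compaction.
struct ChunkStoreDiff {
    chunk: Arc<Chunk>,
    compacted: Option<ChunkCompactionReport>,
}

impl ChunkStoreDiff {
    fn addition(chunk: Arc<Chunk>, compacted: Option<ChunkCompactionReport>) -> Self {
        Self { chunk, compacted }
    }
}

fn main() {
    let older = Arc::new(Chunk { id: 0 });
    let incoming = Arc::new(Chunk { id: 1 });
    let merged = Arc::new(Chunk { id: 2 });

    let report = ChunkCompactionReport {
        srcs: [(0, older), (1, incoming.clone())].into_iter().collect(),
        new_chunk: merged,
    };
    let diff = ChunkStoreDiff::addition(incoming, Some(report));

    let report = diff.compacted.as_ref().unwrap();
    // The (pre-compaction) chunk exposed by the diff is among the sources…
    assert!(report.srcs.contains_key(&diff.chunk.id));
    // …and the destination chunk's id has never been seen before.
    assert!(!report.srcs.contains_key(&report.new_chunk.id));
}
```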
Struct re_chunk_store::events::ChunkStoreEvent

[…]
    pub diff: ChunkStoreDiff,
}

The atomic unit of change in the Rerun ChunkStore.
A ChunkStoreEvent describes the changes caused by the addition or deletion of a Chunk in the store.
Methods that mutate the ChunkStore, such as ChunkStore::insert_chunk and ChunkStore::gc, return ChunkStoreEvents that describe the changes.
You can also register your own ChunkStoreSubscriber in order to be notified of changes as soon as they […]
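The real `ChunkStoreSubscriber` API is not shown in this excerpt; as an illustration only, here is a sketch of the pattern it enables — derived state kept consistent by applying addition/deletion events as they arrive — using hypothetical stand-in types:

```rust
// Hypothetical stand-ins for the event types documented above.
#[derive(Clone, Copy, PartialEq, Eq)]
enum ChunkStoreDiffKind {
    Addition,
    Deletion,
}

struct ChunkStoreEvent {
    kind: ChunkStoreDiffKind,
    num_rows: i64,
}

// A subscriber-style consumer maintaining a running row count.
#[derive(Default)]
struct RowCounter {
    num_rows: i64,
}

impl RowCounter {
    fn on_events(&mut self, events: &[ChunkStoreEvent]) {
        for event in events {
            match event.kind {
                ChunkStoreDiffKind::Addition => self.num_rows += event.num_rows,
                ChunkStoreDiffKind::Deletion => self.num_rows -= event.num_rows,
            }
        }
    }
}

fn run_demo() -> i64 {
    let mut counter = RowCounter::default();
    counter.on_events(&[
        ChunkStoreEvent { kind: ChunkStoreDiffKind::Addition, num_rows: 10 },
        ChunkStoreEvent { kind: ChunkStoreDiffKind::Addition, num_rows: 5 },
        // e.g. a later gc() returns deletion events:
        ChunkStoreEvent { kind: ChunkStoreDiffKind::Deletion, num_rows: 10 },
    ]);
    counter.num_rows
}

fn main() {
    assert_eq!(run_demo(), 5);
}
```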
Module re_chunk_store::external

Re-exports§
pub use arrow2;
pub use re_chunk;
Struct re_chunk_store::gc::GarbageCollectionOptions

[…]
    The default is an unbounded time budget (i.e. throughput only).
protect_latest: usize
    How many component revisions to preserve on each timeline.
protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
    Do not remove any data within these time ranges.
Implementations§
impl GarbageCollectionOptions
    pub fn gc_everything() -> Self
    pub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
        If true, we cannot remove this chunk.
Trait Implementations§
impl Clone for GarbageCollectionOptions
    fn clone(&self) -> GarbageCollectionOptions
    fn clone_from(&mut self, source: &Self)
Auto Trait Implementations§
impl Freeze for GarbageCollectionOptions
impl RefUnwindSafe for GarbageCollectionOptions
impl Send for GarbageCollectionOptions
impl Sync for GarbageCollectionOptions
impl Unpin for GarbageCollectionOptions
impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
impl<T> Borrow<T> for T where ...
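A plausible reading of `is_chunk_protected` is an interval-intersection check: a chunk is protected if any of its per-timeline time ranges overlaps a protected range. A sketch under that assumption, with stand-in types in place of `re_log_types::{Timeline, ResolvedTimeRange}` and `re_chunk::Chunk`:

```rust
use std::collections::HashMap;

// Stand-in types for illustration only.
type Timeline = &'static str;

#[derive(Clone, Copy)]
struct TimeRange {
    min: i64,
    max: i64,
}

impl TimeRange {
    fn intersects(&self, other: &TimeRange) -> bool {
        self.min <= other.max && other.min <= self.max
    }
}

struct Chunk {
    time_range_per_timeline: HashMap<Timeline, TimeRange>,
}

struct GarbageCollectionOptions {
    protected_time_ranges: HashMap<Timeline, TimeRange>,
}

impl GarbageCollectionOptions {
    /// If true, we cannot remove this chunk (sketch of the check, not the real impl).
    fn is_chunk_protected(&self, chunk: &Chunk) -> bool {
        self.protected_time_ranges.iter().any(|(timeline, protected)| {
            chunk
                .time_range_per_timeline
                .get(timeline)
                .map_or(false, |range| range.intersects(protected))
        })
    }
}

fn main() {
    let mut chunk = Chunk { time_range_per_timeline: HashMap::new() };
    chunk.time_range_per_timeline.insert("log_time", TimeRange { min: 10, max: 20 });

    let mut options = GarbageCollectionOptions { protected_time_ranges: HashMap::new() };
    options.protected_time_ranges.insert("log_time", TimeRange { min: 15, max: 30 });
    assert!(options.is_chunk_protected(&chunk)); // [10,20] overlaps [15,30]

    options.protected_time_ranges.insert("log_time", TimeRange { min: 30, max: 40 });
    assert!(!options.is_chunk_protected(&chunk)); // [10,20] misses [30,40]
}
```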
Type Alias re_chunk_store::gc::RemovableChunkIdPerTimePerComponentPerTimelinePerEntity

pub type RemovableChunkIdPerTimePerComponentPerTimelinePerEntity = IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>;

Aliased Type§
struct RemovableChunkIdPerTimePerComponentPerTimelinePerEntity { /* private fields */ }
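The alias is a four-level nested map (entity → timeline → component → time → chunk ids). Building one is mostly `entry().or_default()` chains; a sketch using `std::collections::HashMap` as a stand-in for `IntMap` and plain strings/integers for the key types:

```rust
use std::collections::HashMap;

// Stand-ins for illustration; the real keys are EntityPath, Timeline, etc.,
// and the real outer maps are nohash IntMaps.
type EntityPath = String;
type Timeline = String;
type ComponentName = String;
type TimeInt = i64;
type ChunkId = u64;

type RemovableChunkIdPerTimePerComponentPerTimelinePerEntity = HashMap<
    EntityPath,
    HashMap<Timeline, HashMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>,
>;

fn main() {
    let mut removable: RemovableChunkIdPerTimePerComponentPerTimelinePerEntity =
        HashMap::new();

    // Mark chunk 7 as removable for one (entity, timeline, component, time) cell.
    removable
        .entry("world/points".to_owned())
        .or_default()
        .entry("log_time".to_owned())
        .or_default()
        .entry("Position3D".to_owned())
        .or_default()
        .entry(42)
        .or_default()
        .push(7);

    assert_eq!(removable["world/points"]["log_time"]["Position3D"][&42], vec![7]);
}
```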
Crate re_chunk_store

[…]
§Feature flags
deadlock_detection — Enables parking_lot's deadlock detection background thread.

Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- Reports which Chunks were merged into a new Chunk during a compaction.
- A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun ChunkStore: a chunk has been added or deleted.
- The atomic unit of change in the Rerun ChunkStore.
- Incremented on each edit.
- A ref-counted, inner-mutable handle to a ChunkStore.
- Describes a data/component column, such as Position3D.
- Select a component based on its EntityPath and ComponentName.
- Describes a complete query for Rerun's dataframe API.
- Describes a time column, such as log_time.
- Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on, expressed as a set of EntityPaths and their associated ComponentNames.
Enums§
- Is it an addition or a deletion?
- Describes a column selection to return as part of a query.
- Specifies how null values should be filled in the returned dataframe.
Traits§
- A ChunkStoreSubscriber that is instantiated for each unique StoreId.
Type Aliases§
Struct re_chunk_store::stats::ChunkStoreChunkStats

[…]
    This is usually the same as the number of log calls the user made.
    Each row can contain multiple events (see Self::num_events).
num_events: u64
    How many component batches ("cells").
Implementations§
impl ChunkStoreChunkStats
    pub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
impl Add for ChunkStoreChunkStats
impl AddAssign for ChunkStoreChunkStats
    fn add_assign(&mut self, rhs: Self)
impl Clone for ChunkStoreChunkStats
    fn clone(&self) -> ChunkStoreChunkStats
    fn clone_from(&mut self, source: &Self)
impl Debug for ChunkStoreChunkStats
impl Default for ChunkStoreChunkStats
    fn default() -> ChunkStoreChunkStats
impl Display for ChunkStoreChunkStats
impl Sub for ChunkStoreChunkStats
impl SubAssign for ChunkStoreChunkStats
    fn sub_assign(&mut self, rhs: Self)
impl Sum for ChunkStoreChunkStats
    fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
impl Freeze for ChunkStoreChunkStats
impl RefUnwindSafe for ChunkStoreChunkStats
impl Send for ChunkStoreChunkStats
impl Sync for ChunkStoreChunkStats
impl Unpin for ChunkStoreChunkStats
impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
impl<T> Borrow<T> for T where ...
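The `Add`/`AddAssign`/`Sum` impls make per-chunk stats cheap to accumulate into store-wide totals. A self-contained sketch of that pattern on a stats struct with assumed field names (the real struct's fields are not fully shown in this excerpt):

```rust
use std::iter::Sum;
use std::ops::{Add, AddAssign};

// Sketch of the stats-accumulation pattern; field names are assumptions.
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
struct ChunkStoreChunkStats {
    num_chunks: u64,
    num_rows: u64,
    num_events: u64,
}

impl Add for ChunkStoreChunkStats {
    type Output = Self;
    fn add(self, rhs: Self) -> Self {
        Self {
            num_chunks: self.num_chunks + rhs.num_chunks,
            num_rows: self.num_rows + rhs.num_rows,
            num_events: self.num_events + rhs.num_events,
        }
    }
}

impl AddAssign for ChunkStoreChunkStats {
    fn add_assign(&mut self, rhs: Self) {
        *self = *self + rhs;
    }
}

impl Sum for ChunkStoreChunkStats {
    fn sum<I: Iterator<Item = Self>>(iter: I) -> Self {
        iter.fold(Self::default(), Add::add)
    }
}

fn main() {
    // One stats value per chunk, e.g. produced by from_chunk().
    let per_chunk = vec![
        ChunkStoreChunkStats { num_chunks: 1, num_rows: 10, num_events: 30 },
        ChunkStoreChunkStats { num_chunks: 1, num_rows: 5, num_events: 5 },
    ];
    let total: ChunkStoreChunkStats = per_chunk.into_iter().sum();
    assert_eq!(total.num_chunks, 2);
    assert_eq!(total.num_rows, 15);
    assert_eq!(total.num_events, 35);
}
```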
Struct re_chunk_store::store::ChunkIdSetPerTime

pub struct ChunkIdSetPerTime {
    pub(crate) max_interval_length: u64,
    pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
    pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}

Fields§
max_interval_length: u64
    Keeps track of the longest interval currently stored in the two maps below.
    This is used to bound the backwards linear walk when looking for overlapping chunks in latest-at queries.
    See the ChunkStore::latest_at implementation comments for more details.
per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
    ChunkIds organized by their most specific start time.
    What "most specific" means depends on the context in which the ChunkIdSetPerTime was instantiated, e.g.:
    - For an (entity, timeline, component) index, that would be the first timestamp at which this Chunk contains data for this particular component on this particular timeline (see Chunk::time_range_per_component).
    - For an (entity, timeline) index, that would be the first timestamp at which this Chunk contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range).
per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
    ChunkIds organized by their most specific end time.
    What "most specific" means depends on the context in which the ChunkIdSetPerTime was instantiated, e.g.:
    - For an (entity, timeline, component) index, that would be the last timestamp at which this Chunk contains data for this particular component on this particular timeline (see Chunk::time_range_per_component).
    - For an (entity, timeline) index, that would be the last timestamp at which this Chunk contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range).
Trait Implementations§
impl Clone for ChunkIdSetPerTime
    fn clone(&self) -> ChunkIdSetPerTime
    fn clone_from(&mut self, source: &Self)
impl Debug for ChunkIdSetPerTime
impl Default for ChunkIdSetPerTime
    fn default() -> ChunkIdSetPerTime
Auto Trait Implementations§
impl Freeze for ChunkIdSetPerTime
impl RefUnwindSafe for ChunkIdSetPerTime
impl Send for ChunkIdSetPerTime
impl Sync for ChunkIdSetPerTime
impl Unpin for ChunkIdSetPerTime
impl UnwindSafe for ChunkIdSetPerTime
Blanket Implementations§
impl<T> Az for T
    fn az<Dst>(self) -> Dst where ...
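The point of `max_interval_length` is that any chunk starting before `query_time - max_interval_length` cannot possibly reach `query_time`, which turns the backwards walk over `per_start_time` into a bounded scan. A simplified, self-contained sketch of that idea (stand-in types, and a deliberately naive end-time check; the real query logic is more involved):

```rust
use std::collections::{BTreeMap, BTreeSet};

type TimeInt = i64;
type ChunkId = u64;

// Sketch of the two-map index documented above.
#[derive(Default)]
struct ChunkIdSetPerTime {
    max_interval_length: u64,
    per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
    per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}

impl ChunkIdSetPerTime {
    fn insert(&mut self, start: TimeInt, end: TimeInt, chunk_id: ChunkId) {
        self.max_interval_length = self.max_interval_length.max((end - start) as u64);
        self.per_start_time.entry(start).or_default().insert(chunk_id);
        self.per_end_time.entry(end).or_default().insert(chunk_id);
    }

    /// All chunks whose [start, end] interval contains `at`.
    /// A chunk starting before `at - max_interval_length` cannot reach `at`,
    /// which is what bounds the walk over `per_start_time`.
    fn overlapping_at(&self, at: TimeInt) -> BTreeSet<ChunkId> {
        let lower_bound = at - self.max_interval_length as i64;
        self.per_start_time
            .range(lower_bound..=at) // bounded scan: start <= at
            .flat_map(|(_, ids)| ids.iter().copied())
            .filter(|id| {
                // keep only chunks whose end reaches `at` (end >= at)
                self.per_end_time.range(at..).any(|(_, ids)| ids.contains(id))
            })
            .collect()
    }
}

fn main() {
    let mut index = ChunkIdSetPerTime::default();
    index.insert(0, 10, 1);
    index.insert(5, 7, 2);
    index.insert(20, 30, 3);

    assert_eq!(index.overlapping_at(6), BTreeSet::from([1, 2]));
    assert_eq!(index.overlapping_at(9), BTreeSet::from([1]));
    assert_eq!(index.overlapping_at(25), BTreeSet::from([3]));
}
```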
Struct re_chunk_store::store::ChunkStore

[…]
    pub(crate) config: ChunkStoreConfig,
    pub(crate) type_registry: IntMap<ComponentName, DataType>,
    pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
    pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
    pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
    pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
    pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
    pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
    pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
    pub(crate) static_chunks_stats: ChunkStoreChunkStats,
    pub(crate) insert_id: u64,
    pub(crate) query_id: AtomicU64,
Fields§
type_registry: IntMap<ComponentName, DataType>
    Keeps track of the latest datatype information for all component types that have been written to the store so far.
    See also Self::lookup_datatype.
per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
    All ChunkIds currently in the store, indexed by the smallest RowId in each of them.
    This is effectively all chunks in global data order. Used for garbage collection.
    This is a map of vecs instead of individual ChunkIds in order to better support duplicated RowIds.
temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
    All temporal ChunkIds for all entities on all timelines, further indexed by ComponentName.
    See also:
temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
    All temporal ChunkIds for all entities on all timelines, without the ComponentName index.
    See also:
temporal_chunks_stats: ChunkStoreChunkStats
    Accumulated size statistics for all temporal Chunks currently present in the store.
    This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
    Static data. Never garbage collected.
    Static data unconditionally shadows temporal data at query time.
    Existing temporal data will not be removed. Events won't be fired.
static_chunks_stats: ChunkStoreChunkStats
    Accumulated size statistics for all static Chunks currently present in the store.
    This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
insert_id: u64
    Monotonically increasing ID for insertions.
query_id: AtomicU64
    Monotonically increasing ID for queries.
[…]
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)
    Triggers a garbage collection according to the desired target.
    Returns the list of Chunks that were purged from the store in the form of ChunkStoreEvents.
    §Semantics
    Garbage collection works on a chunk-level basis and is driven by RowId order (specifically, the smallest RowId of each respective Chunk), i.e. the order defined by the clients' wall-clocks, allowing it to drop data across the different timelines in a fair, deterministic manner.
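The semantics above — drop whole chunks in global `RowId` order until enough bytes are freed, skipping protected chunks — can be sketched as follows (stand-in types; the real index maps `RowId` to a `Vec<ChunkId>` and the real routine emits `ChunkStoreDiff`s):

```rust
use std::collections::BTreeMap;

// Stand-ins for illustration only.
type RowId = u64;
type ChunkId = u64;

struct ChunkMeta {
    chunk_id: ChunkId,
    num_bytes: u64,
    protected: bool,
}

/// Drop whole chunks in ascending min-RowId order (the clients' wall-clock order)
/// until at least `num_bytes_to_drop` bytes are freed. Returns the dropped ChunkIds.
fn gc_drop_at_least_num_bytes(
    chunk_ids_per_min_row_id: &BTreeMap<RowId, ChunkMeta>,
    mut num_bytes_to_drop: i64,
) -> Vec<ChunkId> {
    let mut dropped = Vec::new();
    // BTreeMap iteration is ordered by key, i.e. by smallest RowId per chunk.
    for meta in chunk_ids_per_min_row_id.values() {
        if num_bytes_to_drop <= 0 {
            break;
        }
        if meta.protected {
            continue;
        }
        num_bytes_to_drop -= meta.num_bytes as i64;
        dropped.push(meta.chunk_id);
    }
    dropped
}

fn main() {
    let chunks = BTreeMap::from([
        (1, ChunkMeta { chunk_id: 10, num_bytes: 100, protected: false }),
        (2, ChunkMeta { chunk_id: 11, num_bytes: 50, protected: true }),
        (3, ChunkMeta { chunk_id: 12, num_bytes: 200, protected: false }),
    ]);
    // Oldest unprotected chunks go first; the protected one is skipped.
    assert_eq!(gc_drop_at_least_num_bytes(&chunks, 150), vec![10, 12]);
}
```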
    §Limitations
    […] component on each timeline. The only practical guarantee this gives is that a latest-at query with a value of max-int will be unchanged. However, latest-at queries from other arbitrary points in time may provide different results pre- and post-GC.
fn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
    For each EntityPath, Timeline, and Component, find the N latest ChunkIds.
fn gc_drop_at_least_num_bytes(
    &mut self,
    options: &GarbageCollectionOptions,
    num_bytes_to_drop: f64,
    protected_chunk_ids: &BTreeSet<ChunkId>,
) -> Vec<ChunkStoreDiff>
pub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
    Surgically removes a temporal ChunkId from all indices.
    This is orders of magnitude faster than trying to retain() on all our internal indices.
    See also ChunkStore::remove_chunks.
pub(crate) fn remove_chunks(
    &mut self,
    chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
    time_budget: Option<(Instant, Duration)>,
) -> Vec<ChunkStoreDiff>
    Surgically removes a set of temporal ChunkIds from all indices.
    This is orders of magnitude faster than trying to retain() on all our internal indices, when you already know where these chunks live.
    See also ChunkStore::remove_chunk.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always work at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always work at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
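The three insertion rules listed above can be sketched as a std-only toy store. `NewChunk`, `ToyStore`, and the `bool` return value are hypothetical simplifications (the real `insert_chunk` takes an `Arc<Chunk>` and returns `ChunkStoreResult<Vec<ChunkStoreEvent>>`):

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for a real `Chunk`: just an id, a sorted flag and a
// row count, instead of a column-oriented batch of data.
#[derive(Clone)]
struct NewChunk {
    id: u64,
    is_sorted: bool,
    num_rows: usize,
}

#[derive(Default)]
struct ToyStore {
    chunks_per_chunk_id: BTreeMap<u64, NewChunk>,
}

impl ToyStore {
    /// Mirrors the documented insertion contract: an unsorted chunk is an
    /// error, a duplicated id or an empty chunk is a silent no-op, and only
    /// an actual modification reports `true` (where the real API would
    /// notify subscribers and return `ChunkStoreEvent`s).
    fn insert_chunk(&mut self, chunk: &NewChunk) -> Result<bool, &'static str> {
        if !chunk.is_sorted {
            return Err("chunk must be sorted before insertion");
        }
        if chunk.num_rows == 0 || self.chunks_per_chunk_id.contains_key(&chunk.id) {
            return Ok(false); // no-op: nothing changed, no events fired
        }
        self.chunks_per_chunk_id.insert(chunk.id, chunk.clone());
        Ok(true)
    }
}
```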
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
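The byte threshold just described can be sketched as a simple predicate. `may_compact_into` is a hypothetical helper written for illustration, not the actual compaction logic:

```rust
/// Sketch of the `chunk_max_bytes` rule: a candidate that has already
/// reached the byte threshold is not compacted any further, and a merge is
/// only worthwhile if the combined size stays within the threshold.
fn may_compact_into(
    candidate_num_bytes: u64,
    incoming_num_bytes: u64,
    chunk_max_bytes: u64,
) -> bool {
    candidate_num_bytes < chunk_max_bytes
        && candidate_num_bytes + incoming_num_bytes <= chunk_max_bytes
}
```

With the default 8 MiB threshold, a 4 MiB candidate can still absorb a 2 MiB chunk, while a candidate already at 8 MiB is left alone.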
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
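The shadowing rule above, where static data unconditionally wins over temporal data at query time, can be sketched with plain maps. `resolve`, the string component names, and the `i64` values are hypothetical stand-ins:

```rust
use std::collections::BTreeMap;

/// Sketch of static shadowing: if a component has a static value, it
/// unconditionally shadows any temporal value, regardless of timestamps;
/// temporal data is only consulted when no static value exists.
fn resolve(
    static_values: &BTreeMap<&str, i64>,
    temporal_values: &BTreeMap<&str, i64>,
    component: &str,
) -> Option<i64> {
    static_values
        .get(component)
        .or_else(|| temporal_values.get(component))
        .copied()
}
```

Note that the temporal entry is not removed when a static one exists; it is merely never returned, matching the "shadowed, not deleted" behavior described above.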
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
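The chunk-level, RowId-driven semantics above can be sketched with a std-only index from smallest RowId to chunks. The `u64` ids and the `(chunk_id, num_bytes)` pairs are hypothetical simplifications of the real bookkeeping:

```rust
use std::collections::BTreeMap;

/// Sketch of chunk-level GC in global data order: chunks are indexed by the
/// smallest RowId they contain and dropped oldest-first until at least
/// `num_bytes_to_drop` bytes are gone. Chunks sharing a min RowId go
/// together. Returns the ids of the dropped chunks.
fn gc_drop_at_least(
    chunk_ids_per_min_row_id: &mut BTreeMap<u64, Vec<(u64, u64)>>,
    num_bytes_to_drop: u64,
) -> Vec<u64> {
    let mut dropped = Vec::new();
    let mut freed = 0;
    while freed < num_bytes_to_drop {
        // `pop_first` yields the smallest min-RowId: the oldest data in the
        // clients' wall-clock order, across all timelines at once.
        let Some((_min_row_id, chunks)) = chunk_ids_per_min_row_id.pop_first() else {
            break; // store exhausted before the budget was met
        };
        for (chunk_id, num_bytes) in chunks {
            freed += num_bytes;
            dropped.push(chunk_id);
        }
    }
    dropped
}
```

Because whole chunks are dropped at a time, the amount actually freed can overshoot the requested budget, which is consistent with GC working at the chunk level rather than the row level.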
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions about e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
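The `ChunkStoreDiff` guarantees described above (a compaction's source set contains the added chunk, and the destination `ChunkId` has never been seen before) can be expressed as a small invariant check. This is a simplified sketch with stand-in types, not the real `re_chunk_store` structs.

```rust
use std::collections::BTreeSet;

/// Stand-in for the real `ChunkId` type.
type ChunkId = u64;

#[derive(Debug)]
enum DiffKind {
    Addition,
    Deletion,
}

/// Simplified stand-in for `ChunkStoreDiff`: the compaction report, if any,
/// is modeled as (source chunk ids, destination chunk id).
struct ChunkStoreDiff {
    kind: DiffKind,
    chunk: ChunkId,
    compacted: Option<(BTreeSet<ChunkId>, ChunkId)>,
}

/// Checks the documented invariants: if an addition triggered a compaction,
/// the added chunk must be among the sources, and the destination id must be
/// brand new (never observed before).
fn check_invariants(diff: &ChunkStoreDiff, seen: &BTreeSet<ChunkId>) -> bool {
    match &diff.compacted {
        None => true,
        Some((sources, destination)) => {
            sources.contains(&diff.chunk) && !seen.contains(destination)
        }
    }
}

fn main() {
    let seen: BTreeSet<ChunkId> = [1, 2].into_iter().collect();
    let diff = ChunkStoreDiff {
        kind: DiffKind::Addition,
        chunk: 3,
        compacted: Some(([1, 2, 3].into_iter().collect(), 4)),
    };
    // Chunk 3 was compacted on arrival into the never-seen chunk 4:
    println!("{}", check_invariants(&diff, &seen)); // prints "true"
}
```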
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
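The `protected_time_ranges` field above says data within those ranges must not be removed. A minimal sketch of that check, assuming simple inclusive ranges: a chunk is protected if its time range overlaps any protected range. The real `GarbageCollectionOptions::is_chunk_protected` also honors `protect_latest`, which this model omits.

```rust
/// Stand-in for a resolved, inclusive time range.
#[derive(Clone, Copy)]
struct TimeRange {
    min: i64,
    max: i64,
}

/// Two inclusive ranges overlap iff each one starts before the other ends.
fn overlaps(a: TimeRange, b: TimeRange) -> bool {
    a.min <= b.max && b.min <= a.max
}

/// Hypothetical model of the protected-range part of `is_chunk_protected`:
/// the chunk survives GC if it touches any protected range.
fn is_chunk_protected(protected: &[TimeRange], chunk_range: TimeRange) -> bool {
    protected.iter().any(|&r| overlaps(r, chunk_range))
}

fn main() {
    let protected = [TimeRange { min: 0, max: 10 }];
    // Partial overlap is enough to protect the chunk:
    println!("{}", is_chunk_protected(&protected, TimeRange { min: 5, max: 15 }));
}
```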
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that print helpful messages and send anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
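The schema ordering guarantee above (time columns first in lexical order, then component columns in lexical order) can be sketched with a toy column type. This is an illustrative model, not the real `ColumnDescriptor`; deriving `Ord` on the enum sorts by variant first (`Time` before `Component`), then lexically by name, which reproduces exactly the documented ordering.

```rust
/// Toy stand-in for `ColumnDescriptor`: variant order encodes "time columns
/// before component columns"; the payload is the column name.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum ColumnDescriptor {
    Time(&'static str),
    Component(&'static str),
}

/// Returns the columns in the documented order: time columns (lexical),
/// then component columns (lexical).
fn sorted_columns(mut cols: Vec<ColumnDescriptor>) -> Vec<ColumnDescriptor> {
    cols.sort();
    cols
}

fn main() {
    use ColumnDescriptor::{Component, Time};
    let cols = sorted_columns(vec![
        Component("Color"),
        Time("log_time"),
        Time("frame_nr"),
    ]);
    println!("{cols:?}"); // time columns first, each group lexically sorted
}
```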
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
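The "one binary search per chunk, bounded by the chunk's time range" approach described above can be sketched with `slice::partition_point`. The `Chunk` struct here is a simplified stand-in (a time-sorted list of index values), not the real rerun chunk type.

```rust
/// A chunk is assumed here to be a time-sorted list of index values.
struct Chunk {
    times: Vec<i64>,
}

/// For each chunk, find the cursor of the first row at or past `index_value`.
/// Chunks whose time range ends before the target are skipped entirely;
/// otherwise a single binary search per chunk does the work.
fn seek_cursors(chunks: &[Chunk], index_value: i64) -> Vec<usize> {
    chunks
        .iter()
        .map(|chunk| match chunk.times.last() {
            // The whole chunk lies before the target: cursor past the end.
            Some(&end) if end < index_value => chunk.times.len(),
            // Binary search for the first time >= index_value.
            _ => chunk.times.partition_point(|&t| t < index_value),
        })
        .collect()
}
```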
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
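The time-budgeted garbage collection that `DEFAULT_GC_TIME_BUDGET` feeds into can be sketched as a loop that stops as soon as the wall-clock budget is exhausted. This is a minimal model, assuming a simple oldest-first queue rather than the real store's data layout.

```rust
use std::time::{Duration, Instant};

/// Drop items from the front of `queue` until either everything is gone
/// or the wall-clock `budget` is exhausted (a hypothetical equivalent of
/// `GarbageCollectionOptions::time_budget`).
fn gc_with_budget(queue: &mut Vec<u64>, budget: Duration) -> usize {
    let start = Instant::now();
    let mut dropped = 0;
    while !queue.is_empty() && start.elapsed() < budget {
        queue.remove(0); // drop the oldest entry first
        dropped += 1;
    }
    dropped
}
```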
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles run loose all across the codebase.
§stats: IngestionStatistics
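The guard discipline described above (the engine is never exposed directly, only through a read-only guard that blocks insertion while alive) can be sketched with `std::sync::RwLock`. The `Engine` and `Db` types are hypothetical stand-ins, not the actual `StorageEngine`/`EntityDb` implementation.

```rust
use std::sync::{RwLock, RwLockReadGuard};

/// Hypothetical stand-in for the storage engine's internal state.
struct Engine {
    num_chunks: usize,
}

struct Db {
    engine: RwLock<Engine>,
}

impl Db {
    /// The only public way in: a read-only guard, mirroring the idea
    /// behind `EntityDb::storage_engine`.
    fn storage_engine(&self) -> RwLockReadGuard<'_, Engine> {
        self.engine.read().unwrap()
    }

    /// Mutation is internal only, and cannot proceed while any read
    /// guard is still alive.
    fn try_insert(&self) -> bool {
        match self.engine.try_write() {
            Ok(mut engine) => {
                engine.num_chunks += 1;
                true
            }
            Err(_) => false, // a read guard is outstanding
        }
    }
}
```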
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResults
Queries for the given component_names using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
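Latest-at semantics — "the newest value logged at or before the query time" — can be sketched with a `BTreeMap` keyed by time. The per-component index shape here is an assumption for illustration, not the store's actual layout.

```rust
use std::collections::BTreeMap;

/// Latest-at: the newest value logged at or before `query_time`, if any.
/// `index` is a hypothetical per-component index, keyed by time.
fn latest_at(
    index: &BTreeMap<i64, &'static str>,
    query_time: i64,
) -> Option<(i64, &'static str)> {
    index
        .range(..=query_time) // everything at or before the query time
        .next_back()          // the newest of those
        .map(|(&t, &v)| (t, v))
}
```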
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
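Purging a fraction of RAM by forgetting the oldest data first can be sketched as follows. Rows are modeled as (timestamp, size-in-bytes) pairs, oldest first; this is a simplified assumption about the data layout, not the real store's behavior.

```rust
/// Forget the oldest rows until at least `fraction` of the total byte
/// size has been freed. Rows are (timestamp, size_in_bytes) pairs,
/// oldest first — a simplified model of per-timeline data.
fn purge_fraction(rows: &mut Vec<(i64, u64)>, fraction: f32) -> u64 {
    let total: u64 = rows.iter().map(|&(_, size)| size).sum();
    let target = (total as f32 * fraction).ceil() as u64;
    let mut freed = 0;
    while freed < target {
        match rows.first() {
            Some(&(_, size)) => {
                freed += size;
                rows.remove(0); // drop the oldest row
            }
            None => break,
        }
    }
    freed
}
```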
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
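Restricting an export to one timeline's time range, as `to_messages` does when given a `time_selection`, amounts to a filter over the rows. The (time, payload) row shape below is a stand-in chosen for illustration.

```rust
/// Export only the rows whose time on the selected timeline falls inside
/// the selection, mimicking the `time_selection` parameter. Rows are
/// (time, payload) pairs; `None` exports everything.
fn export(
    rows: &[(i64, &'static str)],
    time_selection: Option<(i64, i64)>,
) -> Vec<&'static str> {
    rows.iter()
        .filter(|&&(t, _)| match time_selection {
            Some((min, max)) => min <= t && t <= max, // inclusive range
            None => true,
        })
        .map(|&(_, payload)| payload)
        .collect()
}
```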
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
impl<T> Any for Twhere
+}
Variants§
Trait Implementations§
source§impl Debug for ChunkStoreError
source§impl Display for ChunkStoreError
source§impl Error for ChunkStoreError
source§fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more1.0.0 · source§fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()Auto Trait Implementations§
§impl Freeze for ChunkStoreError
§impl !RefUnwindSafe for ChunkStoreError
§impl Send for ChunkStoreError
§impl Sync for ChunkStoreError
§impl Unpin for ChunkStoreError
§impl !UnwindSafe for ChunkStoreError
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/index.html b/docs/rust/head/re_chunk_store/events/index.html
index 39b24edb7cf5..ec400cba1c76 100644
--- a/docs/rust/head/re_chunk_store/events/index.html
+++ b/docs/rust/head/re_chunk_store/events/index.html
@@ -1 +1 @@
-re_chunk_store::events - Rust Module re_chunk_store::events
source · Structs§
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
.
Enums§
- Is it an addition or a deletion?
\ No newline at end of file
+re_chunk_store::events - Rust Module re_chunk_store::events
source · Structs§
- Reports which [Chunk]s were merged into a new [Chunk] during a compaction. - Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
.
Enums§
- Is it an addition or a deletion?
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html
index f926911cbf88..433db0f67560 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store::events - Rust Struct re_chunk_store::events::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}
Expand description
Reports which [Chunk]s were merged into a new [Chunk] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
index 59cd9a05c749..dad7a7babd01 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store::events - Rust Struct re_chunk_store::events::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk]s were merged into a new [Chunk] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk] must have never been seen before,
+i.e. its [ChunkId] must have never been seen before.
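The compaction bookkeeping described above — sources are removed, the never-before-seen destination appears — can be sketched as a subscriber applying an addition event to its set of live chunk ids. Plain `u64` ids stand in for `ChunkId` here.

```rust
use std::collections::BTreeSet;

/// Apply an "addition with compaction" event to a subscriber's view of
/// live chunk ids: the compacted sources disappear and the new chunk
/// appears. The `insert` result encodes the guarantee that the
/// destination id was never seen before.
fn apply_compaction(live: &mut BTreeSet<u64>, srcs: &[u64], new_chunk: u64) {
    for src in srcs {
        live.remove(src);
    }
    assert!(live.insert(new_chunk), "destination ChunkId must be new");
}
```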
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
index e2912206bae8..7b696a90c0c2 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/external/index.html b/docs/rust/head/re_chunk_store/external/index.html
index 506ec67a4aa0..6e14f17547b7 100644
--- a/docs/rust/head/re_chunk_store/external/index.html
+++ b/docs/rust/head/re_chunk_store/external/index.html
@@ -1 +1 @@
-
re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
+re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
index a40c03ed50a7..38d19abf3767 100644
--- a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
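The `protected_time_ranges` check behind `is_chunk_protected` boils down to interval overlap: a chunk is protected if, on any timeline, its time range intersects a protected range. The map shapes below (one inclusive range per timeline, timelines as strings) are simplifying assumptions.

```rust
use std::collections::HashMap;

/// A chunk is protected from GC if, on any timeline, its time range
/// overlaps a protected range. Ranges are inclusive (start, end) pairs;
/// one protected range per timeline is assumed for simplicity.
fn is_chunk_protected(
    protected: &HashMap<&'static str, (i64, i64)>,
    chunk_ranges: &HashMap<&'static str, (i64, i64)>,
) -> bool {
    chunk_ranges.iter().any(|(timeline, &(c_start, c_end))| {
        protected
            .get(timeline)
            // Two inclusive intervals overlap iff each starts
            // before the other ends.
            .is_some_and(|&(p_start, p_end)| c_start <= p_end && p_start <= c_end)
    })
}
```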
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
index 16f1f4b9cb59..81be7f1eac1a 100644
--- a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
+++ b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
@@ -1 +1 @@
-RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust
\ No newline at end of file
+RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust pub type RemovableChunkIdPerTimePerComponentPerTimelinePerEntity = IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>;
Aliased Type§
struct RemovableChunkIdPerTimePerComponentPerTimelinePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/index.html b/docs/rust/head/re_chunk_store/index.html
index 86381627f119..82daf495fa2e 100644
--- a/docs/rust/head/re_chunk_store/index.html
+++ b/docs/rust/head/re_chunk_store/index.html
@@ -13,6 +13,6 @@ §Feature
deadlock_detection
— Enables parking_lot
’s deadlock detection background thread.
-
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
+
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- Reports which [Chunk]s were merged into a new [Chunk] during a compaction. - A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
expressed as a set of
EntityPath
s and their associated [ComponentName
]s.
Enums§
- Is it an addition or a deletion?
- Describes a column selection to return as part of a query.
- Specifies how null values should be filled in the returned dataframe.
Traits§
- A
ChunkStoreSubscriber
that is instantiated for each unique StoreId
.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
index b0541f045337..1a32a5289c2a 100644
--- a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
index c8529958e388..0bc8146a0437 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
@@ -1,30 +1,30 @@
ChunkIdSetPerTime in re_chunk_store::store - Rust Struct re_chunk_store::store::ChunkIdSetPerTime
source · pub struct ChunkIdSetPerTime {
pub(crate) max_interval_length: u64,
- pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
- pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}
Fields§
§max_interval_length: u64
Keeps track of the longest interval currently stored in the two maps below.
This is used to bound the backwards linear walk when looking for overlapping chunks in
latest-at queries.
See ChunkStore::latest_at
implementation comments for more details.
-§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific start time.
+§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific start time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the first timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the first timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the first timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
-§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific end time.
+§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific end time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the last timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the last timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the last timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
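The bounded backwards walk that max_interval_length enables can be sketched with std-only types — a hypothetical model with u64 standing in for ChunkId and i64 for TimeInt, not the actual re_chunk_store implementation:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Candidates for a latest-at query at `query_time`, given chunks indexed by
/// their start time. `u64` stands in for `ChunkId`, `i64` for `TimeInt`.
fn latest_at_candidates(
    per_start_time: &BTreeMap<i64, BTreeSet<u64>>,
    max_interval_length: u64,
    query_time: i64,
) -> Vec<u64> {
    // The chunk(s) with the latest start time at or before the query…
    let Some((&latest_start, _)) = per_start_time.range(..=query_time).next_back() else {
        return Vec::new();
    };
    // …plus every chunk starting close enough before that point that its
    // interval could still overlap the query: the known longest interval
    // bounds the backwards walk, keeping the scan cheap.
    let lower_bound = latest_start.saturating_sub(max_interval_length as i64);
    per_start_time
        .range(lower_bound..=latest_start)
        .flat_map(|(_, ids)| ids.iter().copied())
        .collect()
}
```

Without the bound, a latest-at query would have to scan every earlier start time, since an arbitrarily old chunk could in principle still span the query time.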
Trait Implementations§
source§impl Clone for ChunkIdSetPerTime
source§fn clone(&self) -> ChunkIdSetPerTime
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkIdSetPerTime
source§impl Default for ChunkIdSetPerTime
source§fn default() -> ChunkIdSetPerTime
Returns the “default value” for a type. Read moreAuto Trait Implementations§
§impl Freeze for ChunkIdSetPerTime
§impl RefUnwindSafe for ChunkIdSetPerTime
§impl Send for ChunkIdSetPerTime
§impl Sync for ChunkIdSetPerTime
§impl Unpin for ChunkIdSetPerTime
§impl UnwindSafe for ChunkIdSetPerTime
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
index b2fd5a8a0512..812f0fa07a6b 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
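The RowId-driven ordering described above can be modeled with a std-only sketch — hypothetical u64 stand-ins for RowId, ChunkId, and byte sizes, not the real gc_drop_at_least_num_bytes:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Drop unprotected chunks in ascending min-`RowId` order until at least
/// `num_bytes_to_drop` bytes are gone. `u64` stands in for `RowId`/`ChunkId`.
fn gc_drop_at_least(
    chunk_ids_per_min_row_id: &BTreeMap<u64, Vec<u64>>,
    chunk_size_bytes: &BTreeMap<u64, u64>,
    protected_chunk_ids: &BTreeSet<u64>,
    mut num_bytes_to_drop: i64,
) -> Vec<u64> {
    let mut dropped = Vec::new();
    // BTreeMap iteration is ascending by key, i.e. global wall-clock data order.
    for chunk_ids in chunk_ids_per_min_row_id.values() {
        if num_bytes_to_drop <= 0 {
            break;
        }
        for chunk_id in chunk_ids {
            if protected_chunk_ids.contains(chunk_id) {
                continue; // protected chunks survive the collection
            }
            num_bytes_to_drop -= chunk_size_bytes[chunk_id] as i64;
            dropped.push(*chunk_id);
        }
    }
    dropped
}
```

Because the order is fixed by the smallest RowId per chunk, the set of dropped chunks is deterministic for a given budget and protection set.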
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post-GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
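The caller-side filtering can be sketched as follows — a hypothetical model where each candidate chunk is reduced to its (time, row id) pairs, with ties on time broken by the higher row id, in line with the store's data ordering (not the actual Chunk::latest_at implementation):

```rust
/// Resolve a latest-at query across overlapping candidate chunks: the winner
/// is the greatest `(time, row_id)` pair at or before the query time.
fn resolve_latest_at(chunks: &[Vec<(i64, u64)>], query_time: i64) -> Option<(i64, u64)> {
    chunks
        .iter()
        .flat_map(|rows| rows.iter().copied())
        .filter(|&(time, _)| time <= query_time)
        .max() // lexicographic max: latest time, then highest row id on ties
}
```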
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
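The three insertion rules above can be captured in a small std-only model — hypothetical types where a chunk is reduced to an id, a sorted flag, and a row count, not the real insert_chunk:

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq)]
enum InsertOutcome {
    Stored,
    NoOp,
    ErrUnsorted,
}

/// A chunk is reduced to `(is_sorted, num_rows)`, keyed by a `u64` chunk id.
fn insert_chunk(
    store: &mut BTreeMap<u64, (bool, usize)>,
    chunk_id: u64,
    is_sorted: bool,
    num_rows: usize,
) -> InsertOutcome {
    if !is_sorted {
        return InsertOutcome::ErrUnsorted; // unsorted chunks are rejected
    }
    if num_rows == 0 || store.contains_key(&chunk_id) {
        return InsertOutcome::NoOp; // empty or duplicated chunk id: no-op
    }
    store.insert(chunk_id, (is_sorted, num_rows));
    InsertOutcome::Stored
}
```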
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
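Taken together with the row thresholds documented below, the compaction cut-off amounts to a simple predicate. A sketch with a hypothetical mirror of the config struct — the 8 MiB value matches the default byte threshold mentioned above, the row counts are purely illustrative:

```rust
/// Mirror of the three compaction thresholds described in this section.
struct Config {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

/// A chunk remains a compaction candidate only while it is below both the
/// byte threshold and the applicable row threshold.
fn can_still_compact(cfg: &Config, num_bytes: u64, num_rows: u64, time_sorted: bool) -> bool {
    let max_rows = if time_sorted {
        cfg.chunk_max_rows
    } else {
        cfg.chunk_max_rows_if_unsorted
    };
    num_bytes < cfg.chunk_max_bytes && num_rows < max_rows
}
```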
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post-GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
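The reason surgical removal beats `retain()` can be sketched with a toy index: when you already know which (timeline, time) buckets a chunk lives in, you touch only those buckets instead of scanning every entry. Types and names here are illustrative, not the actual re_chunk_store indices:

```rust
use std::collections::{BTreeMap, HashMap};

type ChunkId = u64;
type TimeInt = i64;

/// Remove `chunk_id` only from the buckets listed in `locations`, leaving the
/// rest of the index untouched.
fn remove_chunk_surgically(
    index: &mut HashMap<String, BTreeMap<TimeInt, Vec<ChunkId>>>,
    locations: &[(String, TimeInt)],
    chunk_id: ChunkId,
) {
    for (timeline, time) in locations {
        if let Some(per_time) = index.get_mut(timeline) {
            if let Some(ids) = per_time.get_mut(time) {
                ids.retain(|&id| id != chunk_id);
                if ids.is_empty() {
                    per_time.remove(time); // keep the index free of empty buckets
                }
            }
        }
    }
}

fn main() {
    let mut index: HashMap<String, BTreeMap<TimeInt, Vec<ChunkId>>> = HashMap::new();
    index.entry("log_time".to_owned()).or_default().insert(42, vec![7, 8]);
    remove_chunk_surgically(&mut index, &[("log_time".to_owned(), 42)], 7);
    assert_eq!(index["log_time"][&42], vec![8]);
}
```

The batched `remove_chunks` variant applies the same idea across many chunks at once, optionally bounded by a time budget.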
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
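Why a latest-at query can return more than one chunk can be shown with a toy model: chunks may overlap in time, so every chunk that could still hold the winning row must be returned, and the caller filters rows afterwards. This sketch is illustrative only; the real store also tracks ChunkIds, static data, and per-component indices:

```rust
// Toy stand-in for a chunk's temporal extent on one timeline.
#[derive(Debug)]
struct TimeRange { min: i64, max: i64 }

/// Indices of all chunks that may contain the latest row at time `t`.
fn latest_at_relevant_ranges(chunks: &[TimeRange], t: i64) -> Vec<usize> {
    // Only chunks that start at or before the query time can be relevant.
    let latest_start = chunks.iter()
        .filter(|c| c.min <= t)
        .map(|c| c.min)
        .max();
    let Some(latest_start) = latest_start else { return Vec::new() };
    // Any earlier chunk that overlaps the latest-starting one may still
    // contain the winning row, so it is relevant too.
    chunks.iter().enumerate()
        .filter(|(_, c)| c.min <= t && c.max >= latest_start)
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    let chunks = [
        TimeRange { min: 0, max: 20 },  // overlaps the chunk below
        TimeRange { min: 10, max: 30 },
        TimeRange { min: 40, max: 50 }, // starts after the query time
    ];
    // Two overlapping chunks are relevant at t=15; the caller must still
    // inspect their rows (cf. Chunk::latest_at) to find the actual winner.
    assert_eq!(latest_at_relevant_ranges(&chunks, 15), vec![0, 1]);
}
```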
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
+sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
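The three insertion rules above (unsorted is an error, duplicated id and empty chunk are no-ops) can be sketched with toy types. This is an illustrative model, not the real `ChunkStore::insert_chunk`, which also fires events and may trigger compaction:

```rust
use std::collections::HashMap;

type ChunkId = u64;

#[derive(Debug, Clone)]
struct Chunk { id: ChunkId, row_ids: Vec<u64> }

impl Chunk {
    fn is_sorted(&self) -> bool { self.row_ids.windows(2).all(|w| w[0] <= w[1]) }
    fn is_empty(&self) -> bool { self.row_ids.is_empty() }
}

/// Returns Ok(true) iff the store was actually modified.
fn insert_chunk(store: &mut HashMap<ChunkId, Chunk>, chunk: &Chunk) -> Result<bool, String> {
    if !chunk.is_sorted() {
        return Err("unsorted chunk".to_owned()); // cf. ChunkStoreError::UnsortedChunk
    }
    if chunk.is_empty() || store.contains_key(&chunk.id) {
        return Ok(false); // no-op: nothing inserted, no events fired
    }
    store.insert(chunk.id, chunk.clone());
    Ok(true)
}

fn main() {
    let mut store = HashMap::new();
    let chunk = Chunk { id: 1, row_ids: vec![1, 2, 3] };
    assert_eq!(insert_chunk(&mut store, &chunk), Ok(true));
    assert_eq!(insert_chunk(&mut store, &chunk), Ok(false)); // duplicate id: no-op
    let unsorted = Chunk { id: 2, row_ids: vec![3, 1] };
    assert!(insert_chunk(&mut store, &unsorted).is_err());
}
```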
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
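The `Add`/`Sub`/`Sum` trait implementations listed below exist because these stats are kept additive: store-wide totals can then be updated incrementally on insertion and removal rather than recomputed every frame (which the docs above note is too costly). A minimal sketch, with field names following the docs but the arithmetic being the illustrative part:

```rust
// Illustrative stand-in for ChunkStoreChunkStats: a plain additive record.
#[derive(Debug, Default, Clone, Copy, PartialEq)]
struct ChunkStats { num_chunks: u64, num_rows: u64, num_events: u64 }

impl std::ops::Add for ChunkStats {
    type Output = Self;
    fn add(self, rhs: Self) -> Self {
        Self {
            num_chunks: self.num_chunks + rhs.num_chunks,
            num_rows: self.num_rows + rhs.num_rows,
            num_events: self.num_events + rhs.num_events,
        }
    }
}

impl std::iter::Sum for ChunkStats {
    fn sum<I: Iterator<Item = Self>>(iter: I) -> Self {
        iter.fold(Self::default(), |acc, s| acc + s)
    }
}

fn main() {
    let per_chunk = [
        ChunkStats { num_chunks: 1, num_rows: 10, num_events: 30 },
        ChunkStats { num_chunks: 1, num_rows: 5, num_events: 5 },
    ];
    let total: ChunkStats = per_chunk.into_iter().sum();
    assert_eq!(total, ChunkStats { num_chunks: 2, num_rows: 15, num_events: 35 });
}
```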
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. its ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
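The protected-time-ranges rule amounts to an interval-overlap check per timeline: a chunk cannot be collected if any of its time ranges intersects a protected range on the matching timeline. A hedged sketch with illustrative types (the real `is_chunk_protected` takes a `&Chunk` and dedicated timeline types):

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, Copy)]
struct TimeRange { min: i64, max: i64 }

impl TimeRange {
    fn intersects(&self, other: &TimeRange) -> bool {
        self.min <= other.max && other.min <= self.max
    }
}

/// True if any of the chunk's per-timeline ranges overlaps a protected range.
fn is_chunk_protected(
    protected: &HashMap<String, TimeRange>,
    chunk_ranges: &[(String, TimeRange)],
) -> bool {
    chunk_ranges.iter().any(|(timeline, range)| {
        protected.get(timeline).map_or(false, |p| p.intersects(range))
    })
}

fn main() {
    let mut protected = HashMap::new();
    protected.insert("frame_nr".to_owned(), TimeRange { min: 100, max: 200 });

    let chunk = [("frame_nr".to_owned(), TimeRange { min: 150, max: 300 })];
    assert!(is_chunk_protected(&protected, &chunk)); // overlaps [100, 200]

    let old_chunk = [("frame_nr".to_owned(), TimeRange { min: 0, max: 50 })];
    assert!(!is_chunk_protected(&protected, &old_chunk)); // safe to collect
}
```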
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that print helpful messages and send anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
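The two-level ordering described above (time columns first, then component columns, each group lexical) can be sketched with a toy column type; `Column` is an illustrative stand-in for the real `ColumnDescriptor`:

```rust
// Variant order makes Time sort before Component; the derived Ord then
// compares the inner names lexically within each group.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone)]
enum Column { Time(String), Component(String) }

fn sort_schema(mut columns: Vec<Column>) -> Vec<Column> {
    columns.sort();
    columns
}

fn main() {
    let sorted = sort_schema(vec![
        Column::Component("Radius".to_owned()),
        Column::Time("log_time".to_owned()),
        Column::Component("Color".to_owned()),
        Column::Time("frame_nr".to_owned()),
    ]);
    assert_eq!(sorted, vec![
        Column::Time("frame_nr".to_owned()),
        Column::Time("log_time".to_owned()),
        Column::Component("Color".to_owned()),
        Column::Component("Radius".to_owned()),
    ]);
}
```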
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
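The `QueryHandle` iteration model documented above (a shared cursor advanced by `next_row`, repositionable with `seek_to_row` / `seek_to_index_value` over ascendingly sorted, deduped index values) can be sketched with a self-contained stand-in. All names here are illustrative stand-ins, not the actual `re_dataframe` types:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Illustrative stand-in for a query handle: an atomic cursor over
/// ascendingly sorted, deduped index values (cf. `unique_index_values`
/// and `cur_row` in `QueryHandleState`). Not the real rerun API.
struct Cursor {
    unique_index_values: Vec<i64>, // sorted ascending, deduped
    cur_row: AtomicU64,            // next row to yield
}

impl Cursor {
    /// Yield the index value of the next row, advancing the cursor.
    fn next_row(&self) -> Option<i64> {
        let row = self.cur_row.fetch_add(1, Ordering::Relaxed) as usize;
        self.unique_index_values.get(row).copied()
    }

    /// Seek so the next row corresponds to `index_value`; if it is absent,
    /// seek to the first index value past that point (a binary search,
    /// mirroring the documented `seek_to_index_value` semantics).
    fn seek_to_index_value(&self, index_value: i64) {
        let row = self
            .unique_index_values
            .partition_point(|&v| v < index_value);
        self.cur_row.store(row as u64, Ordering::Relaxed);
    }
}
```

Usage mirrors the doc's own pagination example: `while let Some(v) = cursor.next_row() { /* … */ }` drains the remaining rows from wherever the cursor currently points.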
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
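The "Internal private state. Lazily computed." pattern above — handles must stay cheap to create, so the expensive `QueryHandleState` lives behind a `OnceLock` and is built on first access via `init` — can be sketched like this. The types and the string-splitting "query resolution" are hypothetical stand-ins, not the real implementation:

```rust
use std::sync::OnceLock;

/// Illustrative stand-in: constructing the handle does no work; the
/// expensive state is computed exactly once, on first access
/// (cf. `QueryHandle::init` and `OnceLock<QueryHandleState>`).
struct Handle {
    query: String,                // stand-in for `QueryExpression`
    state: OnceLock<Vec<String>>, // stand-in for `QueryHandleState`
}

impl Handle {
    fn new(query: impl Into<String>) -> Self {
        // Cheap: no query resolution happens here.
        Self { query: query.into(), state: OnceLock::new() }
    }

    /// Lazily initialize the private state, exactly once.
    fn init(&self) -> &Vec<String> {
        self.state.get_or_init(|| {
            // Expensive part: resolve the query into concrete columns.
            self.query.split_whitespace().map(str::to_owned).collect()
        })
    }
}
```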
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles loose arbitrarily all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
Variants§
Trait Implementations§
source§impl Debug for ChunkStoreError
impl Debug for ChunkStoreError
source§impl Display for ChunkStoreError
impl Display for ChunkStoreError
source§impl Error for ChunkStoreError
impl Error for ChunkStoreError
source§fn source(&self) -> Option<&(dyn Error + 'static)>
fn source(&self) -> Option<&(dyn Error + 'static)>
1.0.0 · source§fn description(&self) -> &str
fn description(&self) -> &str
Auto Trait Implementations§
impl Freeze for ChunkStoreError
impl !RefUnwindSafe for ChunkStoreError
impl Send for ChunkStoreError
impl Sync for ChunkStoreError
impl Unpin for ChunkStoreError
impl !UnwindSafe for ChunkStoreError
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
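The latest-at semantics of `EntityDb::latest_at` described above — for a given query time, return the most recent value at or before that time — reduce to a binary search over time-sorted rows. A minimal sketch with owned placeholder values, not the `re_entity_db` implementation:

```rust
/// Illustrative latest-at lookup: given rows sorted ascendingly by time,
/// return the latest value at or before `query_time`, if any.
fn latest_at(rows: &[(i64, u32)], query_time: i64) -> Option<u32> {
    // Index of the first row strictly past `query_time`…
    let end = rows.partition_point(|&(t, _)| t <= query_time);
    // …so the row just before it (if any) is the latest-at result.
    end.checked_sub(1).map(|i| rows[i].1)
}
```

A query time before the first row yields `None`; a query time past the last row yields the final value, matching the "latest locally known value" behavior the docs describe.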
diff --git a/docs/rust/head/re_chunk_store/events/index.html b/docs/rust/head/re_chunk_store/events/index.html
index 39b24edb7cf5..ec400cba1c76 100644
--- a/docs/rust/head/re_chunk_store/events/index.html
+++ b/docs/rust/head/re_chunk_store/events/index.html
@@ -1 +1 @@
-re_chunk_store::events - Rust Module re_chunk_store::events
source · Structs§
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
.
Enums§
- Is it an addition or a deletion?
\ No newline at end of file
+re_chunk_store::events - Rust Module re_chunk_store::events
source · Structs§
- Reports which [
Chunk
]s were merged into a new [Chunk
] during a compaction. - Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
.
Enums§
- Is it an addition or a deletion?
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html
index f926911cbf88..433db0f67560 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store::events - Rust Struct re_chunk_store::events::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+} Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
index 59cd9a05c749..dad7a7babd01 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store::events - Rust Struct re_chunk_store::events::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
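The two compaction guarantees above (the diffed chunk appears among the compaction sources, and the destination `ChunkId` is brand new) can be sketched with plain stand-in types. This is a minimal model, not the crate's real definitions: `ChunkId` is simplified to `u64` and the structs carry only the fields needed to state the invariant.

```rust
use std::collections::{BTreeMap, BTreeSet};

type ChunkId = u64;

// Simplified stand-ins for ChunkCompactionReport / ChunkStoreDiff.
struct CompactionReport {
    srcs: BTreeMap<ChunkId, ()>, // the chunks that were merged
    new_chunk: ChunkId,          // the freshly created destination chunk
}

struct StoreDiff {
    chunk: ChunkId,                      // the chunk pre-compaction
    compacted: Option<CompactionReport>, // set only if insertion triggered a compaction
}

/// Checks the documented invariants for an addition diff.
fn invariants_hold(diff: &StoreDiff, seen_ids: &BTreeSet<ChunkId>) -> bool {
    match &diff.compacted {
        None => true,
        Some(report) => {
            // The added chunk must be one of the compaction sources,
            // and the destination ChunkId must never have been seen before.
            report.srcs.contains_key(&diff.chunk) && !seen_ids.contains(&report.new_chunk)
        }
    }
}

fn main() {
    let seen: BTreeSet<ChunkId> = [1, 2, 3].into_iter().collect();
    let diff = StoreDiff {
        chunk: 3,
        compacted: Some(CompactionReport {
            srcs: [(2, ()), (3, ())].into_iter().collect(),
            new_chunk: 4,
        }),
    };
    assert!(invariants_hold(&diff, &seen));
}
```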
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
index e2912206bae8..7b696a90c0c2 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
} Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/external/index.html b/docs/rust/head/re_chunk_store/external/index.html
index 506ec67a4aa0..6e14f17547b7 100644
--- a/docs/rust/head/re_chunk_store/external/index.html
+++ b/docs/rust/head/re_chunk_store/external/index.html
@@ -1 +1 @@
-
re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
+re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
index a40c03ed50a7..38d19abf3767 100644
--- a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
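The `protected_time_ranges` field implies a simple overlap test: a chunk cannot be removed if, on any timeline, its time range intersects a protected range. A minimal sketch of that check follows, assuming simplified stand-ins (`Timeline` as a string, an inclusive `TimeRange` modeling `ResolvedTimeRange`); the real `is_chunk_protected` inspects the chunk's actual per-timeline ranges.

```rust
use std::collections::HashMap;

type Timeline = &'static str;

// Simplified inclusive time range, standing in for ResolvedTimeRange.
#[derive(Clone, Copy)]
struct TimeRange {
    min: i64,
    max: i64,
}

impl TimeRange {
    fn intersects(&self, other: &TimeRange) -> bool {
        self.min <= other.max && other.min <= self.max
    }
}

/// A chunk is protected if, on any timeline, its time range overlaps a protected range.
fn is_chunk_protected(
    protected: &HashMap<Timeline, TimeRange>,
    chunk_ranges: &HashMap<Timeline, TimeRange>,
) -> bool {
    chunk_ranges.iter().any(|(timeline, range)| {
        protected.get(timeline).is_some_and(|p| p.intersects(range))
    })
}

fn main() {
    let protected = HashMap::from([("log_time", TimeRange { min: 100, max: 200 })]);
    let inside = HashMap::from([("log_time", TimeRange { min: 150, max: 250 })]);
    let outside = HashMap::from([("log_time", TimeRange { min: 300, max: 400 })]);
    assert!(is_chunk_protected(&protected, &inside));
    assert!(!is_chunk_protected(&protected, &outside));
}
```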
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
index 16f1f4b9cb59..81be7f1eac1a 100644
--- a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
+++ b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
@@ -1 +1 @@
-RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust
\ No newline at end of file
+RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust pub type RemovableChunkIdPerTimePerComponentPerTimelinePerEntity = IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>;
Aliased Type§
struct RemovableChunkIdPerTimePerComponentPerTimelinePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/index.html b/docs/rust/head/re_chunk_store/index.html
index 86381627f119..82daf495fa2e 100644
--- a/docs/rust/head/re_chunk_store/index.html
+++ b/docs/rust/head/re_chunk_store/index.html
@@ -13,6 +13,6 @@ §Feature
deadlock_detection
— Enables parking_lot
’s deadlock detection background thread.
-
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
+
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- Reports which [
Chunk
]s were merged into a new [Chunk
] during a compaction. - A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
expressed as a set of
EntityPath
s and their associated [ComponentName
]s.
Enums§
- Is it an addition or a deletion?
- Describes a column selection to return as part of a query.
- Specifies how null values should be filled in the returned dataframe.
Traits§
- A
ChunkStoreSubscriber
that is instantiated for each unique StoreId
.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
index b0541f045337..1a32a5289c2a 100644
--- a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
index c8529958e388..0bc8146a0437 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
@@ -1,30 +1,30 @@
ChunkIdSetPerTime in re_chunk_store::store - Rust Struct re_chunk_store::store::ChunkIdSetPerTime
source · pub struct ChunkIdSetPerTime {
pub(crate) max_interval_length: u64,
- pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
- pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}
Fields§
§max_interval_length: u64
Keeps track of the longest interval being currently stored in the two maps below.
This is used to bound the backwards linear walk when looking for overlapping chunks in
latest-at queries.
See ChunkStore::latest_at
implementation comments for more details.
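The bounded backwards walk that `max_interval_length` enables can be sketched as follows. This is an illustrative model only (chunk IDs as `u64`, an explicit `end_times` map added for the overlap check), not the crate's actual query code: once a chunk's start time plus the longest stored interval can no longer reach the query time, no earlier chunk can overlap it either, so the walk stops.

```rust
use std::collections::{BTreeMap, BTreeSet};

type TimeInt = i64;
type ChunkId = u64;

/// Simplified model of ChunkIdSetPerTime: chunks indexed by start time,
/// plus the length of the longest interval currently stored.
struct ChunkIdSetPerTime {
    max_interval_length: i64,
    per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
    // End time of each chunk, so we can check actual overlap (hypothetical field).
    end_times: BTreeMap<ChunkId, TimeInt>,
}

impl ChunkIdSetPerTime {
    /// All chunks whose [start, end] range contains `query_time`.
    fn overlapping(&self, query_time: TimeInt) -> Vec<ChunkId> {
        let mut out = Vec::new();
        // Walk backwards from the query time over start times.
        for (&start, ids) in self.per_start_time.range(..=query_time).rev() {
            if start + self.max_interval_length < query_time {
                break; // bounded: nothing earlier can reach query_time
            }
            for &id in ids {
                if self.end_times[&id] >= query_time {
                    out.push(id);
                }
            }
        }
        out
    }
}

fn main() {
    let index = ChunkIdSetPerTime {
        max_interval_length: 10,
        per_start_time: BTreeMap::from([
            (0, BTreeSet::from([1])), // chunk 1: [0, 5]
            (8, BTreeSet::from([2])), // chunk 2: [8, 18]
        ]),
        end_times: BTreeMap::from([(1, 5), (2, 18)]),
    };
    assert_eq!(index.overlapping(12), vec![2]); // chunk 1 is pruned by the bound
}
```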
-§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific start time.
+§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific start time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the first timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the first timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the first timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
-§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific end time.
+§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific end time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the last timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the last timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the last timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
Trait Implementations§
source§impl Clone for ChunkIdSetPerTime
source§fn clone(&self) -> ChunkIdSetPerTime
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkIdSetPerTime
source§impl Default for ChunkIdSetPerTime
source§fn default() -> ChunkIdSetPerTime
Returns the “default value” for a type. Read moreAuto Trait Implementations§
§impl Freeze for ChunkIdSetPerTime
§impl RefUnwindSafe for ChunkIdSetPerTime
§impl Send for ChunkIdSetPerTime
§impl Sync for ChunkIdSetPerTime
§impl Unpin for ChunkIdSetPerTime
§impl UnwindSafe for ChunkIdSetPerTime
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
index b2fd5a8a0512..812f0fa07a6b 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats) Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
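The chunk-level, `RowId`-ordered semantics described above can be sketched with plain types. This is a hedged model, not the real GC: IDs are bare `u64`s, sizes are passed in explicitly, and the protection and time-budget logic is omitted. It only shows why iterating a `BTreeMap` keyed by smallest `RowId` drops the oldest data first, deterministically.

```rust
use std::collections::BTreeMap;

type RowId = u64;
type ChunkId = u64;

/// Simplified model of `chunk_ids_per_min_row_id`: every chunk indexed by its
/// smallest RowId, i.e. global data order as defined by client wall-clocks.
/// GC drops chunks in that order until enough bytes are reclaimed.
fn plan_gc(
    chunk_ids_per_min_row_id: &BTreeMap<RowId, Vec<ChunkId>>,
    chunk_sizes: &BTreeMap<ChunkId, u64>,
    num_bytes_to_drop: u64,
) -> Vec<ChunkId> {
    let mut dropped = Vec::new();
    let mut reclaimed = 0;
    // BTreeMap iteration is ascending: smallest RowIds (oldest data) go first.
    for ids in chunk_ids_per_min_row_id.values() {
        for &id in ids {
            if reclaimed >= num_bytes_to_drop {
                return dropped;
            }
            reclaimed += chunk_sizes[&id];
            dropped.push(id);
        }
    }
    dropped
}

fn main() {
    let index = BTreeMap::from([(10, vec![1]), (20, vec![2]), (30, vec![3])]);
    let sizes = BTreeMap::from([(1, 100), (2, 100), (3, 100)]);
    // Dropping 150 bytes removes the two oldest chunks, deterministically.
    assert_eq!(plan_gc(&index, &sizes, 150), vec![1, 2]);
}
```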
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff> sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff> Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>> Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>> Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>> Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>> Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
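Those three rules can be illustrated with a toy model (stand-in types only — not the actual `re_chunk_store` structs or signatures):

```rust
use std::collections::BTreeMap;

// Toy stand-ins for Chunk/ChunkStore; the real types live in re_chunk/re_chunk_store.
#[derive(Clone, Debug, PartialEq)]
struct Chunk {
    id: u64,
    row_ids: Vec<u64>,
}

impl Chunk {
    fn is_sorted(&self) -> bool {
        self.row_ids.windows(2).all(|w| w[0] <= w[1])
    }
}

#[derive(Default)]
struct Store {
    chunks: BTreeMap<u64, Chunk>,
}

impl Store {
    /// Mirrors the documented semantics: unsorted chunks error out,
    /// duplicated ids and empty chunks are silent no-ops.
    fn insert_chunk(&mut self, chunk: &Chunk) -> Result<bool, String> {
        if !chunk.is_sorted() {
            return Err("unsorted chunk".to_owned());
        }
        if chunk.row_ids.is_empty() || self.chunks.contains_key(&chunk.id) {
            return Ok(false); // no-op
        }
        self.chunks.insert(chunk.id, chunk.clone());
        Ok(true)
    }
}
```

The `bool` return is a simplification of the real `Vec<ChunkStoreEvent>` result: `true` plays the role of "events were produced".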
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
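The interplay of the three thresholds can be sketched as follows. The field names mirror `ChunkStoreConfig`, but the eligibility check itself is an illustrative guess, not the crate's actual compaction code:

```rust
/// Mirrors the three documented threshold fields of `ChunkStoreConfig`.
struct CompactionThresholds {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

impl CompactionThresholds {
    /// Illustrative check: a chunk can still be grown through compaction
    /// only while it sits below the byte threshold and below the row
    /// threshold that matches its time-sortedness.
    fn can_still_be_compacted(&self, num_bytes: u64, num_rows: u64, time_sorted: bool) -> bool {
        let max_rows = if time_sorted {
            self.chunk_max_rows
        } else {
            self.chunk_max_rows_if_unsorted
        };
        num_bytes < self.chunk_max_bytes && num_rows < max_rows
    }
}
```

The numbers used below are arbitrary sample values, not the crate's defaults (only the 8MiB byte default is documented above).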
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
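The map-of-vecs shape can be sketched with std collections (`u64` stand-ins for `RowId`/`ChunkId`):

```rust
use std::collections::BTreeMap;

type RowId = u64; // stand-in for the real RowId type
type ChunkId = u64; // stand-in for the real ChunkId type

/// Builds the index shape described above: a Vec per key lets two chunks
/// that share the same smallest RowId coexist, while the BTreeMap keeps
/// everything in global (RowId) order.
fn global_data_order(chunks: &[(ChunkId, RowId)]) -> Vec<ChunkId> {
    let mut index: BTreeMap<RowId, Vec<ChunkId>> = BTreeMap::new();
    for &(chunk_id, min_row_id) in chunks {
        index.entry(min_row_id).or_default().push(chunk_id);
    }
    index.into_values().flatten().collect()
}
```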
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
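Under those semantics, a chunk-level sweep in smallest-RowId order might look like this sketch (illustrative only — the crate's actual GC additionally honors protected chunks and time budgets):

```rust
use std::collections::BTreeMap;

/// Drops whole chunks in ascending smallest-RowId order (global data order)
/// until at least `num_bytes_to_drop` bytes have been reclaimed.
fn gc_drop_at_least(
    chunk_ids_per_min_row_id: &BTreeMap<u64, Vec<(u64, u64)>>, // min RowId -> [(chunk id, size in bytes)]
    num_bytes_to_drop: u64,
) -> Vec<u64> {
    let mut dropped = Vec::new();
    let mut freed = 0u64;
    for chunks in chunk_ids_per_min_row_id.values() {
        for &(chunk_id, num_bytes) in chunks {
            if freed >= num_bytes_to_drop {
                return dropped;
            }
            dropped.push(chunk_id); // a whole chunk goes at once: no orphaned columns
            freed += num_bytes;
        }
    }
    dropped
}
```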
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
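A minimal sketch of that relevance criterion (a guess at the filter, not the crate's actual logic — real chunks carry per-timeline time ranges, and the exact row resolution happens later in `Chunk::latest_at`):

```rust
/// A chunk is kept as a latest-at candidate only if it may contain data
/// at or before the query time; the caller still resolves the winning row.
fn latest_at_relevant(chunks: &[(u64, i64, i64)], query_at: i64) -> Vec<u64> {
    // Each entry is (chunk id, min time, max time).
    chunks
        .iter()
        .filter(|&&(_, min_time, _)| min_time <= query_at)
        .map(|&(id, _, _)| id)
        .collect()
}
```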
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
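That overlap criterion boils down to a classic interval-intersection test — a sketch with plain integers standing in for time values:

```rust
/// A chunk is a candidate for a range query iff its time range and the
/// queried range intersect; per-row filtering happens later in the caller.
fn may_overlap(chunk_min: i64, chunk_max: i64, query_min: i64, query_max: i64) -> bool {
    chunk_min <= query_max && chunk_max >= query_min
}
```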
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
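One hypothetical way to elect among those neighbors — pick the smallest candidate that would still respect the row budget after merging (a sketch; the crate's real heuristic may weigh bytes and other factors differently):

```rust
/// `neighbors` are (chunk id, num rows) pairs gathered from all the indices
/// the incoming chunk is about to join.
fn elect_compaction_candidate(
    incoming_num_rows: u64,
    neighbors: &[(u64, u64)],
    chunk_max_rows: u64,
) -> Option<u64> {
    neighbors
        .iter()
        .filter(|&&(_, num_rows)| incoming_num_rows + num_rows <= chunk_max_rows)
        .min_by_key(|&&(_, num_rows)| num_rows)
        .map(|&(id, _)| id)
}
```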
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. its ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
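The compaction bookkeeping described in the `ChunkStoreDiff` docs above (the pre-compaction chunk is what gets exposed, and it is guaranteed to appear among the compaction report's source chunks, while the destination chunk id is guaranteed to be brand new) can be modeled with plain identifiers. This is a simplified sketch using hypothetical stand-in types, not the real `re_chunk` / `re_chunk_store` API:

```rust
use std::collections::HashSet;

/// Hypothetical stand-ins for `ChunkId` and `ChunkCompactionReport`;
/// the real types live in `re_chunk` / `re_chunk_store`.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct ChunkId(u64);

struct CompactionReport {
    /// The source chunks that were merged away.
    srcs: HashSet<ChunkId>,
    /// The freshly minted destination chunk: never seen before.
    new_chunk: ChunkId,
}

/// A subscriber that only wants to process genuinely new data can keep a set
/// of already-seen chunk ids and skip reprocessing compacted sources.
struct Subscriber {
    seen: HashSet<ChunkId>,
}

impl Subscriber {
    /// Returns `true` if this chunk's rows are new to the subscriber.
    fn on_addition(&mut self, chunk: ChunkId, compacted: Option<&CompactionReport>) -> bool {
        if let Some(report) = compacted {
            // The added chunk is guaranteed to be among the compaction sources.
            debug_assert!(report.srcs.contains(&chunk));
            // The destination id must never have been seen before.
            debug_assert!(!self.seen.contains(&report.new_chunk));
            self.seen.insert(report.new_chunk.clone());
        }
        self.seen.insert(chunk)
    }
}

fn main() {
    let mut sub = Subscriber { seen: HashSet::new() };
    let report = CompactionReport {
        srcs: [ChunkId(1)].into_iter().collect(),
        new_chunk: ChunkId(2),
    };
    // First sight of chunk 1: new data, even though it was compacted on arrival.
    assert!(sub.on_addition(ChunkId(1), Some(&report)));
    // Seeing it again is not new data.
    assert!(!sub.on_addition(ChunkId(1), None));
    println!("seen {} chunks", sub.seen.len());
}
```

This only illustrates the invariants stated above; a real subscriber would implement `ChunkStoreSubscriber` and work with actual `Arc<Chunk>` values.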
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
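The `protected_time_ranges` field above says that data within the given ranges must not be removed. A minimal sketch of that protection check, using a plain inclusive integer range as a hypothetical stand-in for `Timeline` / `ResolvedTimeRange` (this is not the real `GarbageCollectionOptions::is_chunk_protected` implementation):

```rust
/// Hypothetical stand-in for `ResolvedTimeRange`: an inclusive [min, max] span.
#[derive(Clone, Copy)]
struct TimeRange {
    min: i64,
    max: i64,
}

impl TimeRange {
    /// Two inclusive ranges intersect iff each starts before the other ends.
    fn intersects(&self, other: &TimeRange) -> bool {
        self.min <= other.max && other.min <= self.max
    }
}

/// A chunk is protected from garbage collection if its time range overlaps
/// any protected range (simplified: a single timeline, no static data).
fn is_chunk_protected(protected: &[TimeRange], chunk_range: TimeRange) -> bool {
    protected.iter().any(|p| p.intersects(&chunk_range))
}

fn main() {
    let protected = [TimeRange { min: 100, max: 200 }];
    // Overlaps the protected range: must not be removed.
    assert!(is_chunk_protected(&protected, TimeRange { min: 150, max: 300 }));
    // Entirely outside the protected range: eligible for collection.
    assert!(!is_chunk_protected(&protected, TimeRange { min: 201, max: 300 }));
    println!("ok");
}
```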
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
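The column ordering contract stated by `QueryEngine::schema` and `QueryEngine::schema_for_query` (time columns first, then component columns, each lexically sorted) can be sketched with plain strings. The `Column` enum below is a hypothetical simplification of `ColumnDescriptor`, not the real `re_chunk_store` type:

```rust
/// Hypothetical simplification of `ColumnDescriptor`: a column is either
/// a time column or a component column, identified by name.
#[derive(Debug, PartialEq, Eq)]
enum Column {
    Time(String),
    Component(String),
}

/// Sort columns per the documented contract: time columns first
/// (lexically), then component columns (lexically).
fn sort_schema(mut cols: Vec<Column>) -> Vec<Column> {
    cols.sort_by_key(|c| match c {
        Column::Time(name) => (0, name.clone()),
        Column::Component(name) => (1, name.clone()),
    });
    cols
}

fn main() {
    let cols = vec![
        Column::Component("Radius".to_owned()),
        Column::Time("log_time".to_owned()),
        Column::Component("Color".to_owned()),
        Column::Time("frame_nr".to_owned()),
    ];
    let sorted = sort_schema(cols);
    assert_eq!(
        sorted,
        vec![
            Column::Time("frame_nr".to_owned()),
            Column::Time("log_time".to_owned()),
            Column::Component("Color".to_owned()),
            Column::Component("Radius".to_owned()),
        ]
    );
    println!("ordering ok");
}
```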
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles run arbitrarily loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
impl<T> BorrowMut<T> for Twhere
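The guard-based access pattern described above for `EntityDb::storage_engine` (the engine field is never exposed, and readers only ever get a read-only guard) can be sketched with std primitives. `Db`, `Store`, and `add_rows` below are hypothetical stand-ins, not the real rerun types:

```rust
use std::sync::{Arc, RwLock, RwLockReadGuard};

// Hypothetical stand-in for the real StorageEngine internals.
struct Store {
    num_rows: u64,
}

struct Db {
    // Kept private: no method ever hands out `&mut Store` or the lock itself.
    engine: Arc<RwLock<Store>>,
}

impl Db {
    fn new() -> Self {
        Self {
            engine: Arc::new(RwLock::new(Store { num_rows: 0 })),
        }
    }

    /// Read-only access, mirroring `EntityDb::storage_engine`.
    fn storage_engine(&self) -> RwLockReadGuard<'_, Store> {
        self.engine.read().unwrap()
    }

    /// Mutation goes through `&mut self`, so the borrow checker statically
    /// proves that no read guard from `storage_engine` is still alive.
    fn add_rows(&mut self, n: u64) {
        self.engine.write().unwrap().num_rows += n;
    }
}

fn main() {
    let mut db = Db::new();
    db.add_rows(3);
    let guard = db.storage_engine();
    assert_eq!(guard.num_rows, 3);
    // Calling `db.add_rows(1)` here would be a compile error while `guard` lives.
    println!("{}", guard.num_rows);
}
```

Because mutation requires `&mut self` while the guard immutably borrows `self`, deadlocks of the "write while holding a read" kind are ruled out at compile time, which is the static guarantee the field documentation refers to.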
diff --git a/docs/rust/head/re_chunk_store/events/index.html b/docs/rust/head/re_chunk_store/events/index.html
index 39b24edb7cf5..ec400cba1c76 100644
--- a/docs/rust/head/re_chunk_store/events/index.html
+++ b/docs/rust/head/re_chunk_store/events/index.html
@@ -1 +1 @@
-re_chunk_store::events - Rust Module re_chunk_store::events
source · Structs§
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
.
Enums§
- Is it an addition or a deletion?
\ No newline at end of file
+re_chunk_store::events - Rust Module re_chunk_store::events
source · Structs§
- Reports which [
Chunk
]s were merged into a new [Chunk
] during a compaction. - Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
.
Enums§
- Is it an addition or a deletion?
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html
index f926911cbf88..433db0f67560 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store::events - Rust Struct re_chunk_store::events::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
index 59cd9a05c749..dad7a7babd01 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store::events - Rust Struct re_chunk_store::events::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
index e2912206bae8..7b696a90c0c2 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/external/index.html b/docs/rust/head/re_chunk_store/external/index.html
index 506ec67a4aa0..6e14f17547b7 100644
--- a/docs/rust/head/re_chunk_store/external/index.html
+++ b/docs/rust/head/re_chunk_store/external/index.html
@@ -1 +1 @@
-
re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
+re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
index a40c03ed50a7..38d19abf3767 100644
--- a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
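A minimal sketch of how `protected_time_ranges` can gate chunk removal during garbage collection: a chunk is protected if any of its per-timeline time ranges overlaps a protected range on the same timeline. `GcOptions`, `TimeRange`, and the overlap test below are illustrative assumptions, not the real `GarbageCollectionOptions` implementation:

```rust
use std::collections::HashMap;

type Timeline = &'static str;

#[derive(Clone, Copy)]
struct TimeRange {
    min: i64,
    max: i64,
}

impl TimeRange {
    // Two inclusive ranges overlap iff each starts before the other ends.
    fn intersects(&self, other: &TimeRange) -> bool {
        self.min <= other.max && other.min <= self.max
    }
}

struct GcOptions {
    protected_time_ranges: HashMap<Timeline, TimeRange>,
}

impl GcOptions {
    /// If true, we cannot remove a chunk covering `chunk_ranges`.
    fn is_chunk_protected(&self, chunk_ranges: &HashMap<Timeline, TimeRange>) -> bool {
        chunk_ranges.iter().any(|(timeline, range)| {
            self.protected_time_ranges
                .get(timeline)
                .map_or(false, |protected| protected.intersects(range))
        })
    }
}

fn main() {
    let opts = GcOptions {
        protected_time_ranges: HashMap::from([("frame", TimeRange { min: 10, max: 20 })]),
    };
    let inside = HashMap::from([("frame", TimeRange { min: 15, max: 30 })]);
    let outside = HashMap::from([("frame", TimeRange { min: 21, max: 30 })]);
    assert!(opts.is_chunk_protected(&inside));
    assert!(!opts.is_chunk_protected(&outside));
    println!("ok");
}
```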
diff --git a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
index 16f1f4b9cb59..81be7f1eac1a 100644
--- a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
+++ b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
@@ -1 +1 @@
-RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust
\ No newline at end of file
+RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust pub type RemovableChunkIdPerTimePerComponentPerTimelinePerEntity = IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>;
Aliased Type§
struct RemovableChunkIdPerTimePerComponentPerTimelinePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/index.html b/docs/rust/head/re_chunk_store/index.html
index 86381627f119..82daf495fa2e 100644
--- a/docs/rust/head/re_chunk_store/index.html
+++ b/docs/rust/head/re_chunk_store/index.html
@@ -13,6 +13,6 @@ §Feature
deadlock_detection
— Enables parking_lot
’s deadlock detection background thread.
-
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
+
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- Reports which [
Chunk
]s were merged into a new [Chunk
] during a compaction. - A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
expressed as a set of
EntityPath
s and their associated [ComponentName
]s.
Enums§
- Is it an addition or a deletion?
- Describes a column selection to return as part of a query.
- Specifies how null values should be filled in the returned dataframe.
Traits§
- A
ChunkStoreSubscriber
that is instantiated for each unique StoreId
.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
index b0541f045337..1a32a5289c2a 100644
--- a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
index c8529958e388..0bc8146a0437 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
@@ -1,30 +1,30 @@
ChunkIdSetPerTime in re_chunk_store::store - Rust Struct re_chunk_store::store::ChunkIdSetPerTime
source · pub struct ChunkIdSetPerTime {
pub(crate) max_interval_length: u64,
- pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
- pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}
Fields§
§max_interval_length: u64
Keeps track of the longest interval being currently stored in the two maps below.
This is used to bound the backwards linear walk when looking for overlapping chunks in
latest-at queries.
See ChunkStore::latest_at
implementation comments for more details.
-§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific start time.
+§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific start time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the first timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the first timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the first timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
-§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific end time.
+§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific end time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the last timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the last timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the last timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
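The bounded backward walk that `max_interval_length` enables can be sketched with plain `BTreeMap`s. This is an illustrative stand-in, not the crate's actual code: the integer aliases and `latest_at_candidates` are assumptions, and the real store walks its per-start/end-time indices with more care.

```rust
use std::collections::{BTreeMap, BTreeSet};

// Simplified stand-ins for the real TimeInt and ChunkId types.
type TimeInt = i64;
type ChunkId = u64;

/// Starting from the last start time <= `query_time`, walk backwards, but only
/// as far back as the longest stored interval could possibly reach: anything
/// starting earlier cannot overlap the query time.
fn latest_at_candidates(
    per_start_time: &BTreeMap<TimeInt, BTreeSet<ChunkId>>,
    max_interval_length: u64,
    query_time: TimeInt,
) -> Vec<ChunkId> {
    let mut out = Vec::new();
    if let Some((&latest_start, _)) = per_start_time.range(..=query_time).next_back() {
        let lower_bound = latest_start - max_interval_length as TimeInt;
        for (_, ids) in per_start_time.range(lower_bound..=latest_start) {
            out.extend(ids.iter().copied());
        }
    }
    out
}

fn main() {
    let mut idx: BTreeMap<TimeInt, BTreeSet<ChunkId>> = BTreeMap::new();
    idx.entry(10).or_default().insert(1); // chunk 1: interval [10, 12]
    idx.entry(11).or_default().insert(2); // chunk 2: interval [11, 14] (longest: 3)
    idx.entry(40).or_default().insert(3); // chunk 3: interval [40, 41]
    // Chunks 1 and 2 both overlap a latest-at query at t=12.
    println!("{:?}", latest_at_candidates(&idx, 3, 12));
}
```

The point of the bound is that the walk stays O(log n + k) in the number of chunks actually near the query time, instead of degrading into a full backwards scan.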
Trait Implementations§
source§impl Clone for ChunkIdSetPerTime
source§fn clone(&self) -> ChunkIdSetPerTime
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkIdSetPerTime
source§impl Default for ChunkIdSetPerTime
source§fn default() -> ChunkIdSetPerTime
Returns the “default value” for a type. Read moreAuto Trait Implementations§
§impl Freeze for ChunkIdSetPerTime
§impl RefUnwindSafe for ChunkIdSetPerTime
§impl Send for ChunkIdSetPerTime
§impl Sync for ChunkIdSetPerTime
§impl Unpin for ChunkIdSetPerTime
§impl UnwindSafe for ChunkIdSetPerTime
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
index b2fd5a8a0512..812f0fa07a6b 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
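The RowId-driven ordering above can be sketched as a pass over `chunk_ids_per_min_row_id` in ascending key order. This is a simplified illustration under assumed names (`pick_chunks_to_drop`, the size map, and the byte budget are not the crate's API):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Simplified stand-ins for the real RowId and ChunkId types.
type RowId = u64;
type ChunkId = u64;

/// Drop chunks in ascending min-RowId order (i.e. global data order as defined
/// by the clients' wall-clocks) until at least `num_bytes_to_drop` bytes are
/// reclaimed, skipping protected chunks.
fn pick_chunks_to_drop(
    chunk_ids_per_min_row_id: &BTreeMap<RowId, Vec<ChunkId>>,
    chunk_sizes: &BTreeMap<ChunkId, u64>,
    protected: &BTreeSet<ChunkId>,
    num_bytes_to_drop: u64,
) -> Vec<ChunkId> {
    let mut dropped = Vec::new();
    let mut reclaimed = 0u64;
    for ids in chunk_ids_per_min_row_id.values() {
        for &id in ids {
            if reclaimed >= num_bytes_to_drop {
                return dropped;
            }
            if protected.contains(&id) {
                continue; // protected chunks survive the pass
            }
            reclaimed += chunk_sizes.get(&id).copied().unwrap_or(0);
            dropped.push(id);
        }
    }
    dropped
}

fn main() {
    let per_min: BTreeMap<RowId, Vec<ChunkId>> =
        [(1, vec![10]), (2, vec![11]), (3, vec![12])].into_iter().collect();
    let sizes: BTreeMap<ChunkId, u64> =
        [(10, 100), (11, 100), (12, 100)].into_iter().collect();
    let protected: BTreeSet<ChunkId> = [10].into_iter().collect();
    // Chunk 10 is protected; 11 and 12 are dropped to meet the 150-byte budget.
    println!("{:?}", pick_chunks_to_drop(&per_min, &sizes, &protected, 150));
}
```

Because the map of vecs is keyed on each chunk's smallest RowId, iterating it front to back is what makes the collection deterministic across timelines.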
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
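The "static unconditionally overrides temporal" rule can be condensed into a tiny sketch (`relevant_chunks` is an illustrative helper under assumed names, not the crate's API):

```rust
// Simplified stand-in for the real ChunkId type.
type ChunkId = u64;

/// If the entity has static data for the queried component, it alone is
/// relevant; only otherwise do the temporal candidates matter.
fn relevant_chunks(
    static_chunk: Option<ChunkId>,
    temporal_candidates: Vec<ChunkId>,
) -> Vec<ChunkId> {
    match static_chunk {
        Some(id) => vec![id], // static shadows temporal, unconditionally
        None => temporal_candidates,
    }
}

fn main() {
    assert_eq!(relevant_chunks(Some(7), vec![1, 2]), vec![7]);
    assert_eq!(relevant_chunks(None, vec![1, 2]), vec![1, 2]);
}
```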
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
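The "may contain data that overlaps with the queried range" test reduces to a plain interval intersection. A minimal sketch (the real store answers this via its per-start/end-time indices rather than a predicate per chunk):

```rust
/// A chunk spanning [chunk_start, chunk_end] on a timeline may be relevant to
/// a range query over [query_min, query_max] iff the two intervals intersect.
fn may_overlap(chunk_start: i64, chunk_end: i64, query_min: i64, query_max: i64) -> bool {
    chunk_start <= query_max && chunk_end >= query_min
}

fn main() {
    assert!(may_overlap(5, 10, 8, 20));   // partial overlap
    assert!(!may_overlap(5, 10, 11, 20)); // disjoint: chunk ends before the range starts
    assert!(may_overlap(5, 10, 0, 5));    // touching at a single timestamp still counts
}
```

This is also why the result set is a superset of what the query actually needs: overlap is necessary but not sufficient, hence the follow-up filtering with `Chunk::range`.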
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
+sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
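The three insertion rules above can be sketched with a toy store (assumed names throughout; `FakeChunk` and this `insert_chunk` are illustrative stand-ins, not the real `Chunk` API):

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq)]
enum InsertOutcome { Inserted, NoOp }

#[derive(Debug, PartialEq)]
struct UnsortedChunk; // stands in for ChunkStoreError::UnsortedChunk

struct FakeChunk { id: u64, row_ids: Vec<u64> }

impl FakeChunk {
    fn is_sorted(&self) -> bool { self.row_ids.windows(2).all(|w| w[0] <= w[1]) }
    fn is_empty(&self) -> bool { self.row_ids.is_empty() }
}

fn insert_chunk(
    store: &mut BTreeMap<u64, FakeChunk>,
    chunk: FakeChunk,
) -> Result<InsertOutcome, UnsortedChunk> {
    if !chunk.is_sorted() {
        return Err(UnsortedChunk); // unsorted chunks are rejected with an error
    }
    if chunk.is_empty() || store.contains_key(&chunk.id) {
        return Ok(InsertOutcome::NoOp); // empty chunk or duplicated ChunkId: no-op
    }
    store.insert(chunk.id, chunk);
    Ok(InsertOutcome::Inserted)
}

fn main() {
    let mut store = BTreeMap::new();
    assert_eq!(insert_chunk(&mut store, FakeChunk { id: 1, row_ids: vec![1, 2] }), Ok(InsertOutcome::Inserted));
    assert_eq!(insert_chunk(&mut store, FakeChunk { id: 1, row_ids: vec![3] }), Ok(InsertOutcome::NoOp));
    assert_eq!(insert_chunk(&mut store, FakeChunk { id: 2, row_ids: vec![3, 1] }), Err(UnsortedChunk));
}
```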
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
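The byte and row thresholds combine into a simple eligibility predicate for compaction. A sketch under assumed names (only the 8 MiB byte default comes from the docs above; the row values here are hypothetical placeholders):

```rust
/// Illustrative config; field names mirror the documented thresholds.
struct CompactionConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

/// A chunk may still be compacted with a neighbor only while it is under both
/// the byte threshold and the applicable row threshold.
fn can_still_be_compacted(cfg: &CompactionConfig, num_bytes: u64, num_rows: u64, time_sorted: bool) -> bool {
    // Non time-sorted chunks use the stricter, unsorted row threshold.
    let max_rows = if time_sorted { cfg.chunk_max_rows } else { cfg.chunk_max_rows_if_unsorted };
    num_bytes < cfg.chunk_max_bytes && num_rows < max_rows
}

fn main() {
    let cfg = CompactionConfig {
        chunk_max_bytes: 8 * 1024 * 1024, // 8 MiB, per the docs above
        chunk_max_rows: 4096,             // hypothetical row threshold
        chunk_max_rows_if_unsorted: 1024, // hypothetical, stricter when unsorted
    };
    assert!(can_still_be_compacted(&cfg, 1024, 100, true));
    assert!(!can_still_be_compacted(&cfg, 1024, 2048, false)); // over the unsorted row cap
}
```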
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
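The three insertion rules above (error on unsorted, no-op on duplicate or empty) can be sketched as a simple guard. `MiniStore` and its fields are illustrative assumptions, not the real `ChunkStore`; the point is only the documented contract of `insert_chunk`.

```rust
use std::collections::HashSet;

// Illustrative stand-ins for ChunkId / Chunk -- not the real re_chunk types.
#[derive(Clone, PartialEq, Eq, Hash)]
struct ChunkId(u64);

struct Chunk {
    id: ChunkId,
    sorted: bool,
    num_rows: usize,
}

struct MiniStore {
    known: HashSet<ChunkId>,
}

impl MiniStore {
    /// Mirrors the documented contract: an unsorted chunk is an error,
    /// a duplicated or empty chunk is a silent no-op, anything else
    /// modifies the store (and would notify subscribers).
    fn insert_chunk(&mut self, chunk: &Chunk) -> Result<bool, &'static str> {
        if !chunk.sorted {
            return Err("unsorted chunk");
        }
        if chunk.num_rows == 0 || self.known.contains(&chunk.id) {
            return Ok(false); // no-op: nothing changed, no events emitted
        }
        self.known.insert(chunk.id.clone());
        Ok(true) // store modified: events would be returned here
    }
}
```

Inserting the same `ChunkId` twice returns `Ok(false)` the second time, matching the "duplicated ChunkId is a no-op" rule.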
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
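The byte and row thresholds described above act as a compaction cut-off: once a chunk crosses either one, it is no longer eligible for further compaction. The struct below is a hypothetical sketch that borrows the documented field names; it is not the actual `ChunkStoreConfig`.

```rust
// Illustrative cut-off check using the documented thresholds: a chunk may
// only be compacted further while it is below BOTH limits.
struct CompactionConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
}

impl CompactionConfig {
    fn may_compact_further(&self, total_bytes: u64, total_rows: u64) -> bool {
        total_bytes < self.chunk_max_bytes && total_rows < self.chunk_max_rows
    }
}
```

With the documented 8 MiB default, a 9 MiB chunk is already past the byte threshold and would be left alone, regardless of how few rows it holds.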
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
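A plausible reading of `protected_time_ranges` is an overlap test: a chunk is protected if its time range intersects any protected range. The sketch below assumes that semantics with simplified stand-in types (plain inclusive `Range`s rather than `Timeline`/`ResolvedTimeRange`); treat it as an illustration, not the actual implementation.

```rust
// Simplified stand-in for a resolved, inclusive time range.
#[derive(Clone, Copy)]
struct Range {
    min: i64,
    max: i64,
}

// A chunk cannot be garbage collected if its time range overlaps any
// protected range. Two inclusive ranges overlap iff each one starts at or
// before the other ends.
fn is_chunk_protected(protected: &[Range], chunk: Range) -> bool {
    protected
        .iter()
        .any(|p| chunk.min <= p.max && chunk.max >= p.min)
}
```

A chunk spanning `15..=30` overlaps a protected `10..=20` and survives GC; one spanning `21..=30` does not.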
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
Module re_chunk_store::events
source · Structs§
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
.
Enums§
- Is it an addition or a deletion?
Module re_chunk_store::events
source · Structs§
- Reports which [
Chunk
]s were merged into a new [Chunk
] during a compaction. - Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
.
Enums§
- Is it an addition or a deletion?
Struct re_chunk_store::events::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
fn clone(&self) -> ChunkCompactionReport
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for ChunkCompactionReport
impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
impl Freeze for ChunkCompactionReport
impl !RefUnwindSafe for ChunkCompactionReport
impl Send for ChunkCompactionReport
impl Sync for ChunkCompactionReport
impl Unpin for ChunkCompactionReport
impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
index 59cd9a05c749..dad7a7babd01 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store::events - Rust Struct re_chunk_store::events::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
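The pre-compaction guarantee described above can be sketched with std types only (simplified `u64` ids; `DiffSketch` and `SubscriberSketch` are hypothetical stand-ins, not the actual subscriber API): because the exposed chunk is always the pre-compaction one and the compaction destination carries a fresh id, a subscriber never has to reprocess old rows.

```rust
use std::collections::BTreeSet;

type ChunkId = u64;

/// Simplified diff: the chunk exposed by the store (pre-compaction on the
/// addition path), plus an optional compaction report mapping the merged
/// source ids to the freshly minted destination id.
pub struct DiffSketch {
    pub chunk_id: ChunkId,
    pub compacted: Option<(Vec<ChunkId>, ChunkId)>, // (sources, destination)
}

/// A subscriber that never reprocesses data: each exposed chunk id is new
/// by construction, so a plain seen-set is enough to assert that.
#[derive(Default)]
pub struct SubscriberSketch {
    seen: BTreeSet<ChunkId>,
    pub processed: Vec<ChunkId>,
}

impl SubscriberSketch {
    pub fn on_addition(&mut self, diff: &DiffSketch) {
        // The exposed chunk is pre-compaction, hence never seen before.
        assert!(self.seen.insert(diff.chunk_id));
        self.processed.push(diff.chunk_id);
        if let Some((srcs, dst)) = &diff.compacted {
            // The chunk just added must be among the compaction sources...
            assert!(srcs.contains(&diff.chunk_id));
            // ...and the destination id must itself be brand new.
            assert!(self.seen.insert(*dst));
        }
    }
}
```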
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
index e2912206bae8..7b696a90c0c2 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/external/index.html b/docs/rust/head/re_chunk_store/external/index.html
index 506ec67a4aa0..6e14f17547b7 100644
--- a/docs/rust/head/re_chunk_store/external/index.html
+++ b/docs/rust/head/re_chunk_store/external/index.html
@@ -1 +1 @@
-
re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
+re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
index a40c03ed50a7..38d19abf3767 100644
--- a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
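The `protected_time_ranges` check can be sketched with std types only (a timeline reduced to its name, a resolved time range to a closed `[min, max]` interval; `GcOptionsSketch` is a hypothetical stand-in, not the real `GarbageCollectionOptions`):

```rust
use std::collections::HashMap;

/// Simplified stand-ins for the real timeline and time-range types.
pub type Timeline = &'static str;

#[derive(Clone, Copy)]
pub struct TimeRange {
    pub min: i64,
    pub max: i64,
}

pub struct GcOptionsSketch {
    /// Do not remove any data within these time ranges.
    pub protected_time_ranges: HashMap<Timeline, TimeRange>,
}

impl GcOptionsSketch {
    /// A chunk is protected if, on any timeline, its time range intersects
    /// a protected range on that same timeline.
    pub fn is_chunk_protected(&self, chunk_ranges: &HashMap<Timeline, TimeRange>) -> bool {
        chunk_ranges.iter().any(|(timeline, r)| {
            self.protected_time_ranges
                .get(timeline)
                .is_some_and(|p| r.min <= p.max && p.min <= r.max)
        })
    }
}
```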
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
index 16f1f4b9cb59..81be7f1eac1a 100644
--- a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
+++ b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
@@ -1 +1 @@
-RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust
\ No newline at end of file
+RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust pub type RemovableChunkIdPerTimePerComponentPerTimelinePerEntity = IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>;
Aliased Type§
struct RemovableChunkIdPerTimePerComponentPerTimelinePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/index.html b/docs/rust/head/re_chunk_store/index.html
index 86381627f119..82daf495fa2e 100644
--- a/docs/rust/head/re_chunk_store/index.html
+++ b/docs/rust/head/re_chunk_store/index.html
@@ -13,6 +13,6 @@ §Feature
deadlock_detection
— Enables parking_lot
’s deadlock detection background thread.
-
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
+
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- Reports which [
Chunk
]s were merged into a new [Chunk
] during a compaction. - A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
expressed as a set of
EntityPath
s and their associated [ComponentName
]s.
Enums§
- Is it an addition or a deletion?
- Describes a column selection to return as part of a query.
- Specifies how null values should be filled in the returned dataframe.
Traits§
- A
ChunkStoreSubscriber
that is instantiated for each unique StoreId
.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
index b0541f045337..1a32a5289c2a 100644
--- a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
index c8529958e388..0bc8146a0437 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
@@ -1,30 +1,30 @@
ChunkIdSetPerTime in re_chunk_store::store - Rust Struct re_chunk_store::store::ChunkIdSetPerTime
source · pub struct ChunkIdSetPerTime {
pub(crate) max_interval_length: u64,
- pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
- pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}
Fields§
§max_interval_length: u64
Keeps track of the longest interval being currently stored in the two maps below.
This is used to bound the backwards linear walk when looking for overlapping chunks in
latest-at queries.
See ChunkStore::latest_at
implementation comments for more details.
-§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific start time.
+§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific start time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the first timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the first timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the first timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
-§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific end time.
+§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific end time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the last timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the last timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the last timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
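The bounded backwards walk that `max_interval_length` enables can be sketched with std types only (`TimeInt` reduced to `i64`, `ChunkId` to `u64`; `ChunkIdSetPerTimeSketch` is a hypothetical stand-in, not the real index):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Simplified stand-ins; the real `TimeInt` and `ChunkId` types are richer.
type TimeInt = i64;
type ChunkId = u64;

/// Sketch of the `per_start_time` index plus `max_interval_length`.
pub struct ChunkIdSetPerTimeSketch {
    pub max_interval_length: u64,
    pub per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}

impl ChunkIdSetPerTimeSketch {
    /// Latest-at: collect every chunk whose interval may contain `query_time`.
    /// Since no stored interval is longer than `max_interval_length`, only
    /// start times in `[query_time - max_interval_length, query_time]` can
    /// overlap, which is what bounds the backwards walk.
    pub fn latest_at_candidates(&self, query_time: TimeInt) -> Vec<ChunkId> {
        let lower = query_time.saturating_sub(self.max_interval_length as i64);
        self.per_start_time
            .range(lower..=query_time)
            .flat_map(|(_, ids)| ids.iter().copied())
            .collect()
    }
}
```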
Trait Implementations§
source§impl Clone for ChunkIdSetPerTime
source§fn clone(&self) -> ChunkIdSetPerTime
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkIdSetPerTime
source§impl Default for ChunkIdSetPerTime
source§fn default() -> ChunkIdSetPerTime
Returns the “default value” for a type. Read moreAuto Trait Implementations§
§impl Freeze for ChunkIdSetPerTime
§impl RefUnwindSafe for ChunkIdSetPerTime
§impl Send for ChunkIdSetPerTime
§impl Sync for ChunkIdSetPerTime
§impl Unpin for ChunkIdSetPerTime
§impl UnwindSafe for ChunkIdSetPerTime
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
index b2fd5a8a0512..812f0fa07a6b 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
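The drop order described above can be sketched with std types only (`RowId` and `ChunkId` reduced to `u64`; the function name and signature are hypothetical simplifications of the real `gc_drop_at_least_num_bytes`):

```rust
use std::collections::BTreeMap;

// Simplified stand-ins; the real `RowId`/`ChunkId` types are 128-bit ids.
type RowId = u64;
type ChunkId = u64;

/// Drop whole chunks in ascending min-`RowId` order (i.e. the clients'
/// wall-clock order) until at least `num_bytes_to_drop` bytes are reclaimed.
/// Returns the dropped chunk ids, in drop order.
pub fn gc_drop_at_least_num_bytes(
    chunk_ids_per_min_row_id: &BTreeMap<RowId, Vec<ChunkId>>,
    chunk_size_bytes: &BTreeMap<ChunkId, u64>,
    mut num_bytes_to_drop: i64,
) -> Vec<ChunkId> {
    let mut dropped = Vec::new();
    // `BTreeMap` iterates in key order: smallest min-RowId first.
    for chunk_ids in chunk_ids_per_min_row_id.values() {
        for &chunk_id in chunk_ids {
            if num_bytes_to_drop <= 0 {
                return dropped;
            }
            num_bytes_to_drop -= chunk_size_bytes.get(&chunk_id).copied().unwrap_or(0) as i64;
            dropped.push(chunk_id);
        }
    }
    dropped
}
```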
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
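The three insertion rules above can be sketched with std types only (`ChunkSketch` and this `insert_chunk` are hypothetical simplifications of the real API, which returns events rather than a bool):

```rust
use std::collections::BTreeMap;

type ChunkId = u64;

/// Minimal chunk stand-in: an id plus time-stamped rows.
pub struct ChunkSketch {
    pub id: ChunkId,
    pub rows: Vec<(i64, i32)>, // (time, value)
}

impl ChunkSketch {
    pub fn is_sorted(&self) -> bool {
        self.rows.windows(2).all(|w| w[0].0 <= w[1].0)
    }
}

/// Mirrors the documented contract: unsorted chunks fail with an error,
/// while empty chunks and duplicated chunk ids are silent no-ops.
/// Returns `Ok(true)` iff the store was actually modified.
pub fn insert_chunk(
    store: &mut BTreeMap<ChunkId, ChunkSketch>,
    chunk: ChunkSketch,
) -> Result<bool, &'static str> {
    if !chunk.is_sorted() {
        return Err("unsorted chunk");
    }
    if chunk.rows.is_empty() || store.contains_key(&chunk.id) {
        return Ok(false); // no-op
    }
    store.insert(chunk.id, chunk);
    Ok(true)
}
```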
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
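The three compaction thresholds described above (`chunk_max_bytes`, `chunk_max_rows`, `chunk_max_rows_if_unsorted`) can be pictured as a simple cutoff check. The following is a hypothetical std-only sketch of that idea, not the actual `re_chunk_store` implementation; all names are illustrative:

```rust
// Hypothetical model of the compaction cutoffs documented above.
// The real ChunkStoreConfig logic in re_chunk_store may differ.
#[derive(Debug, Clone, Copy)]
struct CompactionConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

impl CompactionConfig {
    /// A chunk past any threshold is "full": it cannot be compacted further.
    /// Non time-sorted chunks use the (typically smaller) unsorted row limit.
    fn is_compactable(&self, num_bytes: u64, num_rows: u64, time_sorted: bool) -> bool {
        let row_limit = if time_sorted {
            self.chunk_max_rows
        } else {
            self.chunk_max_rows_if_unsorted
        };
        num_bytes < self.chunk_max_bytes && num_rows < row_limit
    }
}

fn main() {
    // 8 MiB byte threshold, matching the default mentioned in the docs above.
    let config = CompactionConfig {
        chunk_max_bytes: 8 * 1024 * 1024,
        chunk_max_rows: 4096,
        chunk_max_rows_if_unsorted: 1024,
    };
    assert!(config.is_compactable(1024, 100, true));
    // An unsorted chunk hits its row limit sooner than a sorted one.
    assert!(!config.is_compactable(1024, 2048, false));
    assert!(config.is_compactable(1024, 2048, true));
}
```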
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
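The `insert_chunk` rules documented above (unsorted chunks error out, duplicated `ChunkId`s and empty chunks are no-ops) can be sketched with plain std types. This is a simplified, hypothetical model, not the real `ChunkStore`, which also maintains indices, fires events, and performs compaction:

```rust
use std::collections::BTreeMap;
use std::sync::Arc;

// Stand-ins for the real Chunk/ChunkId types; purely illustrative.
#[derive(Debug)]
struct Chunk {
    id: u64, // stand-in for ChunkId
    num_rows: u64,
    is_sorted: bool,
}

#[derive(Debug, PartialEq)]
enum InsertOutcome {
    Inserted,
    NoOp, // duplicated ChunkId or empty chunk
}

#[derive(Default)]
struct Store {
    chunks_per_chunk_id: BTreeMap<u64, Arc<Chunk>>,
}

impl Store {
    fn insert_chunk(&mut self, chunk: &Arc<Chunk>) -> Result<InsertOutcome, String> {
        if !chunk.is_sorted {
            // Trying to insert an unsorted chunk fails with an error.
            return Err("unsorted chunk".to_owned());
        }
        if chunk.num_rows == 0 || self.chunks_per_chunk_id.contains_key(&chunk.id) {
            // Empty or duplicated chunks result in a no-op.
            return Ok(InsertOutcome::NoOp);
        }
        self.chunks_per_chunk_id.insert(chunk.id, Arc::clone(chunk));
        Ok(InsertOutcome::Inserted)
    }
}

fn main() {
    let mut store = Store::default();
    let chunk = Arc::new(Chunk { id: 1, num_rows: 10, is_sorted: true });
    assert_eq!(store.insert_chunk(&chunk), Ok(InsertOutcome::Inserted));
    assert_eq!(store.insert_chunk(&chunk), Ok(InsertOutcome::NoOp)); // duplicate id
    let unsorted = Arc::new(Chunk { id: 2, num_rows: 10, is_sorted: false });
    assert!(store.insert_chunk(&unsorted).is_err());
}
```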
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
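The compaction guarantee documented above — the diff's chunk appears among the compaction sources, and the destination chunk's id has never been seen before — can be stated as a small invariant check. Hypothetical std-only types below, not the actual `ChunkStoreDiff`/`ChunkCompactionReport` structs:

```rust
use std::collections::BTreeMap;

// Illustrative stand-in for ChunkCompactionReport.
struct CompactionReport {
    srcs: BTreeMap<u64, String>, // ChunkId -> source chunk (stand-in)
    new_chunk_id: u64,
}

/// Checks the documented guarantee: the diff's chunk was one of the
/// compaction sources, and the destination id is brand new.
fn compaction_is_consistent(
    diff_chunk_id: u64,
    report: &CompactionReport,
    already_seen: &[u64],
) -> bool {
    report.srcs.contains_key(&diff_chunk_id)
        && !already_seen.contains(&report.new_chunk_id)
}

fn main() {
    let report = CompactionReport {
        srcs: BTreeMap::from([(1, "a".to_string()), (2, "b".to_string())]),
        new_chunk_id: 3,
    };
    // Chunk 1 was compacted on arrival into brand-new chunk 3: consistent.
    assert!(compaction_is_consistent(1, &report, &[1, 2]));
    // Chunk 4 was never a source of this compaction: inconsistent.
    assert!(!compaction_is_consistent(4, &report, &[1, 2]));
}
```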
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
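The `protected_time_ranges` field above ("do not remove any data within these time ranges") amounts to an overlap test per timeline. A minimal sketch with std types standing in for `Timeline` and `ResolvedTimeRange`; the real `GarbageCollectionOptions::is_chunk_protected` may work differently:

```rust
use std::collections::HashMap;

// Plain std stand-ins for Timeline and ResolvedTimeRange.
type Timeline = &'static str;
type TimeRange = (i64, i64); // inclusive [min, max]

struct GcOptions {
    protected_time_ranges: HashMap<Timeline, TimeRange>,
}

impl GcOptions {
    /// If true, we cannot remove a chunk covering `chunk_range` on `timeline`:
    /// the chunk overlaps a protected range.
    fn is_chunk_protected(&self, timeline: Timeline, chunk_range: TimeRange) -> bool {
        self.protected_time_ranges
            .get(timeline)
            .map_or(false, |&(min, max)| chunk_range.0 <= max && chunk_range.1 >= min)
    }
}

fn main() {
    let opts = GcOptions {
        protected_time_ranges: HashMap::from([("frame", (100, 200))]),
    };
    assert!(opts.is_chunk_protected("frame", (150, 300))); // overlaps the range
    assert!(!opts.is_chunk_protected("frame", (0, 99))); // entirely before it
    assert!(!opts.is_chunk_protected("log_time", (150, 300))); // other timeline
}
```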
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that print helpful messages and send anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
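The `schema` docs above guarantee time columns first in lexical order, then component columns in lexical order. A sketch of that ordering rule over plain strings (a hypothetical stand-in for `ColumnDescriptor`, not the actual sort implementation):

```rust
// Hypothetical stand-in for a column descriptor: (is_time_column, name).
fn sort_schema(columns: &mut Vec<(bool, &str)>) {
    // Time columns first, then component columns; lexical order within each group.
    columns.sort_by_key(|&(is_time, name)| (!is_time, name));
}

fn main() {
    let mut cols = vec![
        (false, "Radius"),
        (true, "log_time"),
        (false, "Color"),
        (true, "frame_nr"),
    ];
    sort_schema(&mut cols);
    let names: Vec<&str> = cols.iter().map(|&(_, n)| n).collect();
    // Time columns (frame_nr, log_time) precede components (Color, Radius).
    assert_eq!(names, ["frame_nr", "log_time", "Color", "Radius"]);
}
```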
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out; only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
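`QueryHandleState` above pairs each chunk with an `AtomicU64` used as a cursor into it. A sketch of how such a cursor can advance through sorted values with only a shared reference (hypothetical struct, not the actual view-chunk layout):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A chunk of sorted index values plus a cursor into it.
struct CursoredChunk {
    values: Vec<i64>,
    cursor: AtomicU64,
}

impl CursoredChunk {
    // Yield the next value and advance the cursor, if any values remain.
    // Note this needs only &self: the atomic carries the mutation.
    fn next(&self) -> Option<i64> {
        let idx = self.cursor.fetch_add(1, Ordering::Relaxed) as usize;
        self.values.get(idx).copied()
    }
}

fn main() {
    let chunk = CursoredChunk { values: vec![10, 20, 30], cursor: AtomicU64::new(0) };
    assert_eq!(chunk.next(), Some(10));
    assert_eq!(chunk.next(), Some(20));
    assert_eq!(chunk.next(), Some(30));
    assert_eq!(chunk.next(), None); // exhausted
}
```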
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out; only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles loose arbitrarily all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
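The guard semantics described above can be sketched with std's `RwLock`: while any read guard is alive, no writer can get in. This is a minimal illustration of the pattern only, not Rerun's actual `StorageEngine` internals (the `StorageEngine` struct below is a placeholder).

```rust
use std::sync::{Arc, RwLock};

// Hypothetical stand-in for the real engine type.
struct StorageEngine {
    num_chunks: usize,
}

fn main() {
    let engine = Arc::new(RwLock::new(StorageEngine { num_chunks: 0 }));

    // Analogous to `storage_engine_arc()`: shared handles can each take a
    // read guard; multiple concurrent readers are fine.
    let read_a = engine.read().unwrap();
    let engine2 = Arc::clone(&engine);
    let read_b = engine2.read().unwrap();

    // As long as any read guard is alive, inserting data (a write lock) fails.
    assert!(engine.try_write().is_err());

    drop(read_a);
    drop(read_b);

    // Once all guards (and clones) are dropped, writes go through again.
    engine.write().unwrap().num_chunks += 1;
    assert_eq!(engine.read().unwrap().num_chunks, 1);
}
```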
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
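Latest-at semantics ("the most recent value logged at or before the query time") can be sketched with a `BTreeMap` keyed by time. This shows the query semantics only; the real store works chunk-by-chunk and caches results, and the types here are illustrative.

```rust
use std::collections::BTreeMap;

/// Returns the most recent value logged at or before `query_time`, if any.
fn latest_at(index: &BTreeMap<i64, &'static str>, query_time: i64) -> Option<(i64, &'static str)> {
    // `range(..=t)` walks entries up to and including `t`; the last one is
    // the latest-at result.
    index.range(..=query_time).next_back().map(|(&t, &v)| (t, v))
}

fn main() {
    let mut index = BTreeMap::new();
    index.insert(10, "a");
    index.insert(20, "b");

    assert_eq!(latest_at(&index, 15), Some((10, "a"))); // 10 is the latest <= 15
    assert_eq!(latest_at(&index, 25), Some((20, "b")));
    assert_eq!(latest_at(&index, 5), None); // nothing logged yet
}
```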
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
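The range-drop operation can be sketched over a time-indexed map; returning the dropped entries mirrors how the real method returns `ChunkStoreEvent`s. This is an illustration of the semantics only, assuming simple per-event storage, whereas the actual store operates on whole chunks per timeline.

```rust
use std::collections::BTreeMap;

/// Drop all events whose time falls within `[min, max]` (inclusive),
/// returning what was dropped.
fn drop_time_range(
    events: &mut BTreeMap<i64, &'static str>,
    min: i64,
    max: i64,
) -> Vec<(i64, &'static str)> {
    // Collect the keys first so we can mutate the map afterwards.
    let doomed: Vec<i64> = events.range(min..=max).map(|(&t, _)| t).collect();
    doomed.iter().map(|t| (*t, events.remove(t).unwrap())).collect()
}

fn main() {
    let mut events = BTreeMap::from([(1, "a"), (5, "b"), (9, "c")]);
    let dropped = drop_time_range(&mut events, 4, 6);
    assert_eq!(dropped, vec![(5, "b")]);
    assert_eq!(events.len(), 2); // events at 1 and 9 survive
}
```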
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
index 59cd9a05c749..dad7a7babd01 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store::events - Rust Struct re_chunk_store::events::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
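A subscriber that wants to process each chunk's rows exactly once can exploit the corollary above: because a compaction destination always carries a never-before-seen `ChunkId`, tracking seen ids is enough to avoid reprocessing compacted data. The `Diff` struct below is a hypothetical miniature, not the real `ChunkStoreDiff`.

```rust
use std::collections::HashSet;

// Hypothetical miniature of a store diff: the real type carries an
// `Arc<Chunk>` and an optional `ChunkCompactionReport`.
struct Diff {
    chunk_id: u64,               // id of the added chunk (pre-compaction)
    compacted_into: Option<u64>, // id of the new destination chunk, if any
}

/// Process each chunk id at most once; destination chunks of a compaction are
/// marked as seen because their rows already arrived via the source chunks.
fn process(diffs: &[Diff]) -> Vec<u64> {
    let mut seen = HashSet::new();
    let mut processed = Vec::new();
    for diff in diffs {
        if seen.insert(diff.chunk_id) {
            processed.push(diff.chunk_id);
        }
        if let Some(dst) = diff.compacted_into {
            seen.insert(dst);
        }
    }
    processed
}

fn main() {
    let diffs = [
        Diff { chunk_id: 1, compacted_into: None },
        Diff { chunk_id: 2, compacted_into: Some(3) }, // 1 & 2 merged into 3
        Diff { chunk_id: 3, compacted_into: None },    // must not be reprocessed
    ];
    assert_eq!(process(&diffs), vec![1, 2]);
}
```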
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
index e2912206bae8..7b696a90c0c2 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/external/index.html b/docs/rust/head/re_chunk_store/external/index.html
index 506ec67a4aa0..6e14f17547b7 100644
--- a/docs/rust/head/re_chunk_store/external/index.html
+++ b/docs/rust/head/re_chunk_store/external/index.html
@@ -1 +1 @@
-
re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
+re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
index a40c03ed50a7..38d19abf3767 100644
--- a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
index 16f1f4b9cb59..81be7f1eac1a 100644
--- a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
+++ b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
@@ -1 +1 @@
-RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust
\ No newline at end of file
+RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust pub type RemovableChunkIdPerTimePerComponentPerTimelinePerEntity = IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>;
Aliased Type§
struct RemovableChunkIdPerTimePerComponentPerTimelinePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/index.html b/docs/rust/head/re_chunk_store/index.html
index 86381627f119..82daf495fa2e 100644
--- a/docs/rust/head/re_chunk_store/index.html
+++ b/docs/rust/head/re_chunk_store/index.html
@@ -13,6 +13,6 @@ §Feature
deadlock_detection
— Enables parking_lot
’s deadlock detection background thread.
-
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
+
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- Reports which [
Chunk
]s were merged into a new [Chunk
] during a compaction. - A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
expressed as a set of
EntityPath
s and their associated [ComponentName
]s.
Enums§
- Is it an addition or a deletion?
- Describes a column selection to return as part of a query.
- Specifies how null values should be filled in the returned dataframe.
Traits§
- A
ChunkStoreSubscriber
that is instantiated for each unique StoreId
.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
index b0541f045337..1a32a5289c2a 100644
--- a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
index c8529958e388..0bc8146a0437 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
@@ -1,30 +1,30 @@
ChunkIdSetPerTime in re_chunk_store::store - Rust Struct re_chunk_store::store::ChunkIdSetPerTime
source · pub struct ChunkIdSetPerTime {
pub(crate) max_interval_length: u64,
- pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
- pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}
Fields§
§max_interval_length: u64
Keeps track of the longest interval being currently stored in the two maps below.
This is used to bound the backwards linear walk when looking for overlapping chunks in
latest-at queries.
See ChunkStore::latest_at
implementation comments for more details.
-§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific start time.
+§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific start time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the first timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the first timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the first timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
-§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific end time.
+§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific end time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the last timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the last timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the last timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
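The role of `per_start_time` together with `max_interval_length` can be sketched with a std `BTreeMap`: any chunk overlapping a query time must start within `max_interval_length` of it, which is what bounds the backwards walk. Illustration only, with plain integers standing in for `TimeInt` and `ChunkId`; not the real `ChunkIdSetPerTime`.

```rust
use std::collections::{BTreeMap, BTreeSet};

struct IntervalIndex {
    max_interval_length: i64,
    per_start_time: BTreeMap<i64, BTreeSet<u64>>,
}

impl IntervalIndex {
    fn insert(&mut self, chunk_id: u64, start: i64, end: i64) {
        self.max_interval_length = self.max_interval_length.max(end - start);
        self.per_start_time.entry(start).or_default().insert(chunk_id);
    }

    /// Candidate chunks that may overlap `at`: a chunk starting earlier than
    /// `at - max_interval_length` cannot reach `at`, so the walk stops there.
    fn overlap_candidates(&self, at: i64) -> Vec<u64> {
        let lower_bound = at - self.max_interval_length;
        self.per_start_time
            .range(lower_bound..=at)
            .flat_map(|(_, ids)| ids.iter().copied())
            .collect()
    }
}

fn main() {
    let mut index = IntervalIndex { max_interval_length: 0, per_start_time: BTreeMap::new() };
    index.insert(1, 0, 10);  // long chunk
    index.insert(2, 8, 9);   // short chunk
    index.insert(3, 40, 41); // far in the future

    let candidates = index.overlap_candidates(9);
    assert!(candidates.contains(&1) && candidates.contains(&2));
    assert!(!candidates.contains(&3));
}
```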
Trait Implementations§
source§impl Clone for ChunkIdSetPerTime
source§fn clone(&self) -> ChunkIdSetPerTime
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkIdSetPerTime
source§impl Default for ChunkIdSetPerTime
source§fn default() -> ChunkIdSetPerTime
Returns the “default value” for a type. Read moreAuto Trait Implementations§
§impl Freeze for ChunkIdSetPerTime
§impl RefUnwindSafe for ChunkIdSetPerTime
§impl Send for ChunkIdSetPerTime
§impl Sync for ChunkIdSetPerTime
§impl Unpin for ChunkIdSetPerTime
§impl UnwindSafe for ChunkIdSetPerTime
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
index b2fd5a8a0512..812f0fa07a6b 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
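The map-of-vecs shape described above can be sketched directly: using a `Vec` as the value lets two chunks share the same minimum `RowId`, while iterating the `BTreeMap` still yields global data order. Plain `u64`s stand in for `RowId` and `ChunkId` here.

```rust
use std::collections::BTreeMap;

fn main() {
    // Miniature of `chunk_ids_per_min_row_id`: chunks indexed by their
    // smallest RowId.
    let mut chunk_ids_per_min_row_id: BTreeMap<u64, Vec<u64>> = BTreeMap::new();

    // Chunks 1 and 3 share min RowId 100, e.g. because the same data was
    // logged twice.
    for (min_row_id, chunk_id) in [(100, 1), (250, 2), (100, 3)] {
        chunk_ids_per_min_row_id.entry(min_row_id).or_default().push(chunk_id);
    }

    // Iterating yields chunks in global data order, which is what the
    // garbage collector walks.
    let in_gc_order: Vec<u64> =
        chunk_ids_per_min_row_id.values().flatten().copied().collect();
    assert_eq!(in_gc_order, vec![1, 3, 2]); // both RowId-100 chunks come first
}
```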
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
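The ordering described above can be sketched as: sort chunks by their smallest `RowId` (approximating client wall-clock order) and drop from the front until enough bytes are freed. Hypothetical types and a simplified byte budget; not the real GC.

```rust
/// Drop chunks in ascending min-RowId order until at least
/// `num_bytes_to_drop` bytes have been freed; returns the ids dropped.
fn gc_drop_at_least(
    chunks: &mut Vec<(u64 /* min row id */, u64 /* bytes */)>,
    num_bytes_to_drop: u64,
) -> Vec<u64> {
    chunks.sort_by_key(|&(min_row_id, _)| min_row_id);
    let mut dropped = Vec::new();
    let mut freed = 0;
    while freed < num_bytes_to_drop {
        let Some((min_row_id, bytes)) = chunks.first().copied() else { break };
        chunks.remove(0);
        freed += bytes;
        dropped.push(min_row_id);
    }
    dropped
}

fn main() {
    let mut chunks = vec![(30, 100), (10, 100), (20, 100)];
    let dropped = gc_drop_at_least(&mut chunks, 150);
    assert_eq!(dropped, vec![10, 20]); // oldest (smallest RowId) go first
    assert_eq!(chunks, vec![(30, 100)]);
}
```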
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
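Why a latest-at query can return more than one relevant chunk is easiest to see with overlapping time ranges. The sketch below is a simplified model (my own `FakeChunk` with inclusive `i64` ranges, not the real per-timeline metadata): it anchors on the greatest chunk start at or before the query, then keeps every chunk that also covers that anchor, since any overlapping chunk could hold the winning row.

```rust
// Illustrative model: chunk ids are u64s and time is an i64.
#[derive(Clone, Debug)]
struct FakeChunk {
    chunk_id: u64,
    min_time: i64,
    max_time: i64, // inclusive
}

/// Every chunk that might hold the latest-at result for `query_time`.
fn latest_at_relevant_chunks(chunks: &[FakeChunk], query_time: i64) -> Vec<u64> {
    // Anchor: the greatest chunk start at or before the query time.
    let Some(anchor) = chunks
        .iter()
        .filter(|c| c.min_time <= query_time)
        .map(|c| c.min_time)
        .max()
    else {
        return Vec::new(); // No chunk starts at or before the query time.
    };
    // Keep every chunk that starts before the query and still covers the anchor.
    chunks
        .iter()
        .filter(|c| c.min_time <= query_time && c.max_time >= anchor)
        .map(|c| c.chunk_id)
        .collect()
}
```

Filtering the winning row out of the returned chunks is the caller's job, exactly as the docs above state.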
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
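The "may contain overlapping data" criterion is a plain interval-overlap test. A minimal sketch, with chunks modeled as `(chunk_id, (min, max))` tuples over inclusive `i64` ranges (my own stand-ins, not the real types):

```rust
// Illustrative: a chunk's per-timeline extent as an inclusive (min, max) pair.
type TimeRange = (i64, i64);

/// A chunk is relevant to a range query iff its extent overlaps the queried
/// range at all; the caller still filters individual rows afterwards.
fn range_relevant_chunks(chunks: &[(u64, TimeRange)], query: TimeRange) -> Vec<u64> {
    chunks
        .iter()
        .filter(|(_, (min, max))| *min <= query.1 && query.0 <= *max)
        .map(|(chunk_id, _)| *chunk_id)
        .collect()
}
```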
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
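The three insertion rules above (unsorted chunks error out; duplicated ids and empty chunks are silent no-ops) can be mirrored over std types. This is a sketch, assuming made-up `FakeChunk`/`u64` stand-ins rather than the real `Chunk`/`ChunkId` types:

```rust
use std::collections::BTreeMap;

#[derive(Clone, Debug, PartialEq)]
struct FakeChunk {
    chunk_id: u64,
    row_ids: Vec<u64>,
}

impl FakeChunk {
    fn is_sorted(&self) -> bool {
        self.row_ids.windows(2).all(|w| w[0] <= w[1])
    }
}

/// Mirrors the documented rules: unsorted chunks are rejected with an error,
/// while duplicated ids and empty chunks are silent no-ops.
fn insert_chunk(
    store: &mut BTreeMap<u64, FakeChunk>,
    chunk: FakeChunk,
) -> Result<bool, &'static str> {
    if !chunk.is_sorted() {
        // ChunkStoreError::UnsortedChunk in the real API.
        return Err("unsorted chunk");
    }
    if chunk.row_ids.is_empty() || store.contains_key(&chunk.chunk_id) {
        return Ok(false); // No-op: nothing inserted, no events fired.
    }
    store.insert(chunk.chunk_id, chunk);
    Ok(true)
}
```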
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
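One plausible way to "elect" a candidate among those neighbors is sketched below; the actual election heuristic in `find_and_elect_compaction_candidate` is not spelled out here, so treat this as an assumption: neighbors are reduced to their byte sizes, and we pick the smallest one whose merged size would still respect the byte threshold.

```rust
/// Among an incoming chunk's neighbors (given here as their sizes in bytes),
/// elect the smallest one that, merged with the incoming chunk, stays under
/// the byte threshold. Returns the index of the elected neighbor, if any.
fn elect_compaction_candidate(
    incoming_bytes: u64,
    neighbor_bytes: &[u64],
    chunk_max_bytes: u64,
) -> Option<usize> {
    neighbor_bytes
        .iter()
        .copied()
        .enumerate()
        .filter(|&(_, bytes)| incoming_bytes + bytes <= chunk_max_bytes)
        .min_by_key(|&(_, bytes)| bytes)
        .map(|(index, _)| index)
}
```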
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
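The thresholds above gate compaction eligibility. A minimal sketch of that check, assuming a made-up `FakeConfig` (the 8 MiB figure matches the documented default byte threshold; the row values here are invented for illustration):

```rust
/// Stand-in for the relevant ChunkStoreConfig fields.
struct FakeConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

/// A chunk stays eligible for further compaction only while it is under the
/// byte threshold and under the row threshold that applies to its sortedness.
fn can_still_be_compacted(
    cfg: &FakeConfig,
    num_bytes: u64,
    num_rows: u64,
    time_sorted: bool,
) -> bool {
    let max_rows = if time_sorted {
        cfg.chunk_max_rows
    } else {
        cfg.chunk_max_rows_if_unsorted
    };
    num_bytes < cfg.chunk_max_bytes && num_rows < max_rows
}
```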
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
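The shape of this report can be sketched with std types. This is a simplified model (rows as bare `u64`s, `FakeCompactionReport` is a made-up stand-in): merge the sources into one row-sorted chunk and keep the id-to-source mapping alongside it.

```rust
use std::collections::BTreeMap;

/// Modeled on ChunkCompactionReport: the source chunks that were merged (keyed
/// by chunk id) plus the resulting chunk.
struct FakeCompactionReport {
    srcs: BTreeMap<u64, Vec<u64>>,
    new_chunk: Vec<u64>,
}

/// Merges all source chunks into one row-sorted chunk and reports the mapping.
fn compact(srcs: BTreeMap<u64, Vec<u64>>) -> FakeCompactionReport {
    let mut new_chunk: Vec<u64> = srcs.values().flatten().copied().collect();
    new_chunk.sort_unstable();
    FakeCompactionReport { srcs, new_chunk }
}
```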
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
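The map-of-vecs layout described above is a one-liner over std types. Sketch only, with `u64` standing in for both `RowId` and `ChunkId`:

```rust
use std::collections::BTreeMap;

/// chunk_ids_per_min_row_id maps each smallest RowId to *all* chunks starting
/// there: a Vec per key, because RowIds can be duplicated across chunks.
fn index_chunk(index: &mut BTreeMap<u64, Vec<u64>>, min_row_id: u64, chunk_id: u64) {
    index.entry(min_row_id).or_default().push(chunk_id);
}
```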
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post-GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
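The arithmetic trait impls listed below are what make incremental bookkeeping possible: totals are updated on insert/GC with `+=`/`-=` instead of being rescanned every frame. A minimal std-only stand-in (field names assumed from the docs above, not the exact struct):

```rust
use std::iter::Sum;
use std::ops::{Add, AddAssign, Sub, SubAssign};

/// Minimal stand-in for ChunkStoreChunkStats: component-wise arithmetic.
#[derive(Clone, Copy, Debug, Default, PartialEq)]
struct FakeStats {
    num_chunks: u64,
    total_size_bytes: u64,
    num_rows: u64,
    num_events: u64,
}

impl Add for FakeStats {
    type Output = Self;
    fn add(self, rhs: Self) -> Self {
        Self {
            num_chunks: self.num_chunks + rhs.num_chunks,
            total_size_bytes: self.total_size_bytes + rhs.total_size_bytes,
            num_rows: self.num_rows + rhs.num_rows,
            num_events: self.num_events + rhs.num_events,
        }
    }
}

impl AddAssign for FakeStats {
    fn add_assign(&mut self, rhs: Self) {
        *self = *self + rhs;
    }
}

impl Sub for FakeStats {
    type Output = Self;
    fn sub(self, rhs: Self) -> Self {
        Self {
            num_chunks: self.num_chunks - rhs.num_chunks,
            total_size_bytes: self.total_size_bytes - rhs.total_size_bytes,
            num_rows: self.num_rows - rhs.num_rows,
            num_events: self.num_events - rhs.num_events,
        }
    }
}

impl SubAssign for FakeStats {
    fn sub_assign(&mut self, rhs: Self) {
        *self = *self - rhs;
    }
}

impl Sum for FakeStats {
    fn sum<I: Iterator<Item = Self>>(iter: I) -> Self {
        iter.fold(Self::default(), Add::add)
    }
}
```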
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
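Taken together, the three thresholds gate whether a chunk may be compacted any further. A sketch of that check, using a stand-in config struct (the numeric values below are placeholders, except the documented 8 MiB byte threshold):

```rust
// Stand-in for the three compaction-related `ChunkStoreConfig` fields.
struct ConfigSketch {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

/// A chunk can keep being compacted only while it sits below the byte
/// threshold and below the row threshold that matches its sortedness.
fn can_still_compact(bytes: u64, rows: u64, time_sorted: bool, cfg: &ConfigSketch) -> bool {
    let max_rows = if time_sorted {
        cfg.chunk_max_rows
    } else {
        cfg.chunk_max_rows_if_unsorted
    };
    bytes < cfg.chunk_max_bytes && rows < max_rows
}

fn main() {
    let cfg = ConfigSketch {
        chunk_max_bytes: 8 * 1024 * 1024, // documented 8 MiB default
        chunk_max_rows: 4096,             // placeholder value
        chunk_max_rows_if_unsorted: 256,  // placeholder value
    };
    assert!(can_still_compact(1024, 100, true, &cfg));
    // Non time-sorted chunks hit their (stricter) row cap sooner.
    assert!(!can_still_compact(1024, 500, false, &cfg));
}
```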
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions about e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must never have been seen before,
+i.e. its [ChunkId
] must never have been seen before.
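Both guarantees can be checked mechanically; a sketch with `u64` standing in for `ChunkId` and a pared-down compaction report:

```rust
use std::collections::HashSet;

/// Pared-down stand-in for `ChunkCompactionReport`.
struct CompactionReport {
    srcs: HashSet<u64>, // ids of the source chunks that were merged
    new_chunk: u64,     // id of the freshly created destination chunk
}

/// The added chunk must appear among the sources (it was compacted on
/// arrival), and the destination id must never have been seen before.
fn compaction_is_consistent(added: u64, report: &CompactionReport, seen: &HashSet<u64>) -> bool {
    report.srcs.contains(&added) && !seen.contains(&report.new_chunk)
}

fn main() {
    let seen: HashSet<u64> = [1, 2].into_iter().collect();
    let report = CompactionReport {
        srcs: [2, 3].into_iter().collect(),
        new_chunk: 4,
    };
    assert!(compaction_is_consistent(3, &report, &seen));
    // An "added" chunk missing from the sources violates the invariant.
    assert!(!compaction_is_consistent(1, &report, &seen));
}
```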
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
Returns true if this chunk is protected and thus cannot be removed.
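A sketch of that protection check, with `(i64, i64)` standing in for `ResolvedTimeRange` (inclusive bounds assumed) and strings standing in for timelines:

```rust
use std::collections::HashMap;

// Inclusive [min, max]; stand-in for `ResolvedTimeRange`.
type TimeRange = (i64, i64);

fn overlaps(a: TimeRange, b: TimeRange) -> bool {
    a.0 <= b.1 && b.0 <= a.1
}

/// A chunk is protected if, on any timeline, its time range intersects
/// one of the protected time ranges.
fn is_chunk_protected(
    chunk_ranges: &HashMap<&str, TimeRange>,
    protected: &HashMap<&str, TimeRange>,
) -> bool {
    chunk_ranges
        .iter()
        .any(|(timeline, &range)| protected.get(timeline).map_or(false, |&p| overlaps(range, p)))
}

fn main() {
    let chunk: HashMap<&str, TimeRange> = [("log_time", (5, 10))].into_iter().collect();
    let protected: HashMap<&str, TimeRange> = [("log_time", (8, 20))].into_iter().collect();
    assert!(is_chunk_protected(&chunk, &protected)); // ranges overlap at [8, 10]
}
```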
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that print helpful messages and send anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The columns are guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The columns are guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
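That ordering can be reproduced with a simple two-part sort key — sketched here over plain (kind, name) pairs rather than real `ColumnDescriptor`s:

```rust
#[derive(Clone, Copy, PartialEq)]
enum Kind {
    Time,
    Component,
}

/// Time columns first (lexically by name), then component columns
/// (lexically by name): `false < true` puts `Time` before `Component`.
fn sort_columns(cols: &mut [(Kind, String)]) {
    cols.sort_by_key(|(kind, name)| (matches!(kind, Kind::Component), name.clone()));
}

fn main() {
    let mut cols = vec![
        (Kind::Component, "Radius".to_owned()),
        (Kind::Time, "log_time".to_owned()),
        (Kind::Component, "Color".to_owned()),
        (Kind::Time, "frame_nr".to_owned()),
    ];
    sort_columns(&mut cols);
    let names: Vec<_> = cols.iter().map(|(_, n)| n.as_str()).collect();
    assert_eq!(names, ["frame_nr", "log_time", "Color", "Radius"]);
}
```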
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
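Because the unique index values are ascendingly sorted and deduped, this seek is a single binary search; a sketch over plain `i64` index values:

```rust
/// Position of the first value >= `target` in an ascending, deduped
/// slice; `None` when every value lies before `target`.
fn seek_to_index_value(unique_index_values: &[i64], target: i64) -> Option<usize> {
    let pos = unique_index_values.partition_point(|&v| v < target);
    (pos < unique_index_values.len()).then_some(pos)
}

fn main() {
    let values = [1, 3, 5];
    assert_eq!(seek_to_index_value(&values, 3), Some(1)); // exact hit
    assert_eq!(seek_to_index_value(&values, 4), Some(2)); // first value past that point
    assert_eq!(seek_to_index_value(&values, 9), None);    // nothing past that point
}
```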
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be as follows:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
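The documented column ordering (time columns first in lexical order, then component columns in lexical order) can be sketched with a toy `ColumnDescriptor`; the enum and function below are illustrative stand-ins, not the crate's real types.

```rust
/// Toy stand-in for the real `ColumnDescriptor`: either a time column
/// (e.g. `frame_nr`, `log_time`) or a component column (e.g. `Color`).
#[derive(Debug, PartialEq)]
enum ColumnDescriptor {
    Time(String),
    Component(String),
}

/// Orders columns as documented: time columns first (lexical order),
/// then component columns (lexical order).
fn sort_schema(mut cols: Vec<ColumnDescriptor>) -> Vec<ColumnDescriptor> {
    cols.sort_by_key(|c| match c {
        ColumnDescriptor::Time(name) => (0, name.clone()),
        ColumnDescriptor::Component(name) => (1, name.clone()),
    });
    cols
}
```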
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
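Against a sorted, deduped index, the "first index value available past that point" rule is a binary-search lower bound. A minimal sketch (assuming `i64` index values; the function name mirrors the method above but is not the real implementation):

```rust
/// Seek into the sorted, deduped unique index values: returns the cursor of
/// the first value >= `index_value`, or `None` if that would run past the end.
fn seek_to_index_value(unique_index_values: &[i64], index_value: i64) -> Option<usize> {
    // `partition_point` binary-searches for the first element not < index_value.
    let cur_row = unique_index_values.partition_point(|&v| v < index_value);
    (cur_row < unique_index_values.len()).then_some(cur_row)
}
```

This matches the stated cost profile: one binary search per chunk whose time range contains the value.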
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles run loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
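Latest-at semantics means "the most recent value logged at or before the query time". A toy std-only model of that lookup, with a `BTreeMap` standing in for the per-component index (all names here are hypothetical, not the crate's API):

```rust
use std::collections::BTreeMap;

/// Toy model of latest-at: given one component's values keyed by time,
/// return the most recent (time, value) at or before `query_time`, if any.
fn latest_at(per_time: &BTreeMap<i64, &'static str>, query_time: i64) -> Option<(i64, &'static str)> {
    // Walk the ordered map backwards from the query time.
    per_time.range(..=query_time).next_back().map(|(&t, &v)| (t, v))
}
```

A query before the first logged time yields `None`, which is why sparse-fill strategies exist for dataframe output.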
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
Struct re_chunk_store::events::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable. - Similarly, garbage collection always removes all the data associated with a chunk in one go: there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
chunk: Arc<Chunk>
The chunk that was added or removed.
+chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is what will be exposed here. This allows subscribers to only process data that is new, as opposed to having to reprocess old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
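The two compaction invariants stated above (the freshly added chunk is among the compaction sources, and the destination `ChunkId` has never been seen before) can be expressed as a simple check. This is a hypothetical mini-model with `u64` ids, not the real `ChunkCompactionReport` type:

```rust
use std::collections::HashSet;

/// Hypothetical mini-model of a compaction report: which source chunk ids
/// were merged into which (brand-new) destination chunk id.
struct CompactionReport {
    srcs: HashSet<u64>,
    dst: u64,
}

/// Checks both invariants: the freshly added chunk must be among the sources,
/// and the destination id must never have been observed before.
fn compaction_is_valid(report: &CompactionReport, added: u64, seen: &HashSet<u64>) -> bool {
    report.srcs.contains(&added) && !seen.contains(&report.dst)
}
```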
Implementations§
source§impl ChunkStoreDiff
impl ChunkStoreDiff
pub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
pub fn deletion(chunk: Arc<Chunk>) -> Self
pub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
fn clone(&self) -> ChunkStoreDiff
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for ChunkStoreDiff
impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
impl PartialEq for ChunkStoreDiff
impl Eq for ChunkStoreDiff
Auto Trait Implementations§
impl Freeze for ChunkStoreDiff
impl !RefUnwindSafe for ChunkStoreDiff
impl Send for ChunkStoreDiff
impl Sync for ChunkStoreDiff
impl Unpin for ChunkStoreDiff
impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
index e2912206bae8..7b696a90c0c2 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/external/index.html b/docs/rust/head/re_chunk_store/external/index.html
index 506ec67a4aa0..6e14f17547b7 100644
--- a/docs/rust/head/re_chunk_store/external/index.html
+++ b/docs/rust/head/re_chunk_store/external/index.html
@@ -1 +1 @@
-
re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
+re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
index a40c03ed50a7..38d19abf3767 100644
--- a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
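The `protected_time_ranges` field implies an interval-overlap test: a chunk is protected when its time range intersects a protected range on the same timeline. A std-only sketch (inclusive `(min, max)` pairs and string timeline names are assumptions for illustration):

```rust
use std::collections::HashMap;

/// Toy check mirroring the protected-time-range part of `is_chunk_protected`:
/// the chunk cannot be removed if its range overlaps a protected range
/// on the same timeline. Ranges are inclusive (min, max) pairs.
fn is_chunk_protected(
    protected: &HashMap<String, (i64, i64)>,
    timeline: &str,
    chunk_range: (i64, i64),
) -> bool {
    protected
        .get(timeline)
        .is_some_and(|&(lo, hi)| chunk_range.0 <= hi && chunk_range.1 >= lo)
}
```

The real method also honors `protect_latest`; only the time-range rule is modeled here.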
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
index 16f1f4b9cb59..81be7f1eac1a 100644
--- a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
+++ b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
@@ -1 +1 @@
-RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust
\ No newline at end of file
+RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust pub type RemovableChunkIdPerTimePerComponentPerTimelinePerEntity = IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>;
Aliased Type§
struct RemovableChunkIdPerTimePerComponentPerTimelinePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/index.html b/docs/rust/head/re_chunk_store/index.html
index 86381627f119..82daf495fa2e 100644
--- a/docs/rust/head/re_chunk_store/index.html
+++ b/docs/rust/head/re_chunk_store/index.html
@@ -13,6 +13,6 @@ §Feature
deadlock_detection
— Enables parking_lot
’s deadlock detection background thread.
-
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
+
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- Reports which [
Chunk
]s were merged into a new [Chunk
] during a compaction. - A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
expressed as a set of
EntityPath
s and their associated [ComponentName
]s.
Enums§
- Is it an addition or a deletion?
- Describes a column selection to return as part of a query.
- Specifies how null values should be filled in the returned dataframe.
Traits§
- A
ChunkStoreSubscriber
that is instantiated for each unique StoreId
.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
index b0541f045337..1a32a5289c2a 100644
--- a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
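The `Add`, `AddAssign`, `Sub`, and `Sum` impls above make per-chunk stats cheap to fold into store-wide accumulators such as `temporal_chunks_stats`. A minimal sketch of that pattern, using a hypothetical trimmed-down stats struct (the field names `num_chunks`, `num_rows`, `num_events` are illustrative stand-ins, not the full `ChunkStoreChunkStats`):

```rust
use std::iter::Sum;
use std::ops::Add;

// Hypothetical, trimmed-down stand-in for `ChunkStoreChunkStats`:
// just counters, so per-chunk stats can be summed into store-wide totals.
#[derive(Clone, Copy, Debug, Default, PartialEq)]
struct ChunkStats {
    num_chunks: u64,
    num_rows: u64,
    num_events: u64,
}

impl Add for ChunkStats {
    type Output = Self;
    fn add(self, rhs: Self) -> Self {
        Self {
            num_chunks: self.num_chunks + rhs.num_chunks,
            num_rows: self.num_rows + rhs.num_rows,
            num_events: self.num_events + rhs.num_events,
        }
    }
}

impl Sum for ChunkStats {
    fn sum<I: Iterator<Item = Self>>(iter: I) -> Self {
        // `Default` provides the zero value, so summing an empty iterator works.
        iter.fold(Self::default(), Add::add)
    }
}

fn main() {
    let per_chunk = [
        ChunkStats { num_chunks: 1, num_rows: 10, num_events: 30 },
        ChunkStats { num_chunks: 1, num_rows: 5, num_events: 5 },
    ];
    let total: ChunkStats = per_chunk.into_iter().sum();
    assert_eq!(total, ChunkStats { num_chunks: 2, num_rows: 15, num_events: 35 });
    println!("{total:?}");
}
```

Implementing `Sum` on top of `Add` and `Default` is what allows one-liners like `chunks.iter().map(stats_for_chunk).sum()` when maintaining the accumulated statistics incrementally.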
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
index c8529958e388..0bc8146a0437 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
@@ -1,30 +1,30 @@
ChunkIdSetPerTime in re_chunk_store::store - Rust Struct re_chunk_store::store::ChunkIdSetPerTime
source · pub struct ChunkIdSetPerTime {
pub(crate) max_interval_length: u64,
- pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
- pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}
Fields§
§max_interval_length: u64
Keeps track of the longest interval currently stored in the two maps below.
This is used to bound the backwards linear walk when looking for overlapping chunks in
latest-at queries.
See ChunkStore::latest_at
implementation comments for more details.
-§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific start time.
+§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific start time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the first timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the first timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the first timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
-§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific end time.
+§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific end time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the last timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the last timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the last timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
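The `max_interval_length` bound described above can be sketched as follows, using plain `i64` times and `u64` chunk ids as hypothetical stand-ins for `TimeInt` and `ChunkId` (a simplification for illustration, not the actual implementation):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Hypothetical, simplified stand-in for `ChunkIdSetPerTime`.
struct ChunkIdSetPerTime {
    max_interval_length: u64,
    per_start_time: BTreeMap<i64, BTreeSet<u64>>,
}

impl ChunkIdSetPerTime {
    // Latest-at: collect chunks whose start time is <= the query time,
    // walking backwards no further than the longest stored interval allows.
    fn latest_at(&self, query_time: i64) -> Vec<u64> {
        let Some((&latest_start, _)) = self.per_start_time.range(..=query_time).next_back()
        else {
            return Vec::new();
        };
        // Any chunk starting before `latest_start - max_interval_length`
        // cannot overlap the query time: this bounds the backwards walk.
        let lower_bound = latest_start.saturating_sub(self.max_interval_length as i64);
        self.per_start_time
            .range(lower_bound..=latest_start)
            .flat_map(|(_, ids)| ids.iter().copied())
            .collect()
    }
}

fn main() {
    let mut per_start_time: BTreeMap<i64, BTreeSet<u64>> = BTreeMap::new();
    per_start_time.entry(0).or_default().insert(1); // chunk 1 starts at t=0
    per_start_time.entry(5).or_default().insert(2); // chunk 2 starts at t=5
    let index = ChunkIdSetPerTime { max_interval_length: 10, per_start_time };
    let hits = index.latest_at(7);
    assert_eq!(hits, vec![1, 2]); // both chunks may overlap t=7
    println!("{hits:?}");
}
```

The point of tracking the longest interval is visible in `lower_bound`: without it, every earlier start time would have to be visited to rule out overlap.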
Trait Implementations§
source§impl Clone for ChunkIdSetPerTime
source§fn clone(&self) -> ChunkIdSetPerTime
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkIdSetPerTime
source§impl Default for ChunkIdSetPerTime
source§fn default() -> ChunkIdSetPerTime
Returns the “default value” for a type. Read moreAuto Trait Implementations§
§impl Freeze for ChunkIdSetPerTime
§impl RefUnwindSafe for ChunkIdSetPerTime
§impl Send for ChunkIdSetPerTime
§impl Sync for ChunkIdSetPerTime
§impl Unpin for ChunkIdSetPerTime
§impl UnwindSafe for ChunkIdSetPerTime
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
index b2fd5a8a0512..812f0fa07a6b 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
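The map-of-vecs shape described above can be sketched with `u64` stand-ins for `RowId` and `ChunkId` (a hypothetical simplification):

```rust
use std::collections::BTreeMap;

// Hypothetical sketch of `chunk_ids_per_min_row_id`: chunks indexed by their
// smallest RowId, with a Vec per key so duplicated RowIds stay representable.
fn register_chunk(index: &mut BTreeMap<u64, Vec<u64>>, min_row_id: u64, chunk_id: u64) {
    index.entry(min_row_id).or_default().push(chunk_id);
}

fn main() {
    let mut index: BTreeMap<u64, Vec<u64>> = BTreeMap::new();
    register_chunk(&mut index, 7, 100);
    register_chunk(&mut index, 7, 101); // same min RowId: both chunks are kept
    register_chunk(&mut index, 3, 102);
    // BTreeMap iteration yields ascending min RowId, i.e. global data order.
    let in_order: Vec<u64> = index.values().flatten().copied().collect();
    assert_eq!(in_order, vec![102, 100, 101]);
    println!("{in_order:?}");
}
```

With `BTreeMap<RowId, ChunkId>` instead, the second chunk sharing a min RowId would silently overwrite the first; the `Vec` avoids that.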
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
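The RowId-driven, chunk-level collection order described above can be sketched like this (a hypothetical simplification: integer ids, a plain byte budget instead of the actual `GarbageCollectionTarget`, and no protected chunks):

```rust
// Hypothetical sketch of chunk-level GC driven by min-RowId order:
// drop whole chunks, oldest global data first, until enough bytes are freed.
#[derive(Debug, PartialEq)]
struct Chunk {
    id: u64,
    min_row_id: u64,
    num_bytes: u64,
}

fn gc_drop_at_least_num_bytes(chunks: &mut Vec<Chunk>, num_bytes_to_drop: u64) -> Vec<u64> {
    // Global data order == ascending smallest RowId per chunk.
    chunks.sort_by_key(|c| c.min_row_id);
    let mut dropped = Vec::new();
    let mut freed = 0;
    while freed < num_bytes_to_drop {
        if chunks.is_empty() {
            break;
        }
        let chunk = chunks.remove(0);
        freed += chunk.num_bytes;
        dropped.push(chunk.id);
    }
    dropped
}

fn main() {
    let mut chunks = vec![
        Chunk { id: 10, min_row_id: 3, num_bytes: 100 },
        Chunk { id: 20, min_row_id: 1, num_bytes: 100 },
        Chunk { id: 30, min_row_id: 2, num_bytes: 100 },
    ];
    // Asking for at least 150 bytes removes the two chunks whose smallest
    // RowIds are oldest, regardless of which timelines they touch.
    assert_eq!(gc_drop_at_least_num_bytes(&mut chunks, 150), vec![20, 30]);
    assert_eq!(chunks.len(), 1);
}
```

Because the order comes from client wall-clock RowIds rather than any single timeline, the same data always gets dropped in the same order, which is what makes the collection fair and deterministic across timelines.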
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
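The static-over-temporal shadowing described above can be sketched as follows (a hypothetical simplification with `u64` chunk ids and an `i64` query time; the real query also has to handle overlapping temporal chunks):

```rust
use std::collections::BTreeMap;

// Hypothetical sketch of query-time shadowing: if the component has a static
// chunk, any temporal candidates are ignored entirely.
fn latest_at_relevant_chunks(
    static_chunk: Option<u64>,
    temporal_chunks: &BTreeMap<i64, u64>, // start time -> chunk id
    query_time: i64,
) -> Vec<u64> {
    if let Some(chunk_id) = static_chunk {
        return vec![chunk_id]; // static data unconditionally wins
    }
    temporal_chunks
        .range(..=query_time)
        .next_back()
        .map(|(_, &id)| vec![id])
        .unwrap_or_default()
}

fn main() {
    let temporal: BTreeMap<i64, u64> = [(1, 10), (5, 20)].into_iter().collect();
    assert_eq!(latest_at_relevant_chunks(None, &temporal, 7), vec![20]);
    assert_eq!(latest_at_relevant_chunks(Some(99), &temporal, 7), vec![99]);
}
```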
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify registered subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
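The three insertion rules listed above (unsorted chunks are rejected, duplicated `ChunkId`s and empty chunks are no-ops) can be sketched with a hypothetical mini-store (plain `u64` ids, a `String` error instead of `ChunkStoreError`):

```rust
use std::collections::BTreeMap;

// Hypothetical mini-store illustrating the documented insertion semantics.
#[derive(Clone)]
struct Chunk {
    id: u64,
    row_ids: Vec<u64>,
}

impl Chunk {
    fn is_empty(&self) -> bool {
        self.row_ids.is_empty()
    }
    fn is_sorted(&self) -> bool {
        self.row_ids.windows(2).all(|w| w[0] <= w[1])
    }
}

#[derive(Default)]
struct Store {
    chunks: BTreeMap<u64, Chunk>,
}

impl Store {
    /// Returns `Ok(true)` iff the store was actually modified.
    fn insert_chunk(&mut self, chunk: &Chunk) -> Result<bool, String> {
        if !chunk.is_sorted() {
            return Err(format!("chunk {} is unsorted", chunk.id)); // hard error
        }
        if chunk.is_empty() || self.chunks.contains_key(&chunk.id) {
            return Ok(false); // no-op: nothing to notify subscribers about
        }
        self.chunks.insert(chunk.id, chunk.clone());
        Ok(true)
    }
}

fn main() {
    let mut store = Store::default();
    let chunk = Chunk { id: 1, row_ids: vec![1, 2, 3] };
    assert_eq!(store.insert_chunk(&chunk), Ok(true));
    assert_eq!(store.insert_chunk(&chunk), Ok(false)); // duplicated id: no-op
    assert_eq!(store.insert_chunk(&Chunk { id: 2, row_ids: vec![] }), Ok(false)); // empty
    assert!(store.insert_chunk(&Chunk { id: 3, row_ids: vec![3, 1] }).is_err()); // unsorted
}
```

The distinction matters for subscribers: only the `Ok(true)` path would produce `ChunkStoreEvent`s, which is why the no-op cases return early before touching any index.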
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
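The threshold semantics above can be sketched as a simple eligibility check (hypothetical simplification: the 8MiB byte default is the one stated in these docs, while the row values below are made-up illustrations, not the crate's actual defaults):

```rust
// Hypothetical sketch of the documented compaction thresholds: a chunk can
// keep being compacted only while it stays under both its byte and row limits.
struct ChunkStoreConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

fn is_compactable(config: &ChunkStoreConfig, num_bytes: u64, num_rows: u64, time_sorted: bool) -> bool {
    // Non time-sorted chunks get their own (typically stricter) row limit.
    let max_rows = if time_sorted {
        config.chunk_max_rows
    } else {
        config.chunk_max_rows_if_unsorted
    };
    num_bytes < config.chunk_max_bytes && num_rows < max_rows
}

fn main() {
    let config = ChunkStoreConfig {
        chunk_max_bytes: 8 * 1024 * 1024, // 8 MiB, per the docs above
        chunk_max_rows: 4096,             // assumed value, for illustration only
        chunk_max_rows_if_unsorted: 1024, // assumed value, for illustration only
    };
    assert!(is_compactable(&config, 1024, 100, true));
    assert!(!is_compactable(&config, 16 * 1024 * 1024, 100, true)); // over byte limit
    assert!(!is_compactable(&config, 1024, 2048, false)); // over unsorted row limit
}
```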
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
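The latest-at lookup and the static-shadowing rule can be sketched with a plain `BTreeMap`; the type alias mirrors the `ChunkIdSetPerTime` name from the docs above, but the code is an illustrative stand-in, not the crate's real implementation:

```rust
use std::collections::BTreeMap;

/// Per-timeline index: time -> ids of chunks relevant at that time.
/// Illustrative stand-in for the crate's `ChunkIdSetPerTime`.
type ChunkIdSetPerTime = BTreeMap<i64, Vec<u64>>;

/// Sketch of the latest-at lookup: find the chunks at the closest time at or
/// before the query time. Because chunks may temporally overlap, more than
/// one chunk id can be relevant at that time.
fn latest_at(index: &ChunkIdSetPerTime, query_time: i64) -> Option<&Vec<u64>> {
    index.range(..=query_time).next_back().map(|(_, ids)| ids)
}

/// Static data unconditionally shadows temporal data at query time.
fn latest_at_with_static(
    static_chunk: Option<u64>,
    index: &ChunkIdSetPerTime,
    query_time: i64,
) -> Vec<u64> {
    if let Some(id) = static_chunk {
        vec![id] // static wins, regardless of the query time
    } else {
        latest_at(index, query_time).cloned().unwrap_or_default()
    }
}
```

The returned chunks are only *candidates*: as the docs note, the caller still filters them (via `Chunk::latest_at`) to find the exact row.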
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always work at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
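The "may contain overlapping data" criterion above is a plain interval-overlap test; this is a minimal stdlib-only sketch with an illustrative `ChunkRange` stand-in, not the crate's real types:

```rust
/// Simplified time extent of a chunk on one timeline (inclusive bounds).
/// Illustrative stand-in for the real chunk metadata.
#[derive(Clone)]
struct ChunkRange {
    id: u64,
    min_time: i64,
    max_time: i64,
}

/// Sketch of the range-query criterion: a chunk is relevant iff its time
/// range overlaps the queried range at all. The caller must still filter
/// individual rows within each returned chunk.
fn range_relevant_chunks(chunks: &[ChunkRange], query_min: i64, query_max: i64) -> Vec<u64> {
    chunks
        .iter()
        .filter(|c| c.min_time <= query_max && c.max_time >= query_min)
        .map(|c| c.id)
        .collect()
}
```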
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
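The three insertion rules listed above can be sketched as a tiny state machine; `MiniStore`, `InsertOutcome`, and the boolean `is_sorted` flag are illustrative stand-ins for the real `ChunkStore`/`ChunkStoreError` API:

```rust
use std::collections::BTreeSet;

#[derive(Debug, PartialEq)]
enum InsertOutcome {
    Inserted,
    NoOp, // duplicated ChunkId or empty chunk
}

#[derive(Debug, PartialEq)]
struct UnsortedChunkError;

/// Illustrative stand-in for the store: it only tracks which ids it has seen.
struct MiniStore {
    chunk_ids: BTreeSet<u64>,
}

impl MiniStore {
    /// Sketch of the `insert_chunk` rules above:
    /// - an unsorted chunk fails with an error,
    /// - a duplicated chunk id is a no-op,
    /// - an empty chunk is a no-op.
    fn insert_chunk(
        &mut self,
        id: u64,
        num_rows: usize,
        is_sorted: bool,
    ) -> Result<InsertOutcome, UnsortedChunkError> {
        if !is_sorted {
            return Err(UnsortedChunkError);
        }
        if num_rows == 0 || self.chunk_ids.contains(&id) {
            return Ok(InsertOutcome::NoOp);
        }
        self.chunk_ids.insert(id);
        Ok(InsertOutcome::Inserted)
    }
}
```

In the real API a successful, mutating insertion is also what triggers subscriber notification and yields `ChunkStoreEvent`s; the no-op paths produce none.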
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
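The cut-off these thresholds imply can be sketched as a single predicate; the parameter names mirror the `ChunkStoreConfig` fields documented here, but the function itself is an illustrative sketch, not the crate's compaction logic:

```rust
/// Sketch: a chunk remains a compaction candidate only while it is below
/// every applicable limit. `chunk_max_rows` applies to time-sorted chunks,
/// `chunk_max_rows_if_unsorted` to the rest (mirroring `ChunkStoreConfig`).
fn can_still_be_compacted(
    num_bytes: u64,
    num_rows: u64,
    is_time_sorted: bool,
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
) -> bool {
    let max_rows = if is_time_sorted {
        chunk_max_rows
    } else {
        chunk_max_rows_if_unsorted
    };
    num_bytes < chunk_max_bytes && num_rows < max_rows
}
```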
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
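The column ordering rule above (time columns first in lexical order, then component columns in lexical order) falls out of a derived `Ord` on a two-variant descriptor; this enum is an illustrative stand-in, not the crate's real `ColumnDescriptor`:

```rust
/// Illustrative stand-in for a column descriptor. With `#[derive(Ord)]`,
/// all `Time` variants sort before all `Component` variants (declaration
/// order), and names within each variant sort lexically.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
enum ColumnKind {
    Time(String),
    Component(String),
}

/// Sketch of the guaranteed schema ordering: time columns first, in lexical
/// order, then component columns, in lexical order.
fn sort_schema(mut cols: Vec<ColumnKind>) -> Vec<ColumnKind> {
    cols.sort();
    cols
}
```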
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out; only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
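The `QueryHandle` above keeps its `state` in an `OnceLock<QueryHandleState>` so that, as the docs note, "handles stay cheap to create" and the expensive initialization runs lazily on first access. A minimal std-only sketch of that pattern, with hypothetical stand-in types (not the actual rerun code):

```rust
use std::sync::OnceLock;

// Hypothetical stand-ins for the real engine and state types.
struct Engine;
struct State {
    // e.g. schemas, chunk lists, cursors...
    num_columns: usize,
}

struct Handle {
    _engine: Engine,
    state: OnceLock<State>, // lazily computed on first access
}

impl Handle {
    // Creation is cheap: no query planning happens here.
    fn new(engine: Engine) -> Self {
        Self { _engine: engine, state: OnceLock::new() }
    }

    // First call runs the expensive init; later calls return the cached state.
    fn init(&self) -> &State {
        self.state.get_or_init(|| State { num_columns: 3 })
    }
}

fn main() {
    let handle = Handle::new(Engine);
    assert_eq!(handle.init().num_columns, 3);
    // Second call hits the cache; OnceLock guarantees init ran exactly once.
    assert_eq!(handle.init().num_columns, 3);
}
```

`OnceLock` also makes the lazy init safe to trigger from several threads through a shared reference, which matches the handle's `&self` accessors.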
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
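The state above pairs an atomic `cur_row` cursor with `unique_index_values`, which is "guaranteed ascendingly sorted and deduped"; `seek_to_index_value` then lands on the first value at or past the target via binary search. A std-only sketch of that cursor pattern, with hypothetical names and `i64` standing in for `IndexValue`:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Sketch of the cursor-over-sorted-index pattern; not the actual rerun code.
struct CursorState {
    unique_index_values: Vec<i64>, // ascendingly sorted and deduped
    cur_row: AtomicU64,            // next row to yield
}

impl CursorState {
    // Seek so the next row corresponds to `index_value`, or to the first
    // index value past it if it is absent (the documented semantics).
    fn seek_to_index_value(&self, index_value: i64) {
        // partition_point is a binary search: O(log n) per seek.
        let row = self.unique_index_values.partition_point(|&v| v < index_value);
        self.cur_row.store(row as u64, Ordering::Relaxed);
    }

    fn next_index_value(&self) -> Option<i64> {
        let row = self.cur_row.fetch_add(1, Ordering::Relaxed) as usize;
        self.unique_index_values.get(row).copied()
    }
}

fn main() {
    let state = CursorState {
        unique_index_values: vec![10, 20, 40],
        cur_row: AtomicU64::new(0),
    };
    state.seek_to_index_value(30); // 30 is absent -> first value past it
    assert_eq!(state.next_index_value(), Some(40));
    assert_eq!(state.next_index_value(), None); // cursor exhausted
}
```

Keeping the cursor atomic is what lets seeking and row iteration go through `&self`, as the `seek_to_row`/`next_row` signatures above do.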
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
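Both `schema` and `schema_for_query` document the same ordering guarantee: time columns first in lexical order, then component columns in lexical order. A self-contained sketch of that ordering rule, with plain strings standing in for the real `ColumnDescriptor`s:

```rust
// Sketch of the documented column ordering; not the actual rerun code.
#[derive(Debug, PartialEq, Clone)]
enum Column {
    Time(String),      // e.g. "frame_nr", "log_time"
    Component(String), // e.g. "Color", "Radius"
}

fn sort_schema(mut columns: Vec<Column>) -> Vec<Column> {
    columns.sort_by_key(|c| match c {
        // (group, name): group 0 sorts before group 1, then lexical by name.
        Column::Time(name) => (0, name.clone()),
        Column::Component(name) => (1, name.clone()),
    });
    columns
}

fn main() {
    let sorted = sort_schema(vec![
        Column::Component("Radius".into()),
        Column::Time("log_time".into()),
        Column::Component("Color".into()),
        Column::Time("frame_nr".into()),
    ]);
    assert_eq!(
        sorted,
        vec![
            Column::Time("frame_nr".into()),
            Column::Time("log_time".into()),
            Column::Component("Color".into()),
            Column::Component("Radius".into()),
        ]
    );
}
```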
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out; only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles run loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
impl<T> Any for Twhere
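`EntityDb::latest_at` above queries with latest-at semantics: for each component, return the latest value at or before the query time. A self-contained sketch of that lookup for a single component, with a sorted `Vec` of `(time, value)` rows standing in for the real chunk store:

```rust
// Sketch of latest-at semantics for one component: given rows sorted by
// time, return the row with the greatest time <= query_time, if any.
// Not the actual rerun implementation.
fn latest_at(rows: &[(i64, u32)], query_time: i64) -> Option<(i64, u32)> {
    // partition_point finds the first row strictly past the query time.
    let idx = rows.partition_point(|&(t, _)| t <= query_time);
    idx.checked_sub(1).map(|i| rows[i])
}

fn main() {
    let rows = vec![(10, 1), (20, 2), (40, 3)];
    assert_eq!(latest_at(&rows, 25), Some((20, 2))); // latest value at t <= 25
    assert_eq!(latest_at(&rows, 5), None);           // nothing logged yet
    assert_eq!(latest_at(&rows, 40), Some((40, 3))); // exact hit counts
}
```

This is also why the docs stress mono-components for `latest_at_component`: the lookup yields one row, and that row is assumed to hold at most one instance.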
diff --git a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
index e2912206bae8..7b696a90c0c2 100644
--- a/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/events/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/external/index.html b/docs/rust/head/re_chunk_store/external/index.html
index 506ec67a4aa0..6e14f17547b7 100644
--- a/docs/rust/head/re_chunk_store/external/index.html
+++ b/docs/rust/head/re_chunk_store/external/index.html
@@ -1 +1 @@
-
re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
+re_chunk_store::external - Rust Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
index a40c03ed50a7..38d19abf3767 100644
--- a/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/gc/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
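A minimal sketch of the `protected_time_ranges` protection check, assuming simplified types (timelines as strings, ranges as inclusive `(min, max)` pairs — the real API uses `Timeline` and `ResolvedTimeRange`): a chunk is protected if its time range intersects any protected range on the same timeline.

```rust
use std::collections::HashMap;

/// Illustrative stand-in for GarbageCollectionOptions; names and
/// types are simplified, not the real API.
struct GcOptions {
    /// timeline name -> inclusive (min, max) range that must not be removed.
    protected_time_ranges: HashMap<String, (i64, i64)>,
}

impl GcOptions {
    /// A chunk overlapping a protected range cannot be removed.
    fn is_chunk_protected(&self, timeline: &str, chunk_range: (i64, i64)) -> bool {
        self.protected_time_ranges
            .get(timeline)
            .map_or(false, |&(min, max)| chunk_range.0 <= max && chunk_range.1 >= min)
    }
}

fn main() {
    let mut protected = HashMap::new();
    protected.insert("log_time".to_owned(), (100, 200));
    let opts = GcOptions { protected_time_ranges: protected };

    assert!(opts.is_chunk_protected("log_time", (150, 300))); // overlaps
    assert!(!opts.is_chunk_protected("log_time", (0, 50))); // disjoint
    assert!(!opts.is_chunk_protected("frame", (150, 300))); // other timeline
    println!("ok");
}
```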
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
index 16f1f4b9cb59..81be7f1eac1a 100644
--- a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
+++ b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
@@ -1 +1 @@
-RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust
\ No newline at end of file
+RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust pub type RemovableChunkIdPerTimePerComponentPerTimelinePerEntity = IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>;
Aliased Type§
struct RemovableChunkIdPerTimePerComponentPerTimelinePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/index.html b/docs/rust/head/re_chunk_store/index.html
index 86381627f119..82daf495fa2e 100644
--- a/docs/rust/head/re_chunk_store/index.html
+++ b/docs/rust/head/re_chunk_store/index.html
@@ -13,6 +13,6 @@ §Feature
deadlock_detection
— Enables parking_lot
’s deadlock detection background thread.
-
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
+
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- Reports which [
Chunk
]s were merged into a new [Chunk
] during a compaction. - A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
and ComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on,
expressed as a set of
EntityPath
s and their associated [ComponentName
]s.
Enums§
- Is it an addition or a deletion?
- Describes a column selection to return as part of a query.
- Specifies how null values should be filled in the returned dataframe.
Traits§
- A
ChunkStoreSubscriber
that is instantiated for each unique StoreId
.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
index b0541f045337..1a32a5289c2a 100644
--- a/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/stats/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
index c8529958e388..0bc8146a0437 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
@@ -1,30 +1,30 @@
ChunkIdSetPerTime in re_chunk_store::store - Rust Struct re_chunk_store::store::ChunkIdSetPerTime
source · pub struct ChunkIdSetPerTime {
pub(crate) max_interval_length: u64,
- pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
- pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}
Fields§
§max_interval_length: u64
Keeps track of the longest interval being currently stored in the two maps below.
This is used to bound the backwards linear walk when looking for overlapping chunks in
latest-at queries.
See ChunkStore::latest_at
implementation comments for more details.
-§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific start time.
+§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific start time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the first timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the first timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the first timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
-§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific end time.
+§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific end time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the last timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the last timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the last timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
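The role of `max_interval_length` in bounding the backward walk can be sketched as follows (a simplification with plain integers for times and chunk ids; the real index also keeps `per_end_time`): any chunk overlapping the query time must start no earlier than `query_time - max_interval_length`, so a latest-at lookup only needs to scan that bounded window of `per_start_time`.

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Sketch of the index described above, with times and ids as plain
/// integers instead of TimeInt/ChunkId.
struct ChunkIdSetPerTime {
    max_interval_length: i64,
    per_start_time: BTreeMap<i64, BTreeSet<u64>>,
}

impl ChunkIdSetPerTime {
    /// Every chunk overlapping `query_time` must start at or after
    /// `query_time - max_interval_length`: that bounds the walk.
    fn latest_at_candidates(&self, query_time: i64) -> Vec<u64> {
        let lower_bound = query_time - self.max_interval_length;
        self.per_start_time
            .range(lower_bound..=query_time)
            .flat_map(|(_, ids)| ids.iter().copied())
            .collect()
    }
}

fn main() {
    let mut per_start_time = BTreeMap::new();
    per_start_time.insert(0, BTreeSet::from([1u64]));
    per_start_time.insert(8, BTreeSet::from([2u64]));
    per_start_time.insert(10, BTreeSet::from([3u64]));
    let index = ChunkIdSetPerTime { max_interval_length: 3, per_start_time };

    // Chunks starting in [7, 10] are candidates; the chunk starting at 0
    // cannot reach time 10, since no stored chunk spans more than 3 ticks.
    assert_eq!(index.latest_at_candidates(10), vec![2, 3]);
    println!("ok");
}
```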
Trait Implementations§
source§impl Clone for ChunkIdSetPerTime
source§fn clone(&self) -> ChunkIdSetPerTime
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkIdSetPerTime
source§impl Default for ChunkIdSetPerTime
source§fn default() -> ChunkIdSetPerTime
Returns the “default value” for a type. Read moreAuto Trait Implementations§
§impl Freeze for ChunkIdSetPerTime
§impl RefUnwindSafe for ChunkIdSetPerTime
§impl Send for ChunkIdSetPerTime
§impl Sync for ChunkIdSetPerTime
§impl Unpin for ChunkIdSetPerTime
§impl UnwindSafe for ChunkIdSetPerTime
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
index b2fd5a8a0512..812f0fa07a6b 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
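The RowId-ordered, chunk-level semantics can be sketched like this (a simplification, with RowIds and ChunkIds as plain integers and an assumed side map of chunk sizes): iterating a `BTreeMap` keyed by each chunk's smallest RowId visits chunks in global data order, and dropping stops once the byte budget is met.

```rust
use std::collections::BTreeMap;

/// Sketch of RowId-ordered garbage collection: chunks are dropped in
/// ascending order of their smallest RowId (global data order) until
/// the requested number of bytes has been reclaimed. Names and the
/// size map are illustrative, not the real API.
fn gc_drop_at_least_num_bytes(
    chunk_ids_per_min_row_id: &BTreeMap<u64, Vec<u64>>,
    chunk_sizes: &BTreeMap<u64, u64>, // chunk id -> size in bytes
    mut num_bytes_to_drop: i64,
) -> Vec<u64> {
    let mut dropped = Vec::new();
    // BTreeMap iteration is ordered: oldest data (smallest RowId) first.
    for chunk_ids in chunk_ids_per_min_row_id.values() {
        if num_bytes_to_drop <= 0 {
            break;
        }
        for &chunk_id in chunk_ids {
            num_bytes_to_drop -= chunk_sizes[&chunk_id] as i64;
            dropped.push(chunk_id);
        }
    }
    dropped
}

fn main() {
    let index = BTreeMap::from([(10u64, vec![1u64]), (20, vec![2]), (30, vec![3])]);
    let sizes = BTreeMap::from([(1u64, 100u64), (2, 100), (3, 100)]);

    // Dropping 150 bytes removes the two oldest chunks, regardless of
    // which timelines they touch.
    assert_eq!(gc_drop_at_least_num_bytes(&index, &sizes, 150), vec![1, 2]);
    println!("ok");
}
```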
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
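The three insertion rules above can be sketched with a toy chunk type (our own simplification: sortedness is checked on a flat list of row ids, and the return value is a plain `bool` rather than a `Vec<ChunkStoreEvent>`):

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq)]
enum InsertError {
    UnsortedChunk,
}

/// Toy chunk: an id plus row ids that must already be sorted.
struct Chunk {
    id: u64,
    row_ids: Vec<u64>,
}

impl Chunk {
    fn is_sorted(&self) -> bool {
        self.row_ids.windows(2).all(|w| w[0] <= w[1])
    }
    fn is_empty(&self) -> bool {
        self.row_ids.is_empty()
    }
}

/// Sketch of the insertion rules: unsorted -> error; duplicate id or
/// empty chunk -> no-op (returns Ok(false)); otherwise inserted.
fn insert_chunk(store: &mut BTreeMap<u64, Chunk>, chunk: Chunk) -> Result<bool, InsertError> {
    if !chunk.is_sorted() {
        return Err(InsertError::UnsortedChunk);
    }
    if chunk.is_empty() || store.contains_key(&chunk.id) {
        return Ok(false); // no-op, no events fired
    }
    store.insert(chunk.id, chunk);
    Ok(true)
}

fn main() {
    let mut store = BTreeMap::new();
    assert_eq!(insert_chunk(&mut store, Chunk { id: 1, row_ids: vec![1, 2] }), Ok(true));
    assert_eq!(insert_chunk(&mut store, Chunk { id: 1, row_ids: vec![3] }), Ok(false)); // dup id
    assert_eq!(insert_chunk(&mut store, Chunk { id: 2, row_ids: vec![] }), Ok(false)); // empty
    assert_eq!(
        insert_chunk(&mut store, Chunk { id: 3, row_ids: vec![2, 1] }),
        Err(InsertError::UnsortedChunk)
    );
    println!("ok");
}
```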
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
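One plausible election heuristic among those neighbors can be sketched as follows (hedged: this is our simplification, not the crate's actual selection logic — the real implementation inspects per-timeline and per-component indices): pick the smallest neighbor whose merged size would still respect the compaction threshold.

```rust
/// Sketch of candidate election: among neighboring chunks, elect the
/// smallest one whose merged size stays under the compaction cap.
/// The heuristic and names are illustrative only.
fn elect_compaction_candidate(
    incoming_bytes: u64,
    neighbor_sizes: &[(u64, u64)], // (chunk id, size in bytes)
    chunk_max_bytes: u64,
) -> Option<u64> {
    neighbor_sizes
        .iter()
        .filter(|&&(_, size)| incoming_bytes + size <= chunk_max_bytes)
        .min_by_key(|&&(_, size)| size)
        .map(|&(id, _)| id)
}

fn main() {
    let neighbors = [(1u64, 700u64), (2, 300), (3, 900)];

    // With a 1000-byte cap and a 200-byte incoming chunk, chunk 2 is
    // the smallest neighbor that still fits after merging.
    assert_eq!(elect_compaction_candidate(200, &neighbors, 1000), Some(2));
    // No neighbor fits: no compaction happens.
    assert_eq!(elect_compaction_candidate(950, &neighbors, 1000), None);
    println!("ok");
}
```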
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
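A small sketch of how such thresholds gate further compaction (the struct is our own stand-in for the relevant `ChunkStoreConfig` fields; the 8 MiB value mirrors the documented byte default, while the row value here is purely illustrative):

```rust
/// Stand-in for the compaction-related config fields: once a chunk
/// reaches either limit, it is no longer a compaction candidate.
struct CompactionConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
}

impl CompactionConfig {
    fn can_still_be_compacted(&self, num_bytes: u64, num_rows: u64) -> bool {
        num_bytes < self.chunk_max_bytes && num_rows < self.chunk_max_rows
    }
}

fn main() {
    let config = CompactionConfig {
        chunk_max_bytes: 8 * 1024 * 1024, // the documented 8 MiB default
        chunk_max_rows: 4096,             // illustrative row threshold
    };

    assert!(config.can_still_be_compacted(1024, 100));
    assert!(!config.can_still_be_compacted(16 * 1024 * 1024, 100)); // over the byte cap
    println!("ok");
}
```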
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
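The RowId-driven drop order described above can be sketched with plain std types. This is a minimal model, not the real re_chunk_store implementation: `gc_drop_at_least_num_bytes` is a hypothetical stand-in, and RowIds and ChunkIds are modeled as plain `u64`s.

```rust
use std::collections::BTreeMap;

// Minimal sketch of the RowId-ordered GC walk: chunks are visited by the
// smallest RowId they contain, i.e. in global data order.
fn gc_drop_at_least_num_bytes(
    chunk_ids_per_min_row_id: &BTreeMap<u64, Vec<u64>>,
    chunk_sizes: &BTreeMap<u64, u64>, // chunk id -> size in bytes (assumed index)
    num_bytes_to_drop: u64,
) -> Vec<u64> {
    let mut dropped = Vec::new();
    let mut freed = 0u64;
    // BTreeMap iteration is ascending, i.e. oldest min-RowId first.
    for chunk_ids in chunk_ids_per_min_row_id.values() {
        for &chunk_id in chunk_ids {
            if freed >= num_bytes_to_drop {
                return dropped;
            }
            freed += chunk_sizes.get(&chunk_id).copied().unwrap_or(0);
            dropped.push(chunk_id);
        }
    }
    dropped
}
```

Because the outer map is keyed by min RowId, the walk is deterministic and fair across timelines, matching the semantics stated above.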
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
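Why a latest-at query can return more than one chunk follows from the overlap point above, and can be sketched with spans alone. Everything here (`ChunkSpan`, `latest_at_relevant`, the exact candidate criterion) is a hypothetical illustration, not the actual re_chunk_store logic.

```rust
// Chunks are modeled purely by their time span.
struct ChunkSpan { id: u64, min_time: i64, max_time: i64 }

/// Returns every chunk that could hold the latest value at `at`.
/// Because chunks may temporally overlap, several can qualify; the caller
/// then narrows down to the exact row (cf. Chunk::latest_at).
fn latest_at_relevant(chunks: &[ChunkSpan], at: i64) -> Vec<u64> {
    // The most recent chunk start at or before the query time...
    let latest_start = chunks.iter()
        .filter(|c| c.min_time <= at)
        .map(|c| c.min_time)
        .max();
    // ...and every chunk overlapping that start may still contain the
    // winning row.
    match latest_start {
        Some(start) => chunks.iter()
            .filter(|c| c.min_time <= at && c.max_time >= start)
            .map(|c| c.id)
            .collect(),
        None => Vec::new(),
    }
}
```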
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
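The range-query criterion above — a chunk is relevant iff it may overlap the queried range — is a plain closed-interval intersection test. The types and function names below are illustrative stand-ins, not the real API.

```rust
// A time span, for both chunks and queries.
struct Span { min: i64, max: i64 }

/// Two closed intervals intersect iff each starts at or before the other ends.
fn overlaps(chunk: &Span, query: &Span) -> bool {
    chunk.min <= query.max && query.min <= chunk.max
}

/// Collects every chunk whose span intersects the queried range; duplicates
/// cannot occur since each chunk is visited once.
fn range_relevant(chunks: &[(u64, Span)], query: &Span) -> Vec<u64> {
    chunks.iter()
        .filter(|(_, span)| overlaps(span, query))
        .map(|(id, _)| *id)
        .collect()
}
```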
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criteria for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
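The three insertion rules listed above (unsorted chunk fails, duplicated ChunkId is a no-op, empty chunk is a no-op) can be captured in a toy model. `ToyChunk`, `InsertOutcome`, and this `insert_chunk` are assumptions for illustration only, not the re_chunk_store types.

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq)]
enum InsertOutcome { Inserted, NoOp }

struct ToyChunk { id: u64, row_ids: Vec<u64> }

fn insert_chunk(
    store: &mut BTreeMap<u64, Vec<u64>>,
    chunk: &ToyChunk,
) -> Result<InsertOutcome, String> {
    // Analogue of the Chunk::is_sorted requirement: reject unsorted rows.
    if !chunk.row_ids.windows(2).all(|w| w[0] <= w[1]) {
        return Err("unsorted chunk".to_owned());
    }
    // Empty chunks and duplicated chunk ids are silently ignored.
    if chunk.row_ids.is_empty() || store.contains_key(&chunk.id) {
        return Ok(InsertOutcome::NoOp);
    }
    store.insert(chunk.id, chunk.row_ids.clone());
    Ok(InsertOutcome::Inserted)
}
```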
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
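Election among those many neighbors could be sketched as follows. Note the criterion used here — pick the smallest neighbor that would still fit under a byte budget after merging — is an assumption for illustration; the documented algorithm only says neighbors are candidates.

```rust
// Hypothetical election: neighbors are (chunk id, size in bytes) pairs.
fn elect_candidate(
    incoming_bytes: u64,
    neighbor_sizes: &[(u64, u64)],
    max_bytes: u64, // assumed compaction byte budget
) -> Option<u64> {
    neighbor_sizes.iter()
        // Only neighbors that keep the merged result under budget qualify...
        .filter(|&&(_, size)| incoming_bytes + size <= max_bytes)
        // ...and among those we elect the smallest one.
        .min_by_key(|&&(_, size)| size)
        .map(|&(id, _)| id)
}
```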
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
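The `Add`/`AddAssign`/`Sub`/`SubAssign` impls listed above suggest field-wise accumulation, which is why the store can keep these stats incrementally instead of recomputing them every frame. A minimal sketch of such a stats type (the struct itself is a stand-in, though the field names follow the docs):

```rust
use std::ops::AddAssign;

#[derive(Clone, Copy, Debug, Default, PartialEq)]
struct ChunkStats { num_chunks: u64, num_rows: u64, num_events: u64 }

impl AddAssign for ChunkStats {
    // Field-wise accumulation: inserting a chunk adds its stats,
    // GC'ing one subtracts them (SubAssign, omitted here, mirrors this).
    fn add_assign(&mut self, rhs: Self) {
        self.num_chunks += rhs.num_chunks;
        self.num_rows += rhs.num_rows;
        self.num_events += rhs.num_events;
    }
}
```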
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
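The byte and row thresholds can be modeled with a small self-contained check. `StoreConfig`, `ChunkMeta`, `is_compactable`, and the row-limit defaults are assumptions for illustration (only the 8MiB byte threshold comes from the docs above); they are not the actual re_chunk_store types.

```rust
struct StoreConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

impl Default for StoreConfig {
    fn default() -> Self {
        Self {
            chunk_max_bytes: 8 * 1024 * 1024, // 8MiB, per the docs above
            chunk_max_rows: 4096,             // assumed value for illustration
            chunk_max_rows_if_unsorted: 1024, // assumed value for illustration
        }
    }
}

struct ChunkMeta {
    num_bytes: u64,
    num_rows: u64,
    is_time_sorted: bool,
}

/// A chunk can be compacted further only while it sits below every
/// applicable threshold; unsorted chunks get the stricter row limit.
fn is_compactable(cfg: &StoreConfig, chunk: &ChunkMeta) -> bool {
    let row_limit = if chunk.is_time_sorted {
        cfg.chunk_max_rows
    } else {
        cfg.chunk_max_rows_if_unsorted
    };
    chunk.num_bytes < cfg.chunk_max_bytes && chunk.num_rows < row_limit
}
```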
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. its ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
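The `protected_time_ranges` check amounts to an interval-intersection test per timeline. This sketch models timelines as strings and spans as `(min, max)` pairs; it is a hypothetical illustration, not the real `is_chunk_protected`.

```rust
use std::collections::HashMap;

/// A chunk is protected if, on any timeline, its span intersects
/// that timeline's protected range.
fn is_chunk_protected(
    protected: &HashMap<String, (i64, i64)>, // timeline -> protected range
    chunk_spans: &HashMap<String, (i64, i64)>, // timeline -> chunk span
) -> bool {
    chunk_spans.iter().any(|(timeline, &(cmin, cmax))| {
        protected.get(timeline)
            // Closed intervals intersect iff each starts before the other ends.
            .is_some_and(|&(pmin, pmax)| cmin <= pmax && pmin <= cmax)
    })
}
```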
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that print helpful messages and send anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
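The `QueryEngine` surface documented in the hunk above (construction from an `.rrd` file, `schema`, `query`) can be exercised roughly as follows. This is a hedged sketch, not code from the repository: it assumes the `re_dataframe` re-exports shown in these docs, and `"example.rrd"` is a placeholder path.

```rust
use re_dataframe::{ChunkStoreConfig, QueryEngine, QueryExpression};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One engine per store found in the file; the exact `VersionPolicy`
    // variant is an assumption based on the signature shown above.
    let engines = QueryEngine::from_rrd_filepath(
        &ChunkStoreConfig::default(),
        "example.rrd",
        re_dataframe::external::re_chunk_store::VersionPolicy::Warn,
    )?;

    for (store_id, engine) in engines {
        // Full schema: time columns first (lexical order), then component
        // columns (lexical order), as documented above.
        let schema = engine.schema();
        println!("{store_id}: {} columns", schema.len());

        // `query` hands back a cheap, lazily-initialized `QueryHandle`.
        let handle = engine.query(QueryExpression::default());
        while let Some(row) = handle.next_row() {
            // Each cell holds the latest locally known value at that index.
            let _ = row;
        }
    }
    Ok(())
}
```

The sketch assumes `QueryExpression` implements `Default`; in practice you would populate its view contents, index filter, and selection before querying.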
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles run loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
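The mono-component contract can be illustrated with a small std-only sketch (hypothetical `latest_mono` helper, not the actual API): take at most one instance from the latest batch, and warn when the batch unexpectedly holds more.

```rust
/// Hypothetical sketch of "mono-component" access: use the first instance
/// from the latest batch, warning if the batch holds more than one.
fn latest_mono<C: Clone>(batch: &[C]) -> Option<C> {
    if batch.len() > 1 {
        eprintln!("expected a mono-component, got {} instances", batch.len());
    }
    batch.first().cloned()
}

fn main() {
    assert_eq!(latest_mono(&[42]), Some(42));
    assert_eq!(latest_mono::<i32>(&[]), None);
}
```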
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, for all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
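The generation-based change detection can be sketched in plain Rust (hypothetical `Store`/`CachedSum` names, not Rerun's types): every edit bumps a counter, and derived state is recomputed only when the observed generation differs from the cached one.

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Generation(u64);

struct Store {
    data: Vec<i32>,
    generation: Generation,
}

impl Store {
    fn new() -> Self {
        Self { data: vec![], generation: Generation(0) }
    }

    fn generation(&self) -> Generation {
        self.generation
    }

    fn insert(&mut self, v: i32) {
        self.data.push(v);
        self.generation.0 += 1; // every edit invalidates cached derived state
    }
}

struct CachedSum {
    seen: Option<Generation>,
    sum: i32,
}

impl CachedSum {
    fn get(&mut self, store: &Store) -> i32 {
        if self.seen != Some(store.generation()) {
            self.sum = store.data.iter().sum(); // recompute only on change
            self.seen = Some(store.generation());
        }
        self.sum
    }
}

fn main() {
    let mut store = Store::new();
    let mut cache = CachedSum { seen: None, sum: 0 };
    store.insert(2);
    store.insert(3);
    assert_eq!(cache.get(&store), 5);
    store.insert(10);
    assert_eq!(cache.get(&store), 15);
}
```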
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/external/index.html b/docs/rust/head/re_chunk_store/external/index.html
index 506ec67a4aa0..6e14f17547b7 100644
--- a/docs/rust/head/re_chunk_store/external/index.html
+++ b/docs/rust/head/re_chunk_store/external/index.html
@@ -1 +1 @@
-
Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
Module re_chunk_store::external
source · Re-exports§
pub use arrow2;
pub use re_chunk;
The default is an unbounded time budget (i.e. throughput only).
protect_latest: usize
How many component revisions to preserve on each timeline.
protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
impl GarbageCollectionOptions
pub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
pub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
impl GarbageCollectionOptions
pub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
pub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
fn clone(&self) -> GarbageCollectionOptions
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moreAuto Trait Implementations§
impl Freeze for GarbageCollectionOptions
impl RefUnwindSafe for GarbageCollectionOptions
impl Send for GarbageCollectionOptions
impl Sync for GarbageCollectionOptions
impl Unpin for GarbageCollectionOptions
impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
index 16f1f4b9cb59..81be7f1eac1a 100644
--- a/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
+++ b/docs/rust/head/re_chunk_store/gc/type.RemovableChunkIdPerTimePerComponentPerTimelinePerEntity.html
@@ -1 +1 @@
-RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust
\ No newline at end of file
+RemovableChunkIdPerTimePerComponentPerTimelinePerEntity in re_chunk_store::gc - Rust pub type RemovableChunkIdPerTimePerComponentPerTimelinePerEntity = IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>;
Aliased Type§
struct RemovableChunkIdPerTimePerComponentPerTimelinePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/index.html b/docs/rust/head/re_chunk_store/index.html
index 86381627f119..82daf495fa2e 100644
--- a/docs/rust/head/re_chunk_store/index.html
+++ b/docs/rust/head/re_chunk_store/index.html
@@ -13,6 +13,6 @@ §Feature
deadlock_detection
— Enables parking_lot
’s deadlock detection background thread.
-
impl<T> Borrow<T> for Twhere
pub type RemovableChunkIdPerTimePerComponentPerTimelinePerEntity = IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>;
Aliased Type§
struct RemovableChunkIdPerTimePerComponentPerTimelinePerEntity { /* private fields */ }
§Feature
deadlock_detection
— Enables parking_lot
’s deadlock detection background thread.
-
deadlock_detection
— Enables parking_lot
’s deadlock detection background thread.Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
andComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on, +
Re-exports§
pub use re_chunk::Chunk;
pub use re_chunk::ChunkId;
pub use re_chunk::LatestAtQuery;
pub use re_chunk::RangeQuery;
pub use re_chunk::RangeQueryOptions;
pub use re_chunk::RowId;
pub use re_log_types::ResolvedTimeRange;
pub use re_log_types::TimeInt;
pub use re_log_types::TimeType;
pub use re_log_types::Timeline;
Modules§
Structs§
- Reports which [
Chunk
]s were merged into a new [Chunk
] during a compaction. - A complete chunk store: covers all timelines, all entities, everything.
- Stats about a collection of chunks.
- Describes an atomic change in the Rerun
ChunkStore
: a chunk has been added or deleted. - The atomic unit of change in the Rerun
ChunkStore
. - Incremented on each edit.
- A ref-counted, inner-mutable handle to a
ChunkStore
. - Describes a data/component column, such as
Position3D
. - Select a component based on its
EntityPath
andComponentName
. - Describes a complete query for Rerun’s dataframe API.
- Describes a time column, such as
log_time
. - Select a time column.
- The view contents specify which subset of the database (i.e., which columns) the query runs on, expressed as a set of
EntityPath
s and their associated [ComponentName
]s.
Enums§
- Is it an addition or a deletion?
- Describes a column selection to return as part of a query.
- Specifies how null values should be filled in the returned dataframe.
Traits§
- A
ChunkStoreSubscriber
that is instantiated for each uniqueStoreId
.
Type Aliases§
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
impl ChunkStoreChunkStats
pub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
fn add_assign(&mut self, rhs: Self)
+=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
fn clone(&self) -> ChunkStoreChunkStats
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for ChunkStoreChunkStats
impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
fn default() -> ChunkStoreChunkStats
source§impl Display for ChunkStoreChunkStats
impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
fn sub_assign(&mut self, rhs: Self)
-=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
impl ChunkStoreChunkStats
pub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
fn add_assign(&mut self, rhs: Self)
+=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
fn clone(&self) -> ChunkStoreChunkStats
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for ChunkStoreChunkStats
impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
fn default() -> ChunkStoreChunkStats
source§impl Display for ChunkStoreChunkStats
impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
fn sub_assign(&mut self, rhs: Self)
-=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
impl Sum for ChunkStoreChunkStats
impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
impl Freeze for ChunkStoreChunkStats
impl RefUnwindSafe for ChunkStoreChunkStats
impl Send for ChunkStoreChunkStats
impl Sync for ChunkStoreChunkStats
impl Unpin for ChunkStoreChunkStats
impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
index c8529958e388..0bc8146a0437 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
@@ -1,30 +1,30 @@
ChunkIdSetPerTime in re_chunk_store::store - Rust Struct re_chunk_store::store::ChunkIdSetPerTime
source · pub struct ChunkIdSetPerTime {
pub(crate) max_interval_length: u64,
- pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
- pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}
Fields§
§max_interval_length: u64
Keeps track of the longest interval being currently stored in the two maps below.
This is used to bound the backwards linear walk when looking for overlapping chunks in
latest-at queries.
See ChunkStore::latest_at
implementation comments for more details.
-§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific start time.
+§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific start time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the first timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the first timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the first timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
-§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific end time.
+§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific end time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the last timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the last timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the last timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
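The role of `max_interval_length` can be illustrated with a self-contained sketch (simplified types, not the real query path): because no stored interval is longer than `max_interval_length`, a lookup at time `at` only needs to inspect start times in `[at - max_interval_length, at]`, which bounds the backwards walk over `per_start_time`.

```rust
use std::collections::{BTreeMap, BTreeSet};

type TimeInt = i64;
type ChunkId = u64;

/// Illustrative sketch of the two-map index; the `intervals` field is a
/// demo-only helper, not part of the real struct.
#[derive(Default)]
struct ChunkIdSetPerTime {
    max_interval_length: u64,
    per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
    per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
    intervals: BTreeMap<ChunkId, (TimeInt, TimeInt)>,
}

impl ChunkIdSetPerTime {
    fn insert(&mut self, start: TimeInt, end: TimeInt, id: ChunkId) {
        assert!(start <= end);
        self.max_interval_length = self.max_interval_length.max((end - start) as u64);
        self.per_start_time.entry(start).or_default().insert(id);
        self.per_end_time.entry(end).or_default().insert(id);
        self.intervals.insert(id, (start, end));
    }

    /// All chunks whose interval contains `at`. No stored interval is longer
    /// than `max_interval_length`, so only start times in
    /// `[at - max_interval_length, at]` can belong to an overlapping chunk.
    fn overlapping(&self, at: TimeInt) -> Vec<ChunkId> {
        let lo = at.saturating_sub(self.max_interval_length as TimeInt);
        let mut out: Vec<ChunkId> = self
            .per_start_time
            .range(lo..=at)
            .flat_map(|(_, ids)| ids.iter().copied())
            .filter(|id| self.intervals[id].1 >= at) // must also end at or after `at`
            .collect();
        out.sort_unstable();
        out
    }
}

fn main() {
    let mut idx = ChunkIdSetPerTime::default();
    idx.insert(0, 10, 1);
    idx.insert(5, 7, 2);
    idx.insert(20, 30, 3);
    assert_eq!(idx.overlapping(6), vec![1, 2]);
    assert_eq!(idx.overlapping(15), Vec::<u64>::new());
}
```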
Trait Implementations§
source§impl Clone for ChunkIdSetPerTime
source§fn clone(&self) -> ChunkIdSetPerTime
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkIdSetPerTime
source§impl Default for ChunkIdSetPerTime
source§fn default() -> ChunkIdSetPerTime
Returns the “default value” for a type. Read moreAuto Trait Implementations§
§impl Freeze for ChunkIdSetPerTime
§impl RefUnwindSafe for ChunkIdSetPerTime
§impl Send for ChunkIdSetPerTime
§impl Sync for ChunkIdSetPerTime
§impl Unpin for ChunkIdSetPerTime
§impl UnwindSafe for ChunkIdSetPerTime
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
index b2fd5a8a0512..812f0fa07a6b 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
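A minimal sketch of that chunk-level, RowId-ordered collection, assuming a simplified `chunk_ids_per_min_row_id` index and a hypothetical size-lookup closure (not the real GC, which also honors protected chunks and time budgets):

```rust
use std::collections::BTreeMap;

type RowId = u64;
type ChunkId = u64;

/// Drop whole chunks in ascending min-RowId order (global data order, i.e.
/// the clients' wall-clocks) until at least `num_bytes_to_drop` are freed.
fn gc_drop_at_least(
    chunk_ids_per_min_row_id: &mut BTreeMap<RowId, Vec<ChunkId>>,
    chunk_size: &dyn Fn(ChunkId) -> u64, // hypothetical size lookup
    num_bytes_to_drop: u64,
) -> Vec<ChunkId> {
    let mut dropped = Vec::new();
    let mut freed = 0u64;
    while freed < num_bytes_to_drop {
        // BTreeMap iterates in ascending RowId order: oldest data first.
        let Some((&row_id, _)) = chunk_ids_per_min_row_id.iter().next() else {
            break;
        };
        for chunk_id in chunk_ids_per_min_row_id.remove(&row_id).unwrap() {
            freed += chunk_size(chunk_id);
            dropped.push(chunk_id);
        }
    }
    dropped
}

fn main() {
    let mut index: BTreeMap<RowId, Vec<ChunkId>> = BTreeMap::new();
    index.insert(1, vec![10]);
    index.insert(5, vec![11, 12]);
    index.insert(9, vec![13]);
    let dropped = gc_drop_at_least(&mut index, &|_| 100, 250);
    assert_eq!(dropped, vec![10, 11, 12]); // oldest RowIds go first
    assert!(index.contains_key(&9));
}
```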
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
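The static-shadows-temporal rule can be sketched with std types (an illustrative `ComponentIndex`, not the store's real structures): a static value, when present, wins unconditionally; otherwise the latest temporal value at or before the query time is returned.

```rust
use std::collections::BTreeMap;

type TimeInt = i64;

/// Demo-only index for one component of one entity.
struct ComponentIndex {
    static_value: Option<&'static str>,
    temporal: BTreeMap<TimeInt, &'static str>,
}

impl ComponentIndex {
    fn latest_at(&self, at: TimeInt) -> Option<&'static str> {
        if let Some(v) = self.static_value {
            return Some(v); // static data unconditionally shadows temporal data
        }
        // Otherwise: latest temporal value at or before `at`.
        self.temporal.range(..=at).next_back().map(|(_, v)| *v)
    }
}

fn main() {
    let temporal_only = ComponentIndex {
        static_value: None,
        temporal: BTreeMap::from([(1, "a"), (5, "b")]),
    };
    assert_eq!(temporal_only.latest_at(4), Some("a"));

    let shadowed = ComponentIndex {
        static_value: Some("s"),
        temporal: BTreeMap::from([(1, "a")]),
    };
    assert_eq!(shadowed.latest_at(4), Some("s"));
}
```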
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The only criterion for returning a chunk is that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
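The overlap criterion for range queries boils down to a small predicate; a sketch with hypothetical inclusive `(min, max)` time ranges (the real query operates on per-timeline indices):

```rust
/// A chunk is a candidate for a range query iff its time range on the
/// queried timeline overlaps the queried range (both ranges inclusive).
fn may_overlap(chunk: (i64, i64), query: (i64, i64)) -> bool {
    let ((cmin, cmax), (qmin, qmax)) = (chunk, query);
    cmin <= qmax && qmin <= cmax
}
```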
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The only criterion for returning a chunk is that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
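Those three rules can be sketched with a toy store (hypothetical types and field names; only the documented contract is modeled, not the real indices, compaction, or events):

```rust
use std::collections::BTreeMap;

#[derive(Clone)]
struct ToyChunk {
    id: u64,
    rows: Vec<i64>,
    sorted: bool,
}

#[derive(Default)]
struct ToyStore {
    chunks: BTreeMap<u64, ToyChunk>,
}

impl ToyStore {
    /// Mirrors the documented contract: unsorted -> error;
    /// empty chunk or duplicated id -> no-op; otherwise insert.
    /// Returns Ok(true) iff the store was actually modified.
    fn insert_chunk(&mut self, chunk: &ToyChunk) -> Result<bool, &'static str> {
        if !chunk.sorted {
            return Err("unsorted chunk");
        }
        if chunk.rows.is_empty() || self.chunks.contains_key(&chunk.id) {
            return Ok(false); // no-op
        }
        self.chunks.insert(chunk.id, chunk.clone());
        Ok(true)
    }
}
```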
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
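Taken together, the thresholds above amount to a predicate along these lines (a hedged sketch: field names mirror the config fields, but the exact decision logic is an assumption drawn from the docs):

```rust
struct CompactionConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

/// A chunk remains a compaction candidate only while it sits below the byte
/// threshold and below the row threshold applicable to its sortedness.
fn may_compact_further(
    cfg: &CompactionConfig,
    num_bytes: u64,
    num_rows: u64,
    time_sorted: bool,
) -> bool {
    let max_rows = if time_sorted {
        cfg.chunk_max_rows
    } else {
        cfg.chunk_max_rows_if_unsorted
    };
    num_bytes < cfg.chunk_max_bytes && num_rows < max_rows
}
```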
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of an individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
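This RowId-driven order can be sketched over the `chunk_ids_per_min_row_id` shape described earlier (toy types; the real GC additionally honors protected chunks, time budgets, and fires `ChunkStoreEvent`s):

```rust
use std::collections::BTreeMap;

/// Drop whole chunks in ascending min-RowId order (i.e. global data order)
/// until at least `num_bytes_to_drop` bytes have been freed.
/// Returns the ids of the dropped chunks, in drop order.
fn gc_drop_at_least(
    chunk_ids_per_min_row_id: &BTreeMap<u64, Vec<u64>>, // min RowId -> chunk ids
    size_of: impl Fn(u64) -> u64,                       // chunk id -> byte size
    num_bytes_to_drop: u64,
) -> Vec<u64> {
    let mut dropped = Vec::new();
    let mut freed = 0u64;
    for (_min_row_id, chunk_ids) in chunk_ids_per_min_row_id {
        for &id in chunk_ids {
            if freed >= num_bytes_to_drop {
                return dropped;
            }
            freed += size_of(id);
            dropped.push(id);
        }
    }
    dropped
}
```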
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The only criterion for returning a chunk is that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The only criterion for returning a chunk is that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions about e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
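The compaction invariant stated above — the incoming chunk must appear among the compaction sources — can be modeled with simplified stand-in types (plain ids instead of the real `Chunk`/`ChunkCompactionReport` structs):

```rust
// Minimal model of the add/delete diff semantics described above.
// Types are simplified stand-ins, not the real re_chunk_store structs.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Kind { Addition, Deletion }

struct Diff {
    kind: Kind,
    chunk_id: u64,
    // Source chunk ids, present only if an addition triggered a compaction.
    compacted_srcs: Option<Vec<u64>>,
}

impl Diff {
    /// The documented invariant: on compaction, the incoming chunk is
    /// guaranteed to be among the compaction sources.
    fn invariant_holds(&self) -> bool {
        match &self.compacted_srcs {
            Some(srcs) => self.kind == Kind::Addition && srcs.contains(&self.chunk_id),
            None => true,
        }
    }
}

fn main() {
    let ok = Diff { kind: Kind::Addition, chunk_id: 7, compacted_srcs: Some(vec![3, 7]) };
    let bad = Diff { kind: Kind::Addition, chunk_id: 7, compacted_srcs: Some(vec![3, 4]) };
    assert!(ok.invariant_holds());
    assert!(!bad.invariant_holds());
}
```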
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
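A protection check like `is_chunk_protected` plausibly reduces to an interval-overlap test against `protected_time_ranges`. This sketch uses plain `(min, max)` tuples in place of `Timeline`/`ResolvedTimeRange`; it is an assumption about the semantics, not the actual implementation:

```rust
// Hedged sketch: a chunk is protected from GC if its time range overlaps
// any protected range. Tuples stand in for the real range types.
fn overlaps(a: (i64, i64), b: (i64, i64)) -> bool {
    a.0 <= b.1 && b.0 <= a.1
}

fn is_chunk_protected(chunk_range: (i64, i64), protected: &[(i64, i64)]) -> bool {
    protected.iter().any(|&r| overlaps(chunk_range, r))
}

fn main() {
    let protected = [(100, 200)];
    assert!(is_chunk_protected((150, 300), &protected)); // overlaps -> kept
    assert!(!is_chunk_protected((300, 400), &protected)); // disjoint -> collectible
}
```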
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that print helpful messages and send anonymous analytics.
NOTE: only install these in binaries!
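A panic hook of the kind `install_panic_hook` installs can be sketched with `std::panic` alone. The real `re_crash_handler` also reports build info and sends analytics; this stand-in just records that the hook fired and chains to the previous hook:

```rust
use std::panic;
use std::sync::atomic::{AtomicBool, Ordering};

// Records whether any panic has been observed by our hook.
static PANICKED: AtomicBool = AtomicBool::new(false);

// Sketch of a crash-handler-style panic hook (not the re_crash_handler code).
fn install_panic_hook() {
    let previous = panic::take_hook();
    panic::set_hook(Box::new(move |info| {
        PANICKED.store(true, Ordering::SeqCst);
        previous(info); // chain to the previously installed hook
    }));
}

fn main() {
    install_panic_hook();
    let _ = panic::catch_unwind(|| panic!("boom"));
    assert!(PANICKED.load(Ordering::SeqCst));
}
```

As the docs note, process-wide hooks like this belong in binaries, not libraries.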
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out; only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
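The per-chunk binary search described above maps directly onto the standard library's `partition_point` on a sorted index column; this is a sketch of the idea, not the actual cursor code:

```rust
// Find the first row whose index value is >= `index_value` in a sorted
// index column. `partition_point` returns the first index where the
// predicate flips from true to false, i.e. a classic lower-bound search.
fn seek(sorted_index: &[i64], index_value: i64) -> usize {
    sorted_index.partition_point(|&v| v < index_value)
}

fn main() {
    let col = [1, 3, 5, 7];
    assert_eq!(seek(&col, 5), 2); // exact hit
    assert_eq!(seek(&col, 4), 2); // first value past a missing index
    assert_eq!(seek(&col, 9), 4); // past the end: nothing left to yield
}
```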
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
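The "ascendingly sorted and deduped" guarantee above is the classic sort-then-dedup construction; a sketch of how such a column could be built from raw per-chunk index values (not the actual query code):

```rust
// Produce the unique, ascending index values from an arbitrary collection.
fn unique_index_values(mut all: Vec<i64>) -> Vec<i64> {
    all.sort_unstable(); // dedup only removes *consecutive* duplicates,
    all.dedup();         // so sorting must come first.
    all
}

fn main() {
    assert_eq!(unique_index_values(vec![3, 1, 3, 2, 1]), vec![1, 2, 3]);
}
```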
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
get a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles run loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
index c8529958e388..0bc8146a0437 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkIdSetPerTime.html
@@ -1,30 +1,30 @@
ChunkIdSetPerTime in re_chunk_store::store - Rust Struct re_chunk_store::store::ChunkIdSetPerTime
source · pub struct ChunkIdSetPerTime {
pub(crate) max_interval_length: u64,
- pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
- pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}
Fields§
§max_interval_length: u64
Keeps track of the longest interval being currently stored in the two maps below.
This is used to bound the backwards linear walk when looking for overlapping chunks in
latest-at queries.
See ChunkStore::latest_at
implementation comments for more details.
-§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific start time.
+§per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific start time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the first timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the first timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the first timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
-§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific end time.
+§per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific end time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an
(entity, timeline, component)
index, that would be the last timestamp at which this
-Chunk
contains data for this particular component on this particular timeline (see
-Chunk::time_range_per_component
).
-- For an
(entity, timeline)
index, that would be the last timestamp at which this Chunk
-contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range
).
+[Chunk
] contains data for this particular component on this particular timeline (see
+[Chunk::time_range_per_component
]).
+- For an
(entity, timeline)
index, that would be the last timestamp at which this [Chunk
]
+contains data for any component on this particular timeline (see [re_chunk::TimeColumn::time_range
]).
Trait Implementations§
source§impl Clone for ChunkIdSetPerTime
source§fn clone(&self) -> ChunkIdSetPerTime
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkIdSetPerTime
source§impl Default for ChunkIdSetPerTime
source§fn default() -> ChunkIdSetPerTime
Returns the “default value” for a type. Read moreAuto Trait Implementations§
§impl Freeze for ChunkIdSetPerTime
§impl RefUnwindSafe for ChunkIdSetPerTime
§impl Send for ChunkIdSetPerTime
§impl Sync for ChunkIdSetPerTime
§impl Unpin for ChunkIdSetPerTime
§impl UnwindSafe for ChunkIdSetPerTime
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
index b2fd5a8a0512..812f0fa07a6b 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fit with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscriber subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statitistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
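The GC semantics described above (chunk-level, driven by the smallest RowId of each chunk, i.e. client wall-clock order) can be sketched with toy types. This is a minimal illustration, not the real `re_chunk_store` implementation: `RowId`/`ChunkId` are plain `u64` stand-ins and `elect_gc_victims` is a hypothetical helper.

```rust
use std::collections::BTreeMap;

// Toy stand-ins for the real RowId / ChunkId types.
type RowId = u64;
type ChunkId = u64;

/// Picks chunks to drop in ascending min-RowId order (oldest client
/// wall-clock first) until at least `num_bytes_to_drop` is reclaimed.
fn elect_gc_victims(
    chunk_ids_per_min_row_id: &BTreeMap<RowId, Vec<ChunkId>>,
    chunk_size_bytes: &BTreeMap<ChunkId, u64>,
    num_bytes_to_drop: u64,
) -> Vec<ChunkId> {
    let mut dropped = Vec::new();
    let mut reclaimed = 0u64;
    // BTreeMap iterates in ascending key order: global RowId order.
    for chunk_ids in chunk_ids_per_min_row_id.values() {
        for &chunk_id in chunk_ids {
            if reclaimed >= num_bytes_to_drop {
                return dropped;
            }
            reclaimed += chunk_size_bytes.get(&chunk_id).copied().unwrap_or(0);
            dropped.push(chunk_id);
        }
    }
    dropped
}

fn main() {
    let order = BTreeMap::from([(10, vec![1]), (20, vec![2]), (30, vec![3])]);
    let sizes = BTreeMap::from([(1, 100), (2, 100), (3, 100)]);
    // Dropping 150 bytes takes the two chunks with the smallest min-RowIds.
    assert_eq!(elect_gc_victims(&order, &sizes, 150), vec![1, 2]);
}
```

Because the order is global across all timelines, data is dropped fairly and deterministically rather than per-timeline.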
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
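The two-phase query model documented above (a coarse phase that returns every possibly-relevant chunk, then a fine phase where the caller filters down to the winning row, in the spirit of `Chunk::latest_at`) can be sketched as follows. All types and function names here are illustrative stand-ins, not the real `re_chunk_store` API.

```rust
// A toy chunk: just an id and (time, value) rows.
struct Chunk {
    id: u64,
    rows: Vec<(i64, &'static str)>,
}

// Coarse phase: chunks may temporally overlap, so several can be
// relevant to a single latest-at query at once.
fn latest_at_relevant_chunks(chunks: &[Chunk], query_time: i64) -> Vec<&Chunk> {
    chunks
        .iter()
        .filter(|c| c.rows.iter().any(|&(t, _)| t <= query_time))
        .collect()
}

// Fine phase: the caller filters further to find the exact winning row.
fn latest_at(chunks: &[Chunk], query_time: i64) -> Option<(i64, &'static str)> {
    latest_at_relevant_chunks(chunks, query_time)
        .iter()
        .flat_map(|c| c.rows.iter().copied())
        .filter(|&(t, _)| t <= query_time)
        .max_by_key(|&(t, _)| t)
}

fn main() {
    let chunks = vec![
        Chunk { id: 1, rows: vec![(10, "a"), (30, "b")] }, // overlaps chunk 2
        Chunk { id: 2, rows: vec![(20, "c"), (40, "d")] },
    ];
    // Both chunks are relevant, but only one row wins.
    assert_eq!(latest_at_relevant_chunks(&chunks, 25).len(), 2);
    assert_eq!(latest_at(&chunks, 25), Some((20, "c")));
}
```

The store stays oblivious to the rows inside each chunk; only the fine phase looks at them.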
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
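The `ChunkStoreGeneration` accessor documented above exists so callers can detect whether the store was modified since they last queried it. A minimal sketch of that cache-invalidation pattern, using a plain counter as a stand-in for the real opaque generation type (all names here are illustrative):

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Generation(u64);

struct Store {
    generation: Generation,
    data: Vec<i32>,
}

impl Store {
    fn generation(&self) -> Generation {
        self.generation
    }
    fn insert(&mut self, v: i32) {
        self.data.push(v);
        self.generation.0 += 1; // every mutation bumps the generation
    }
    fn sum(&self) -> i32 {
        self.data.iter().sum()
    }
}

struct CachedSum {
    generation: Generation,
    value: i32,
}

fn cached_sum(store: &Store, cache: &mut Option<CachedSum>) -> i32 {
    match cache {
        // Generation unchanged: the cached result is still valid.
        Some(c) if c.generation == store.generation() => c.value,
        // Otherwise recompute and remember the generation we saw.
        _ => {
            let value = store.sum();
            *cache = Some(CachedSum { generation: store.generation(), value });
            value
        }
    }
}

fn main() {
    let mut store = Store { generation: Generation(0), data: vec![] };
    let mut cache = None;
    store.insert(1);
    store.insert(2);
    assert_eq!(cached_sum(&store, &mut cache), 3);
    store.insert(4);
    assert_eq!(cached_sum(&store, &mut cache), 7); // generation changed → recompute
}
```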
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
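The `insert_chunk` rules documented above (unsorted chunks fail with an error; duplicated `ChunkId`s and empty chunks are silent no-ops) can be modeled with toy types. This is a behavioral sketch only; the real method takes `&Arc<Chunk>` and returns `ChunkStoreResult<Vec<ChunkStoreEvent>>`.

```rust
use std::collections::BTreeMap;

type ChunkId = u64;

#[derive(Clone)]
struct Chunk {
    id: ChunkId,
    times: Vec<i64>,
}

impl Chunk {
    fn is_empty(&self) -> bool {
        self.times.is_empty()
    }
    fn is_sorted(&self) -> bool {
        self.times.windows(2).all(|w| w[0] <= w[1])
    }
}

#[derive(Debug, PartialEq)]
enum InsertOutcome {
    Inserted,
    NoOp,
}

struct Store {
    chunks: BTreeMap<ChunkId, Chunk>,
}

impl Store {
    fn insert_chunk(&mut self, chunk: &Chunk) -> Result<InsertOutcome, String> {
        if !chunk.is_sorted() {
            return Err("unsorted chunk".to_owned()); // hard error
        }
        if chunk.is_empty() || self.chunks.contains_key(&chunk.id) {
            return Ok(InsertOutcome::NoOp); // empty or duplicate id: no-op
        }
        self.chunks.insert(chunk.id, chunk.clone());
        Ok(InsertOutcome::Inserted)
    }
}

fn main() {
    let mut store = Store { chunks: BTreeMap::new() };
    let chunk = Chunk { id: 1, times: vec![1, 2, 3] };
    assert_eq!(store.insert_chunk(&chunk), Ok(InsertOutcome::Inserted));
    assert_eq!(store.insert_chunk(&chunk), Ok(InsertOutcome::NoOp)); // duplicate id
    assert!(store.insert_chunk(&Chunk { id: 2, times: vec![3, 1] }).is_err());
}
```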
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
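The three compaction cut-offs documented above interact as follows: a chunk stops being a compaction target once it crosses the byte threshold, the row threshold (time-sorted chunks), or the stricter unsorted row threshold. A sketch of that decision, where the field names mirror `ChunkStoreConfig` but the struct and the threshold values are illustrative stand-ins (only the 8 MiB byte default is stated in the docs above):

```rust
struct Config {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

/// Returns whether a chunk of the given size may still be compacted further.
fn can_still_be_compacted(config: &Config, num_bytes: u64, num_rows: u64, time_sorted: bool) -> bool {
    let max_rows = if time_sorted {
        config.chunk_max_rows
    } else {
        config.chunk_max_rows_if_unsorted // unsorted chunks get a stricter cap
    };
    num_bytes < config.chunk_max_bytes && num_rows < max_rows
}

fn main() {
    let config = Config {
        chunk_max_bytes: 8 * 1024 * 1024, // 8 MiB, the documented default
        chunk_max_rows: 4096,             // illustrative values, not defaults
        chunk_max_rows_if_unsorted: 1024,
    };
    assert!(can_still_be_compacted(&config, 1024, 100, true));
    assert!(!can_still_be_compacted(&config, 1024, 2048, false)); // over unsorted row cap
    assert!(!can_still_be_compacted(&config, 16 * 1024 * 1024, 10, true)); // over byte cap
}
```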
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions about e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
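The `ChunkStoreDiff` semantics above (a subscriber sees the pre-compaction chunk on addition, plus a report of which source chunks were merged into a brand-new destination) can be modeled with plain ids. `StoreDiff`, `CompactionReport`, and `apply_diff` are hypothetical names, not the real re_chunk_store API:

```rust
// Illustrative sketch of the event model above, with u64 ids standing in for chunks.

#[derive(Clone, Copy, PartialEq)]
enum StoreDiffKind {
    Addition,
    Deletion,
}

struct CompactionReport {
    sources: Vec<u64>, // chunk ids merged away by the compaction
    destination: u64,  // the brand-new chunk id they were merged into
}

struct StoreDiff {
    kind: StoreDiffKind,
    chunk: u64,                          // the chunk as it was *before* compaction
    compacted: Option<CompactionReport>, // only set when an addition triggered compaction
}

fn apply_diff(live: &mut Vec<u64>, diff: &StoreDiff) {
    match diff.kind {
        StoreDiffKind::Addition => match &diff.compacted {
            Some(report) => {
                // `diff.chunk` is guaranteed to be among the sources, so dropping
                // all sources and adding the destination keeps the view consistent.
                live.retain(|id| !report.sources.contains(id));
                live.push(report.destination);
            }
            None => live.push(diff.chunk),
        },
        StoreDiffKind::Deletion => live.retain(|id| *id != diff.chunk),
    }
}

fn main() {
    let mut live = Vec::new();
    apply_diff(&mut live, &StoreDiff { kind: StoreDiffKind::Addition, chunk: 1, compacted: None });
    apply_diff(&mut live, &StoreDiff {
        kind: StoreDiffKind::Addition,
        chunk: 2,
        compacted: Some(CompactionReport { sources: vec![1, 2], destination: 3 }),
    });
    assert_eq!(live, vec![3]);
}
```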
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
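The `protected_time_ranges` rule above ("do not remove any data within these time ranges") amounts to an overlap test: a chunk is protected from garbage collection if its time range intersects any protected range. A minimal sketch, with illustrative names rather than the real `GarbageCollectionOptions::is_chunk_protected` signature:

```rust
// Simplified protection check: chunk cannot be removed if its time range
// overlaps any protected range. Types are hypothetical stand-ins.

#[derive(Clone, Copy)]
struct TimeRange {
    min: i64, // inclusive
    max: i64, // inclusive
}

impl TimeRange {
    fn intersects(self, other: TimeRange) -> bool {
        self.min <= other.max && other.min <= self.max
    }
}

fn is_chunk_protected(protected: &[TimeRange], chunk_range: TimeRange) -> bool {
    protected.iter().any(|r| r.intersects(chunk_range))
}

fn main() {
    let protected = [TimeRange { min: 10, max: 20 }];
    // Partial overlap is enough to protect the chunk.
    assert!(is_chunk_protected(&protected, TimeRange { min: 15, max: 30 }));
    assert!(!is_chunk_protected(&protected, TimeRange { min: 21, max: 30 }));
}
```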
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
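The schema ordering guarantee above (time columns first, then component columns, each group in lexical order) can be sketched with stand-in column descriptors. `ColumnKind` and `sort_columns` are illustrative names, not the real `ColumnDescriptor` API:

```rust
// Sketch of the ordering rule above: time columns before component columns,
// lexical order within each group.

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum ColumnKind {
    Time,      // sorts first (derive order follows declaration order)
    Component, // sorts second
}

fn sort_columns(columns: &mut [(ColumnKind, &str)]) {
    // Group by kind, then break ties lexically by column name.
    columns.sort_by(|a, b| a.0.cmp(&b.0).then_with(|| a.1.cmp(b.1)));
}

fn main() {
    let mut cols = [
        (ColumnKind::Component, "Color"),
        (ColumnKind::Time, "log_time"),
        (ColumnKind::Time, "frame_nr"),
        (ColumnKind::Component, "Radius"),
    ];
    sort_columns(&mut cols);
    let names: Vec<&str> = cols.iter().map(|c| c.1).collect();
    assert_eq!(names, ["frame_nr", "log_time", "Color", "Radius"]);
}
```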
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
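The `seek_to_row` plus `next_row` pattern above is what pagination is built on: seek to an offset, then pull up to a page's worth of rows. A minimal sketch of just the cursor mechanics; `MockQueryHandle` is a hypothetical stand-in for the real `QueryHandle`:

```rust
// Pagination sketch: only the row cursor is modeled, not the actual query state.

struct MockQueryHandle {
    num_rows: usize,
    cur_row: usize,
}

impl MockQueryHandle {
    fn seek_to_row(&mut self, row_idx: usize) {
        // Seeking past the end simply exhausts the handle.
        self.cur_row = row_idx.min(self.num_rows);
    }

    fn next_row(&mut self) -> Option<usize> {
        if self.cur_row < self.num_rows {
            let row = self.cur_row;
            self.cur_row += 1;
            Some(row)
        } else {
            None
        }
    }

    /// One page of results: seek, then take up to `len` rows.
    fn page(&mut self, start: usize, len: usize) -> Vec<usize> {
        self.seek_to_row(start);
        (0..len).map_while(|_| self.next_row()).collect()
    }
}

fn main() {
    let mut handle = MockQueryHandle { num_rows: 10, cur_row: 0 };
    assert_eq!(handle.page(0, 3), vec![0, 1, 2]);
    assert_eq!(handle.page(8, 5), vec![8, 9]); // short final page
}
```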
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
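The state described above keeps an atomic `cur_row` cursor into the sorted, deduped `unique_index_values`; seeking to an index value is a binary search for the first value at or past it (matching the `seek_to_index_value` semantics documented earlier). A simplified sketch with hypothetical types:

```rust
// Cursor-over-unique-index-values sketch; not the real QueryHandleState.
use std::sync::atomic::{AtomicU64, Ordering};

struct State {
    unique_index_values: Vec<i64>, // guaranteed ascendingly sorted and deduped
    cur_row: AtomicU64,
}

impl State {
    fn seek_to_index_value(&self, index_value: i64) {
        // First row whose index value is >= the requested one; if the value is
        // absent, this lands on the first value past it, if any.
        let row = self.unique_index_values.partition_point(|&v| v < index_value);
        self.cur_row.store(row as u64, Ordering::Relaxed);
    }
}

fn main() {
    let state = State {
        unique_index_values: vec![10, 20, 30],
        cur_row: AtomicU64::new(0),
    };
    state.seek_to_index_value(15); // 15 absent: lands on 20
    assert_eq!(state.cur_row.load(Ordering::Relaxed), 1);
    state.seek_to_index_value(40); // past the end: handle exhausted
    assert_eq!(state.cur_row.load(Ordering::Relaxed), 3);
}
```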
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
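As a rough illustration of that seek, here is a standalone sketch; `seek_cursor` and its slice-based time column are hypothetical stand-ins for the real per-chunk cursors, not the actual implementation:

```rust
/// Advance a cursor so the next row yielded corresponds to `index_value`.
/// Returns the first position whose time is >= `index_value`.
/// Only worth running if the chunk's time range contains `index_value`.
fn seek_cursor(sorted_times: &[i64], index_value: i64) -> usize {
    // Binary search over the (sorted) time column: O(log n) per chunk.
    sorted_times.partition_point(|&t| t < index_value)
}

fn main() {
    let times = [1, 3, 5, 7];
    assert_eq!(seek_cursor(&times, 4), 2); // lands on the row with time 5
    assert_eq!(seek_cursor(&times, 7), 3);
    assert_eq!(seek_cursor(&times, 9), 4); // past the end: nothing left to yield
}
```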
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
get a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
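A minimal sketch of such a hash-to-path side table, using std types in place of the actual `IntMap`/`EntityPathHash` (all names here are illustrative):

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical stand-in for `EntityPathHash`: a 64-bit hash of the path.
fn entity_path_hash(path: &str) -> u64 {
    let mut h = DefaultHasher::new();
    path.hash(&mut h);
    h.finish()
}

fn main() {
    // Indices store only the hash; this map translates back to the path.
    let mut entity_path_from_hash: HashMap<u64, String> = HashMap::new();
    for path in ["world", "world/points"] {
        entity_path_from_hash.insert(entity_path_hash(path), path.to_owned());
    }
    let h = entity_path_hash("world/points");
    assert_eq!(
        entity_path_from_hash.get(&h).map(String::as_str),
        Some("world/points")
    );
}
```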
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles arbitrarily loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
Struct re_chunk_store::store::ChunkIdSetPerTime
source · pub struct ChunkIdSetPerTime {
pub(crate) max_interval_length: u64,
- pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
- pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
+ pub(crate) per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>,
}
Fields§
§max_interval_length: u64
Keeps track of the longest interval being currently stored in the two maps below.
This is used to bound the backwards linear walk when looking for overlapping chunks in latest-at queries.
See ChunkStore::latest_at
implementation comments for more details.
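The bounded backwards walk can be sketched as follows, assuming a simplified index of `start -> [(chunk_id, end)]` in place of the real `BTreeMap<TimeInt, BTreeSet<ChunkId>>` (illustrative only, not the store's actual code):

```rust
use std::collections::BTreeMap;

/// Find chunk ids whose [start, end] interval contains `query_time`.
/// `max_interval_length` bounds how far back we must look: no chunk
/// starting earlier than `query_time - max_interval_length` can still
/// overlap `query_time`.
fn overlapping_chunks(
    per_start_time: &BTreeMap<i64, Vec<(u32, i64)>>, // start -> [(chunk_id, end)]
    max_interval_length: i64,
    query_time: i64,
) -> Vec<u32> {
    let lower = query_time.saturating_sub(max_interval_length);
    per_start_time
        .range(lower..=query_time)
        .flat_map(|(_, chunks)| chunks.iter())
        .filter(|&&(_, end)| end >= query_time)
        .map(|&(id, _)| id)
        .collect()
}

fn main() {
    let mut idx = BTreeMap::new();
    idx.insert(0_i64, vec![(1_u32, 4_i64)]); // chunk 1 covers [0, 4]
    idx.insert(3, vec![(2, 9)]);             // chunk 2 covers [3, 9]
    // The longest stored interval is 6 wide, so the walk is bounded by it.
    assert_eq!(overlapping_chunks(&idx, 6, 5), vec![2]);
}
```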
per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific start time.
per_start_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific start time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an (entity, timeline, component) index, that would be the first timestamp at which this Chunk contains data for this particular component on this particular timeline (see Chunk::time_range_per_component).
- For an (entity, timeline) index, that would be the first timestamp at which this Chunk contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range).
+per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
ChunkId
s organized by their most specific end time.
per_end_time: BTreeMap<TimeInt, BTreeSet<ChunkId>>
[ChunkId
]s organized by their most specific end time.
What “most specific” means depends on the context in which the ChunkIdSetPerTime
was instantiated, e.g.:
- For an (entity, timeline, component) index, that would be the last timestamp at which this Chunk contains data for this particular component on this particular timeline (see Chunk::time_range_per_component).
- For an (entity, timeline) index, that would be the last timestamp at which this Chunk contains data for any component on this particular timeline (see re_chunk::TimeColumn::time_range).
+Trait Implementations§
source§impl Clone for ChunkIdSetPerTime
impl Clone for ChunkIdSetPerTime
source§fn clone(&self) -> ChunkIdSetPerTime
fn clone(&self) -> ChunkIdSetPerTime
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for ChunkIdSetPerTime
impl Debug for ChunkIdSetPerTime
source§impl Default for ChunkIdSetPerTime
impl Default for ChunkIdSetPerTime
source§fn default() -> ChunkIdSetPerTime
fn default() -> ChunkIdSetPerTime
Auto Trait Implementations§
impl Freeze for ChunkIdSetPerTime
impl RefUnwindSafe for ChunkIdSetPerTime
impl Send for ChunkIdSetPerTime
impl Sync for ChunkIdSetPerTime
impl Unpin for ChunkIdSetPerTime
impl UnwindSafe for ChunkIdSetPerTime
Blanket Implementations§
source§impl<T> Az for T
impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
index b2fd5a8a0512..812f0fa07a6b 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
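A toy illustration of why the values are vecs, using plain std types (illustrative, not the store's actual code): a `BTreeMap` has unique keys, so two chunks sharing the same smallest RowId can only coexist if the value holds multiple ids.

```rust
use std::collections::BTreeMap;

fn main() {
    // min RowId -> all chunks whose smallest RowId is that value.
    let mut chunk_ids_per_min_row_id: BTreeMap<u64, Vec<u32>> = BTreeMap::new();
    // Two chunks with a colliding min RowId still both get indexed:
    chunk_ids_per_min_row_id.entry(42).or_default().push(1);
    chunk_ids_per_min_row_id.entry(42).or_default().push(2);
    assert_eq!(chunk_ids_per_min_row_id[&42], vec![1, 2]);
    // Iterating the map then yields chunks in global (RowId) data order.
}
```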
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
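A standalone sketch of that chunk-level, min-RowId-ordered policy (the tuple layout and byte-budget loop are illustrative, not the actual GC implementation):

```rust
/// Drop whole chunks in ascending min-RowId order until the byte target
/// is met, skipping protected chunks. Input is assumed sorted by min RowId,
/// i.e. client wall-clock order.
fn chunks_to_drop(
    chunks: &[(u64, u64, bool)], // (min_row_id, num_bytes, protected)
    num_bytes_to_drop: u64,
) -> Vec<u64> {
    let mut dropped = Vec::new();
    let mut freed = 0;
    for &(min_row_id, num_bytes, protected) in chunks {
        if freed >= num_bytes_to_drop {
            break; // budget met: stop deterministically
        }
        if !protected {
            freed += num_bytes;
            dropped.push(min_row_id);
        }
    }
    dropped
}

fn main() {
    let chunks = [(1, 100, false), (2, 100, true), (3, 100, false), (4, 100, false)];
    // Oldest unprotected chunks go first, across all timelines at once.
    assert_eq!(chunks_to_drop(&chunks, 150), vec![1, 3]);
}
```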
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
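The three `insert_chunk` rules above can be sketched with std types only. This is an illustrative stand-in, not the real `re_chunk_store` implementation: `Chunk`, `ChunkId`, and the store internals are simplified, and the boolean return stands in for the event list:

```rust
use std::collections::BTreeMap;

// Simplified stand-ins for the real Chunk / ChunkId types.
#[derive(Clone)]
struct Chunk {
    id: u64,        // stand-in for ChunkId
    sorted: bool,   // stand-in for Chunk::is_sorted
    num_rows: usize,
}

#[derive(Default)]
struct Store {
    chunks: BTreeMap<u64, Chunk>, // stand-in for chunks_per_chunk_id
}

impl Store {
    /// Mirrors the documented contract: unsorted chunks error out,
    /// duplicated ids and empty chunks are no-ops.
    fn insert_chunk(&mut self, chunk: &Chunk) -> Result<bool, String> {
        if !chunk.sorted {
            return Err("unsorted chunk".to_owned());
        }
        if chunk.num_rows == 0 || self.chunks.contains_key(&chunk.id) {
            return Ok(false); // no-op: the store was not modified
        }
        self.chunks.insert(chunk.id, chunk.clone());
        Ok(true) // store modified: subscribers would be notified here
    }
}

fn main() {
    let mut store = Store::default();
    let c = Chunk { id: 1, sorted: true, num_rows: 10 };
    assert_eq!(store.insert_chunk(&c), Ok(true));
    assert_eq!(store.insert_chunk(&c), Ok(false)); // duplicate id: no-op
    assert_eq!(store.insert_chunk(&Chunk { id: 3, sorted: true, num_rows: 0 }), Ok(false)); // empty: no-op
    assert!(store.insert_chunk(&Chunk { id: 2, sorted: false, num_rows: 5 }).is_err());
}
```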
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
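The byte and row thresholds discussed in this config can be sketched as a simple eligibility check. A std-only illustration, not the real `ChunkStoreConfig` logic; the 8 MiB byte default comes from the text above, while the row values here are hypothetical:

```rust
/// Simplified stand-in for the compaction thresholds described above.
struct CompactionConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

impl CompactionConfig {
    /// A chunk may only be elected for further compaction while it sits
    /// below every applicable threshold; unsorted chunks get the stricter
    /// row limit.
    fn is_compactable(&self, num_bytes: u64, num_rows: u64, time_sorted: bool) -> bool {
        let max_rows = if time_sorted {
            self.chunk_max_rows
        } else {
            self.chunk_max_rows_if_unsorted
        };
        num_bytes < self.chunk_max_bytes && num_rows < max_rows
    }
}

fn main() {
    let config = CompactionConfig {
        chunk_max_bytes: 8 * 1024 * 1024, // the documented 8 MiB default
        chunk_max_rows: 4096,             // hypothetical row threshold
        chunk_max_rows_if_unsorted: 1024, // hypothetical, stricter when unsorted
    };
    assert!(config.is_compactable(1024, 100, true));
    assert!(!config.is_compactable(16 * 1024 * 1024, 100, true)); // over byte budget
    assert!(!config.is_compactable(1024, 2048, false));           // unsorted: stricter rows
}
```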
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
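The RowId-ordered collection strategy just described can be illustrated with std types only. This is a sketch, not the real GC: the `BTreeMap` keyed by a numeric min-RowId is a simplified stand-in for the store's `chunk_ids_per_min_row_id` index, and byte sizes are given directly:

```rust
use std::collections::BTreeMap;

/// Drop whole chunks in ascending min-RowId order (i.e. global data order)
/// until at least `num_bytes_to_drop` bytes have been reclaimed.
/// Returns the ids of the dropped chunks.
fn gc_drop_at_least_num_bytes(
    chunks_per_min_row_id: &mut BTreeMap<u64, (u64 /* chunk id */, u64 /* bytes */)>,
    num_bytes_to_drop: u64,
) -> Vec<u64> {
    let mut dropped = Vec::new();
    let mut reclaimed = 0;
    while reclaimed < num_bytes_to_drop {
        // BTreeMap iterates in key order, so the smallest min-RowId
        // (the oldest data by the clients' wall-clocks) goes first.
        let Some((&min_row_id, &(chunk_id, bytes))) = chunks_per_min_row_id.iter().next() else {
            break; // store is empty
        };
        chunks_per_min_row_id.remove(&min_row_id);
        reclaimed += bytes;
        dropped.push(chunk_id);
    }
    dropped
}

fn main() {
    let mut index = BTreeMap::from([(10, (1, 100)), (20, (2, 100)), (30, (3, 100))]);
    // Oldest data (smallest min-RowId) is dropped first, regardless of timeline.
    assert_eq!(gc_drop_at_least_num_bytes(&mut index, 150), vec![1, 2]);
    assert_eq!(index.len(), 1);
}
```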
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
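The "more than one relevant chunk" behavior described above can be sketched with std types only. A simplified illustration under stated assumptions: `ChunkRange` is a hypothetical stand-in for one chunk's time coverage on a single timeline, and the selection rule shown (keep every overlapping candidate reaching past the best start time) approximates, rather than reproduces, the real `ChunkIdSetPerTime`-based lookup:

```rust
/// Hypothetical stand-in: one chunk's time coverage on a single timeline.
#[derive(Clone, Copy, PartialEq, Debug)]
struct ChunkRange {
    id: u64,
    min_time: i64,
    max_time: i64,
}

/// Returns every chunk that may hold the latest value at or before `query_time`.
/// Because chunks are allowed to temporally overlap, more than one chunk can be
/// relevant; the caller must still inspect rows to find the final result.
fn latest_at_relevant(chunks: &[ChunkRange], query_time: i64) -> Vec<ChunkRange> {
    // The best candidate start is the largest min_time not after the query…
    let Some(best_min) = chunks
        .iter()
        .filter(|c| c.min_time <= query_time)
        .map(|c| c.min_time)
        .max()
    else {
        return Vec::new(); // nothing starts at or before the query time
    };
    // …but any overlapping chunk reaching past that point may also contain it.
    chunks
        .iter()
        .copied()
        .filter(|c| c.min_time <= query_time && c.max_time >= best_min)
        .collect()
}

fn main() {
    let chunks = [
        ChunkRange { id: 1, min_time: 0, max_time: 10 },
        ChunkRange { id: 2, min_time: 5, max_time: 15 },
        ChunkRange { id: 3, min_time: 20, max_time: 30 },
    ];
    // Chunks 1 and 2 overlap around t=12: both are relevant candidates.
    let hits: Vec<u64> = latest_at_relevant(&chunks, 12).iter().map(|c| c.id).collect();
    assert_eq!(hits, vec![1, 2]);
    // At t=25 only chunk 3 can hold the latest row.
    assert_eq!(latest_at_relevant(&chunks, 25).len(), 1);
    assert!(latest_at_relevant(&chunks, -1).is_empty());
}
```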
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions about e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. its ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
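The event flow described above (mutations return events, and subscribers can be notified of them) is a classic observer pattern. A minimal self-contained sketch, with hypothetical names standing in for `ChunkStoreDiff` and `ChunkStoreSubscriber`:

```rust
// Minimal sketch of the subscriber pattern described above. Types here are
// hypothetical; the real trait is `re_chunk_store::ChunkStoreSubscriber`.

#[derive(Debug, Clone)]
enum Diff {
    Addition(String), // chunk id
    Deletion(String),
}

trait Subscriber {
    /// Called with the events produced by each store mutation.
    fn on_events(&mut self, events: &[Diff]);
}

#[derive(Default)]
struct Counter {
    additions: usize,
    deletions: usize,
}

impl Subscriber for Counter {
    fn on_events(&mut self, events: &[Diff]) {
        for e in events {
            match e {
                Diff::Addition(_) => self.additions += 1,
                Diff::Deletion(_) => self.deletions += 1,
            }
        }
    }
}

fn main() {
    let mut c = Counter::default();
    // Pretend an insert and a GC pass each produced one event.
    c.on_events(&[Diff::Addition("chunk-a".into())]);
    c.on_events(&[Diff::Deletion("chunk-a".into())]);
    assert_eq!((c.additions, c.deletions), (1, 1));
}
```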
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
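The `protected_time_ranges` semantics above ("do not remove any data within these time ranges") amount to an overlap test between a chunk's time range and the protected ranges. A sketch under that assumption (the real check is `GarbageCollectionOptions::is_chunk_protected`; these types are stand-ins):

```rust
// Hypothetical sketch of the protection check described above: a chunk survives
// GC if its time range overlaps any protected range on the relevant timeline.

#[derive(Clone, Copy)]
struct TimeRange {
    min: i64, // inclusive
    max: i64, // inclusive
}

fn overlaps(a: TimeRange, b: TimeRange) -> bool {
    a.min <= b.max && b.min <= a.max
}

fn is_chunk_protected(chunk: TimeRange, protected: &[TimeRange]) -> bool {
    protected.iter().any(|&p| overlaps(chunk, p))
}

fn main() {
    let protected = [TimeRange { min: 100, max: 200 }];
    // Partial overlap is enough to protect the chunk.
    assert!(is_chunk_protected(TimeRange { min: 150, max: 300 }, &protected));
    // Entirely outside the protected window: eligible for collection.
    assert!(!is_chunk_protected(TimeRange { min: 0, max: 50 }, &protected));
}
```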
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
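The schema ordering guarantee above (time columns first in lexical order, then component columns in lexical order) can be sketched as a two-key sort. The descriptor type here is a hypothetical stand-in for `ColumnDescriptor`:

```rust
// Sketch of the column ordering guarantee described above: time columns first
// (lexically sorted), then component columns (lexically sorted).

#[derive(Debug, PartialEq, Eq, Clone)]
enum Column {
    Time(String),
    Component(String),
}

fn sort_schema(mut cols: Vec<Column>) -> Vec<Column> {
    cols.sort_by_key(|c| match c {
        Column::Time(name) => (0, name.clone()),      // group 0: timelines
        Column::Component(name) => (1, name.clone()), // group 1: components
    });
    cols
}

fn main() {
    let sorted = sort_schema(vec![
        Column::Component("Radius".into()),
        Column::Time("log_time".into()),
        Column::Component("Color".into()),
        Column::Time("frame_nr".into()),
    ]);
    assert_eq!(sorted[0], Column::Time("frame_nr".into()));
    assert_eq!(sorted[1], Column::Time("log_time".into()));
    assert_eq!(sorted[2], Column::Component("Color".into()));
    assert_eq!(sorted[3], Column::Component("Radius".into()));
}
```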
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out; only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
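Since `unique_index_values` is guaranteed to be ascendingly sorted and deduped, the `seek_to_index_value` behavior described above ("if the value isn't present, seek to the first index value past that point") is exactly what a lower-bound binary search gives you. A sketch using the standard library's `partition_point`:

```rust
// Sketch of seek-to-index-value semantics over a sorted, deduped slice:
// find the first position whose value is >= target, or None past the end.

fn seek(unique_index_values: &[i64], target: i64) -> Option<usize> {
    // partition_point returns the first index where the predicate is false,
    // i.e. the first value >= target (a lower-bound binary search).
    let idx = unique_index_values.partition_point(|&v| v < target);
    (idx < unique_index_values.len()).then_some(idx)
}

fn main() {
    let values = [10, 20, 40];
    assert_eq!(seek(&values, 20), Some(1)); // exact hit
    assert_eq!(seek(&values, 25), Some(2)); // absent: first value past it (40)
    assert_eq!(seek(&values, 50), None);    // past the end: nothing to seek to
}
```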
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out; only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
get a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles run arbitrarily loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
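The latest-at semantics described above can be modeled in a few lines; this is a hedged, self-contained sketch (simplified stand-in types, not the real `re_query` API): among all rows at or before the query time, pick the one with the greatest `(time, row_id)` key.

```rust
// Simplified stand-ins for the real index types.
type TimeInt = i64;
type RowId = u64;

// Latest-at over a set of (time, row_id, value) rows: the most recent row at
// or before `query_time`, ties on time broken by the higher RowId.
fn latest_at(
    rows: &[(TimeInt, RowId, &'static str)],
    query_time: TimeInt,
) -> Option<(TimeInt, RowId, &'static str)> {
    rows.iter()
        .copied()
        .filter(|&(t, _, _)| t <= query_time)
        .max_by_key(|&(t, row_id, _)| (t, row_id))
}

fn main() {
    let rows = [(10, 1, "a"), (20, 2, "b"), (20, 3, "c"), (30, 4, "d")];
    // At t=25, the latest value is the one logged at t=20 with the highest RowId.
    assert_eq!(latest_at(&rows, 25), Some((20, 3, "c")));
    // Nothing was logged at or before t=5.
    assert_eq!(latest_at(&rows, 5), None);
}
```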
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
index b2fd5a8a0512..812f0fa07a6b 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
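The RowId-driven ordering described above can be sketched with a self-contained model (not the actual `ChunkStore` code): chunks are indexed by their smallest RowId, and the collector always drops the globally oldest chunks first, independent of which timeline their data lives on.

```rust
use std::collections::BTreeMap;

// Simplified stand-ins: RowIds are u64s, ChunkIds are labels.
type RowId = u64;
type ChunkId = &'static str;

// Drop chunks in global data order, i.e. ascending smallest-RowId order.
fn gc_drop_oldest(
    chunk_ids_per_min_row_id: &mut BTreeMap<RowId, Vec<ChunkId>>,
    num_chunks_to_drop: usize,
) -> Vec<ChunkId> {
    let mut dropped = Vec::new();
    while dropped.len() < num_chunks_to_drop {
        // BTreeMap iterates in ascending key order: smallest RowId first.
        let Some((&min_row_id, _)) = chunk_ids_per_min_row_id.iter().next() else {
            break;
        };
        dropped.extend(chunk_ids_per_min_row_id.remove(&min_row_id).unwrap());
    }
    dropped
}

fn main() {
    let mut index =
        BTreeMap::from([(5, vec!["chunk_c"]), (1, vec!["chunk_a"]), (3, vec!["chunk_b"])]);
    assert_eq!(gc_drop_oldest(&mut index, 2), vec!["chunk_a", "chunk_b"]);
    assert_eq!(index.len(), 1); // only the newest chunk remains
}
```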
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
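The three insertion rules listed above can be captured in a hedged, self-contained model (simplified types, not the real `ChunkStore` API): an unsorted chunk is an error, while a duplicated ChunkId or an empty chunk is a silent no-op.

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq)]
enum InsertOutcome {
    Inserted,
    NoOp,
}

struct Chunk {
    id: u64,
    row_ids: Vec<u64>, // must be ascending for the chunk to count as sorted
}

fn insert_chunk(
    store: &mut BTreeMap<u64, Chunk>,
    chunk: Chunk,
) -> Result<InsertOutcome, &'static str> {
    // Rule 1: an unsorted chunk fails with an error.
    if !chunk.row_ids.windows(2).all(|w| w[0] <= w[1]) {
        return Err("unsorted chunk");
    }
    // Rules 2 & 3: empty chunks and duplicated ChunkIds are no-ops.
    if chunk.row_ids.is_empty() || store.contains_key(&chunk.id) {
        return Ok(InsertOutcome::NoOp);
    }
    store.insert(chunk.id, chunk);
    Ok(InsertOutcome::Inserted)
}

fn main() {
    let mut store = BTreeMap::new();
    assert_eq!(insert_chunk(&mut store, Chunk { id: 1, row_ids: vec![1, 2, 3] }), Ok(InsertOutcome::Inserted));
    assert_eq!(insert_chunk(&mut store, Chunk { id: 1, row_ids: vec![4, 5] }), Ok(InsertOutcome::NoOp)); // duplicate id
    assert_eq!(insert_chunk(&mut store, Chunk { id: 2, row_ids: vec![] }), Ok(InsertOutcome::NoOp)); // empty
    assert_eq!(insert_chunk(&mut store, Chunk { id: 3, row_ids: vec![9, 7] }), Err("unsorted chunk"));
}
```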
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
index e9aef68f0ae8..0e66dee09af3 100644
--- a/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
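The compaction thresholds described in this config (byte budget, row budget, and a separate row budget for non time-sorted chunks) can be sketched as a simple eligibility check. This is a hedged sketch with assumed concrete values; the field names mirror the `ChunkStoreConfig` fields above, but the numeric defaults other than the 8 MiB byte threshold are illustrative only:

```rust
struct CompactionConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

// A chunk stops being a compaction candidate once it crosses the byte budget
// or the applicable row budget; unsorted chunks get their own row budget.
fn can_still_be_compacted(
    cfg: &CompactionConfig,
    num_bytes: u64,
    num_rows: u64,
    is_time_sorted: bool,
) -> bool {
    let max_rows = if is_time_sorted {
        cfg.chunk_max_rows
    } else {
        cfg.chunk_max_rows_if_unsorted
    };
    num_bytes < cfg.chunk_max_bytes && num_rows < max_rows
}

fn main() {
    let cfg = CompactionConfig {
        chunk_max_bytes: 8 * 1024 * 1024, // the 8 MiB default mentioned above
        chunk_max_rows: 4096,             // illustrative value
        chunk_max_rows_if_unsorted: 1024, // illustrative value
    };
    assert!(can_still_be_compacted(&cfg, 1024, 100, true));
    assert!(!can_still_be_compacted(&cfg, 16 * 1024 * 1024, 100, true)); // over the byte budget
    assert!(!can_still_be_compacted(&cfg, 1024, 2048, false)); // over the unsorted row budget
}
```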
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
index 1a28abb54829..cef3b1a5088c 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentName.html
@@ -1 +1 @@
-ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentName in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
index 30081bdac69d..415ce9aa30f1 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdPerComponentNamePerEntity.html
@@ -1 +1 @@
-ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
+ChunkIdPerComponentNamePerEntity in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
index 24b0b7674f58..1e9fc0458a50 100644
--- a/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
+++ b/docs/rust/head/re_chunk_store/store/type.ChunkIdSet.html
@@ -1 +1 @@
-ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
+ChunkIdSet in re_chunk_store::store - Rust Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
index 0aa9eda6241c..7b24078ef90a 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkCompactionReport.html
@@ -1,9 +1,9 @@
ChunkCompactionReport in re_chunk_store - Rust Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
- pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
- pub new_chunk: Arc<Chunk>,
-}
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
-§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
+ pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub new_chunk: Arc<Chunk>,
+}Expand description
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
+Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
+§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
§impl Freeze for ChunkCompactionReport
§impl !RefUnwindSafe for ChunkCompactionReport
§impl Send for ChunkCompactionReport
§impl Sync for ChunkCompactionReport
§impl Unpin for ChunkCompactionReport
§impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
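The chunk-level, RowId-driven policy described above can be sketched as a toy model. All names and shapes here are illustrative stand-ins, not the real `re_chunk_store` API: `RowId`/`ChunkId` are simplified to `u64`, and chunk sizes are passed in as a plain map.

```rust
use std::collections::{BTreeMap, BTreeSet};

// Toy stand-ins for the real RowId/ChunkId types (assumption: u64 is
// enough to illustrate ordering; the real IDs are richer types).
type RowId = u64;
type ChunkId = u64;

/// Sketch of the documented GC policy: walk chunks in ascending
/// minimum-RowId order (i.e. global data order, the clients' wall-clock
/// order) and drop them until enough bytes have been reclaimed,
/// skipping protected chunks.
fn gc_drop_at_least_num_bytes(
    chunk_ids_per_min_row_id: &BTreeMap<RowId, Vec<ChunkId>>,
    chunk_sizes: &BTreeMap<ChunkId, u64>,
    protected: &BTreeSet<ChunkId>,
    mut num_bytes_to_drop: i64,
) -> Vec<ChunkId> {
    let mut dropped = Vec::new();
    // BTreeMap iteration is ordered by key, so this visits chunks by
    // their smallest RowId, across all timelines at once.
    for chunk_ids in chunk_ids_per_min_row_id.values() {
        for &chunk_id in chunk_ids {
            if num_bytes_to_drop <= 0 {
                return dropped;
            }
            if protected.contains(&chunk_id) {
                continue;
            }
            num_bytes_to_drop -= chunk_sizes[&chunk_id] as i64;
            dropped.push(chunk_id);
        }
    }
    dropped
}
```

Because the order is global and deterministic, the same store contents always yield the same drop order, regardless of which timelines the data lives on.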
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
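The static-shadows-temporal rule can be illustrated with a minimal toy lookup. `ToyStore` and its fields are hypothetical stand-ins, not the real store layout:

```rust
use std::collections::BTreeMap;

type ChunkId = u64;
type TimeInt = i64;

// Toy model (assumption: illustrative shapes only; the real store
// indexes per entity, per timeline, per component).
struct ToyStore {
    static_chunk: Option<ChunkId>,
    // Temporal chunks indexed by their start time.
    temporal_by_start: BTreeMap<TimeInt, ChunkId>,
}

impl ToyStore {
    /// Sketch of the documented shadowing rule: static data, if
    /// present, unconditionally wins over any temporal data; otherwise
    /// fall back to the latest temporal chunk at or before the query time.
    fn latest_at(&self, query_time: TimeInt) -> Option<ChunkId> {
        if let Some(chunk_id) = self.static_chunk {
            return Some(chunk_id);
        }
        self.temporal_by_start
            .range(..=query_time)
            .next_back()
            .map(|(_, &chunk_id)| chunk_id)
    }
}
```

In the real store the temporal branch returns potentially several overlapping chunks, which the caller then narrows down row-by-row.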
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
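These three insertion rules (unsorted fails, duplicated is a no-op, empty is a no-op) can be sketched as a toy function. `ToyChunk` and its fields are illustrative only, not the real `Chunk` API:

```rust
use std::collections::BTreeSet;

type ChunkId = u64;

// Toy chunk (assumption: illustrative fields standing in for the real
// Chunk's sortedness flag and row count).
struct ToyChunk {
    id: ChunkId,
    num_rows: usize,
    is_sorted: bool,
}

/// Sketch of the documented insertion rules: unsorted chunks are
/// rejected with an error, while empty or duplicated chunks are
/// silently dropped (no-ops). Returns Ok(true) iff the store changed.
fn insert_chunk(
    known_ids: &mut BTreeSet<ChunkId>,
    chunk: &ToyChunk,
) -> Result<bool, String> {
    if !chunk.is_sorted {
        return Err("unsorted chunk".to_owned());
    }
    // Short-circuit: an empty chunk never registers its ChunkId.
    if chunk.num_rows == 0 || !known_ids.insert(chunk.id) {
        return Ok(false); // no-op: empty chunk or duplicated ChunkId
    }
    Ok(true)
}
```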
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
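A rough sketch of how such thresholds could gate compaction, assuming a simple strictly-below rule and made-up values (the real candidate-election logic in `find_and_elect_compaction_candidate` is more involved):

```rust
// Toy config mirroring the documented threshold fields (assumption:
// only the names follow the docs; values here are arbitrary).
struct ToyConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

/// Sketch: a chunk remains a compaction candidate only while it is
/// below the byte threshold and below the applicable row threshold,
/// where non time-sorted chunks get the stricter row cap.
fn is_compactable(
    cfg: &ToyConfig,
    num_bytes: u64,
    num_rows: u64,
    time_sorted: bool,
) -> bool {
    let max_rows = if time_sorted {
        cfg.chunk_max_rows
    } else {
        cfg.chunk_max_rows_if_unsorted
    };
    num_bytes < cfg.chunk_max_bytes && num_rows < max_rows
}
```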
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. its ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
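A minimal sketch of the time-range protection check, using plain `(min, max)` tuples for time ranges (the real types are `Timeline` and `ResolvedTimeRange`; this shape is an assumption for illustration):

```rust
use std::collections::HashMap;

type Timeline = &'static str;
type TimeRange = (i64, i64); // inclusive [min, max]

/// Sketch: a chunk is protected from GC if, on any timeline, its time
/// range intersects one of the configured protected ranges.
fn is_chunk_protected(
    protected_time_ranges: &HashMap<Timeline, TimeRange>,
    chunk_ranges: &HashMap<Timeline, TimeRange>,
) -> bool {
    protected_time_ranges.iter().any(|(timeline, &(pmin, pmax))| {
        chunk_ranges
            .get(timeline)
            // Two inclusive ranges overlap iff each starts before the
            // other one ends.
            .map_or(false, |&(cmin, cmax)| cmin <= pmax && pmin <= cmax)
    })
}
```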
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
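The `QueryEngine` surface documented above (construction via `from_store`, introspection via `schema`, execution via `query`) composes into a short read loop. A minimal sketch under the assumption that the rerun crates are available and that `QueryExpression` implements `Default` (not verified here; `store` is a pre-existing `ChunkStoreHandle`; this is illustrative, not compiled documentation):

```rust
// `store` is assumed to be a ChunkStoreHandle obtained elsewhere.
let engine = QueryEngine::from_store(store); // instantiates a fresh, empty QueryCache

// Full schema: time columns first (lexical order), then component columns.
let columns = engine.schema();

// Handles are cheap to create; all heavy lifting is deferred.
let handle = engine.query(QueryExpression::default()); // Default impl assumed
while let Some(row) = handle.next_row() {
    // Each row strictly follows `handle.schema()`; columns that yield
    // no data are still present, filled with nulls.
}
```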
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
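Since `RecordBatch` is only an alias for `TransportChunk`, each batch pairs an Arrow `Schema` with one `arrow2` array per column. A hedged, crate-dependent sketch of draining a `QueryHandle` batch by batch (`handle` is assumed to come from `QueryEngine::query`, and `arrays()` is assumed to be the `arrow2::chunk::Chunk` column accessor):

```rust
// `handle` is assumed to be a QueryHandle from QueryEngine::query(…).
while let Some(batch) = handle.next_row_batch() {
    // `batch.schema` carries chunk- and field-level metadata;
    // `batch.data` holds the per-column Arrow arrays.
    for array in batch.data.arrays() {
        println!("column with {} rows", array.len());
    }
}
```

Per the `QueryHandle` docs, prefer `next_row` unless a `RecordBatch` is genuinely needed, since the batch path adds allocations.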
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
) -> LatestAtResults
Queries for the given component_names using latest-at semantics.
See re_query::LatestAtResults for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
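Latest-at semantics mean "the most recent value at or before the query time". A minimal sketch of that lookup over a `BTreeMap` (a hypothetical toy model, not the cached re_query implementation):

```rust
use std::collections::BTreeMap;

/// Returns the newest (time, value) pair at or before `query_time`,
/// which is the essence of a latest-at query.
fn latest_at(data: &BTreeMap<i64, &'static str>, query_time: i64) -> Option<(i64, &'static str)> {
    // `range(..=t).next_back()` walks back to the closest entry <= t.
    data.range(..=query_time).next_back().map(|(t, v)| (*t, *v))
}
```

A query at a time before the first entry yields `None`; a query exactly at an entry's time yields that entry.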
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
query: &LatestAtQuery,
) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense re_types_core::Component.
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
query: &LatestAtQuery,
) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense re_types_core::Component.
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
query: &LatestAtQuery,
) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
) -> Option<&EntityPath>
sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
chunk: &Arc<Chunk>,
) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
gc_options: &GarbageCollectionOptions,
) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
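Dropping a closed time range can be sketched over a simple time-keyed map (a hypothetical toy model; the real implementation works at the chunk level, not per event):

```rust
use std::collections::BTreeMap;

/// Removes all events whose time falls inside `[min, max]` (inclusive),
/// returning how many were dropped.
fn drop_time_range(events: &mut BTreeMap<i64, &'static str>, min: i64, max: i64) -> usize {
    // Collect the keys first, since we cannot remove while iterating.
    let to_drop: Vec<i64> = events.range(min..=max).map(|(t, _)| *t).collect();
    for t in &to_drop {
        events.remove(t);
    }
    to_drop.len()
}
```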
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
engine: StorageEngineWriteGuard<'_>,
store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)
Triggers a garbage collection according to the desired target
.
Returns the list of Chunks that were purged from the store in the form of ChunkStoreEvents.
§Semantics
Garbage collection works on a chunk-level basis and is driven by RowId order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
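The ordering rule above can be sketched with a toy model (hypothetical types; the real GC operates on Chunks, protected sets, and richer budgets): chunks are dropped in ascending smallest-RowId order until enough bytes have been reclaimed.

```rust
/// A simplified chunk: (smallest RowId, size in bytes).
type MiniChunk = (u64, u64);

/// Drops whole chunks in ascending min-RowId order until at least
/// `bytes_to_drop` bytes have been reclaimed; returns the dropped RowIds.
fn gc_by_row_id(chunks: &mut Vec<MiniChunk>, bytes_to_drop: u64) -> Vec<u64> {
    // Global data order is defined by the smallest RowId of each chunk,
    // i.e. the order of the clients' wall-clocks.
    chunks.sort_by_key(|&(min_row_id, _)| min_row_id);
    let mut freed = 0;
    let mut dropped = Vec::new();
    while freed < bytes_to_drop {
        match chunks.first().copied() {
            Some((row_id, size)) => {
                chunks.remove(0);
                freed += size;
                dropped.push(row_id);
            }
            None => break,
        }
    }
    dropped
}
```

Because eviction order is independent of which timeline a chunk belongs to, data is dropped across timelines in a fair, deterministic manner.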
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath, Timeline, Component find the N latest ChunkIds.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
protected_chunk_ids: &BTreeSet<ChunkId>,
) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkIds from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
§Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery and ComponentName.
The returned vector is guaranteed free of duplicates, by definition.
The ChunkStore always works at the Chunk level (as opposed to the row level): it is
oblivious to the data therein.
For that reason, and because Chunks are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
The caller should filter the returned chunks further (see Chunk::latest_at) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally override any temporal component data.
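The static-shadows-temporal rule can be sketched with two maps (a hypothetical toy model, not the actual index layout): static data, when present, wins unconditionally, regardless of the query time.

```rust
use std::collections::BTreeMap;

/// Resolves a component value: static data, when present,
/// unconditionally shadows any temporal data.
fn resolve(
    static_value: Option<&'static str>,
    temporal: &BTreeMap<i64, &'static str>,
    query_time: i64,
) -> Option<&'static str> {
    // Only fall back to latest-at over the temporal index when
    // no static value exists.
    static_value.or_else(|| temporal.range(..=query_time).next_back().map(|(_, v)| *v))
}
```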
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
query: &LatestAtQuery,
entity_path: &EntityPath,
) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery.
The returned vector is guaranteed free of duplicates, by definition.
The ChunkStore always works at the Chunk level (as opposed to the row level): it is
oblivious to the data therein.
For that reason, and because Chunks are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
The caller should filter the returned chunks further (see Chunk::latest_at) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery and ComponentName.
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with the queried range.
The caller should filter the returned chunks further (see Chunk::range) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
query: &RangeQuery,
entity_path: &EntityPath,
) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery.
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with the queried range.
The caller should filter the returned chunks further (see Chunk::range) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
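The "may contain data that overlaps with the queried range" criterion is a plain interval-overlap test. A minimal sketch (hypothetical types; the real check runs over per-timeline chunk indices):

```rust
/// Inclusive time range, mirroring the idea behind a resolved time range.
#[derive(Clone, Copy)]
struct TimeRange {
    min: i64,
    max: i64,
}

/// A chunk is a candidate for a range query iff the two ranges overlap;
/// callers must still filter individual rows within the returned chunks.
fn overlaps(chunk: TimeRange, query: TimeRange) -> bool {
    chunk.min <= query.max && query.min <= chunk.max
}
```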
sourcefn range<'a>(
&'a self,
query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
pub fn id(&self) -> StoreId
pub fn set_info(&mut self, info: StoreInfo)
pub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId order.
sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
chunk: &Arc<Chunk>,
) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent will be returned, or None otherwise.
- Trying to insert an unsorted chunk (Chunk::is_sorted) will fail with an error.
- Inserting a duplicated ChunkId will result in a no-op.
- Inserting an empty Chunk will result in a no-op.
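The three documented insertion rules can be sketched with a toy store (all types here are hypothetical stand-ins, not the real ChunkStore): unsorted chunks are rejected, while duplicates and empty chunks are silent no-ops.

```rust
use std::collections::BTreeMap;

/// A toy chunk: an id plus its row times.
struct ToyChunk {
    id: u64,
    times: Vec<i64>,
}

/// Mirrors the documented insertion rules:
/// unsorted -> error, duplicate id or empty -> no-op (`Ok(false)`).
fn insert_chunk(store: &mut BTreeMap<u64, ToyChunk>, chunk: ToyChunk) -> Result<bool, String> {
    if chunk.times.windows(2).any(|w| w[0] > w[1]) {
        return Err("unsorted chunk".to_owned());
    }
    if chunk.times.is_empty() || store.contains_key(&chunk.id) {
        return Ok(false); // no-op
    }
    store.insert(chunk.id, chunk);
    Ok(true)
}
```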
sourcefn find_and_elect_compaction_candidate(
&self,
chunk: &Arc<Chunk>,
) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
The algorithm is simple: for each incoming Chunk, we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its
timelines and components – it will have many direct neighbors.
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
Type Alias re_chunk_store::store::ChunkIdPerComponentName
source · pub type ChunkIdPerComponentName = IntMap<ComponentName, ChunkId>;
Aliased Type§
struct ChunkIdPerComponentName { /* private fields */ }
Type Alias re_chunk_store::store::ChunkIdPerComponentNamePerEntity
source · pub type ChunkIdPerComponentNamePerEntity = IntMap<EntityPath, IntMap<ComponentName, ChunkId>>;
Aliased Type§
struct ChunkIdPerComponentNamePerEntity { /* private fields */ }
Type Alias re_chunk_store::store::ChunkIdSet
source · pub type ChunkIdSet = BTreeSet<ChunkId>;
Aliased Type§
struct ChunkIdSet { /* private fields */ }
Struct re_chunk_store::ChunkCompactionReport
source · pub struct ChunkCompactionReport {
pub srcs: BTreeMap<ChunkId, Arc<Chunk>>,
pub new_chunk: Arc<Chunk>,
}
Expand description
Reports which Chunks were merged into a new Chunk during a compaction.
Fields§
§srcs: BTreeMap<ChunkId, Arc<Chunk>>
The chunks that were merged into a new chunk.
§new_chunk: Arc<Chunk>
The new chunk that was created as the result of the compaction.
Trait Implementations§
source§impl Clone for ChunkCompactionReport
source§fn clone(&self) -> ChunkCompactionReport
1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source. Read more
source§impl Debug for ChunkCompactionReport
source§impl PartialEq for ChunkCompactionReport
Auto Trait Implementations§
impl Freeze for ChunkCompactionReport
impl !RefUnwindSafe for ChunkCompactionReport
impl Send for ChunkCompactionReport
impl Sync for ChunkCompactionReport
impl Unpin for ChunkCompactionReport
impl !UnwindSafe for ChunkCompactionReport
Blanket Implementations§
source§impl<T> Any for Twhere
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All ChunkIds currently in the store, indexed by the smallest RowId in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
This is a map of vecs instead of individual ChunkIds in order to better support
duplicated RowIds.
§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkIds for all entities on all timelines, further indexed by ComponentName.
See also:
§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkIds for all entities on all timelines, without the ComponentName index.
See also:
§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunks currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunks currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
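The map-of-vecs shape of chunk_ids_per_min_row_id can be motivated with a small sketch (hypothetical toy types): a plain map keyed by the smallest RowId would silently drop one of two chunks that happen to share that RowId, while a map of vecs keeps both.

```rust
use std::collections::BTreeMap;

/// Indexes chunk ids by their smallest RowId, as `(min_row_id, chunk_id)` pairs.
/// Using `Vec<u64>` as the value keeps every chunk even when two chunks
/// share the same smallest RowId.
fn index_chunks(chunks: &[(u64, u64)]) -> BTreeMap<u64, Vec<u64>> {
    let mut index: BTreeMap<u64, Vec<u64>> = BTreeMap::new();
    for &(min_row_id, chunk_id) in chunks {
        index.entry(min_row_id).or_default().push(chunk_id);
    }
    index
}
```

Iterating the resulting `BTreeMap` in key order yields all chunks in global data order, which is what the garbage collector relies on.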
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
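Why a latest-at query can yield several relevant chunks can be sketched with a toy model. `ChunkRange` and the selection logic below are illustrative assumptions, not the store's real index structures: each chunk is reduced to its time range on one timeline.

```rust
/// Hypothetical stand-in for a chunk, reduced to its time range on one timeline.
#[derive(Debug, Clone, Copy, PartialEq)]
struct ChunkRange {
    id: u32,
    min_time: i64,
    max_time: i64,
}

/// Chunk-level candidate selection for a latest-at query at time `at`.
/// Because chunks may overlap temporally, more than one chunk can be relevant;
/// the caller then narrows the candidates down to the exact row.
fn latest_at_relevant_chunks(chunks: &[ChunkRange], at: i64) -> Vec<ChunkRange> {
    // The best candidate starts at the largest min_time that is still <= at …
    let Some(best_start) = chunks
        .iter()
        .filter(|c| c.min_time <= at)
        .map(|c| c.min_time)
        .max()
    else {
        return Vec::new();
    };
    // … but any overlapping chunk reaching past that start could also hold the
    // latest row, so all of them must be returned.
    chunks
        .iter()
        .copied()
        .filter(|c| c.min_time <= at && c.max_time >= best_start)
        .collect()
}
```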
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always work at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
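The coarse, chunk-level criterion for range queries can be sketched the same way. This is a self-contained illustration under assumed types (`ChunkRange` is hypothetical): a chunk is a candidate iff its time range may overlap the queried range.

```rust
/// Hypothetical stand-in for a chunk, reduced to its time range on one timeline.
#[derive(Debug, Clone, Copy, PartialEq)]
struct ChunkRange {
    id: u32,
    min_time: i64,
    max_time: i64,
}

/// Chunk-level candidate selection for a range query over [range_min, range_max]:
/// a chunk is relevant iff its time range may overlap the queried range.
/// The caller then inspects individual rows for the exact answer.
fn range_relevant_chunks(chunks: &[ChunkRange], range_min: i64, range_max: i64) -> Vec<ChunkRange> {
    chunks
        .iter()
        .copied()
        .filter(|c| c.min_time <= range_max && c.max_time >= range_min)
        .collect()
}
```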
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The criterion for returning a chunk is only that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
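The generation-based change detection described here can be illustrated with a minimal sketch. `Store` and `Generation` below are hypothetical stand-ins in the spirit of `ChunkStoreGeneration`: a counter bumped on every mutation, compared by callers to decide whether cached query results are stale.

```rust
/// Hypothetical monotonically increasing generation token.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Generation(u64);

/// Hypothetical store that bumps its generation on every mutation.
#[derive(Default)]
struct Store {
    generation: u64,
}

impl Store {
    fn generation(&self) -> Generation {
        Generation(self.generation)
    }

    /// Any mutating method (insertion, GC, …) would bump the counter.
    fn mutate(&mut self) {
        self.generation += 1;
    }
}
```

A caller remembers the generation at query time and only recomputes when the token it holds no longer matches the store's current one.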
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
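The three insertion rules above can be sketched in isolation. This is a minimal self-contained model with hypothetical `MiniChunk`/`MiniStore` types, not the actual store: unsorted chunks error out, while empty or duplicated chunks are silent no-ops.

```rust
use std::collections::BTreeMap;

/// Hypothetical stand-in for a chunk: an id plus its RowIds.
#[derive(Debug, Clone, PartialEq)]
struct MiniChunk {
    id: u64,
    row_ids: Vec<u64>,
}

#[derive(Default)]
struct MiniStore {
    chunks: BTreeMap<u64, MiniChunk>,
}

impl MiniStore {
    /// Mirrors the documented insertion rules: unsorted chunks fail with an
    /// error; empty or duplicated chunks are no-ops. Returns whether the store
    /// was actually modified.
    fn insert_chunk(&mut self, chunk: &MiniChunk) -> Result<bool, String> {
        if chunk.row_ids.windows(2).any(|w| w[0] > w[1]) {
            return Err("unsorted chunk".to_owned());
        }
        if chunk.row_ids.is_empty() || self.chunks.contains_key(&chunk.id) {
            return Ok(false); // no-op: nothing changed, no events fired
        }
        self.chunks.insert(chunk.id, chunk.clone());
        Ok(true)
    }
}
```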
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
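The threshold logic behind these settings can be sketched as a simple eligibility check. The types, field names, and default values below are assumptions for illustration (only the 8 MiB byte budget is stated in the text): a chunk remains a compaction candidate only while it stays under the byte budget and under the row threshold relevant to its sortedness.

```rust
/// Hypothetical compaction thresholds in the spirit of the config fields above.
struct CompactionConfig {
    chunk_max_bytes: u64,
    chunk_max_rows: u64,
    chunk_max_rows_if_unsorted: u64,
}

/// A chunk can keep being compacted only while it stays under the byte budget
/// and under the row threshold; which row threshold applies depends on whether
/// the chunk is time-sorted.
fn can_compact_further(
    config: &CompactionConfig,
    num_bytes: u64,
    num_rows: u64,
    time_sorted: bool,
) -> bool {
    let max_rows = if time_sorted {
        config.chunk_max_rows
    } else {
        config.chunk_max_rows_if_unsorted
    };
    num_bytes < config.chunk_max_bytes && num_rows < max_rows
}
```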
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If this returns true, the chunk cannot be removed.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that print helpful messages and send anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
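The lazily-computed-state pattern described above can be sketched with nothing but `std::sync::OnceLock`. This is a simplified stand-in, not the actual `QueryHandle` implementation: `Handle` and `State` are hypothetical, with a `String` standing in for `QueryExpression`.

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for QueryHandle's lazily computed private state.
struct State {
    rows: Vec<u64>,
}

struct Handle {
    query: String,          // stands in for QueryExpression
    state: OnceLock<State>, // computed on first access, never at creation
}

impl Handle {
    // Creating a handle does no work: this is what keeps it cheap.
    fn new(query: &str) -> Self {
        Self { query: query.to_owned(), state: OnceLock::new() }
    }

    // The first call pays the initialization cost; later calls are a cheap read.
    fn init(&self) -> &State {
        self.state.get_or_init(|| State { rows: vec![1, 2, 3] })
    }
}

fn main() {
    let handle = Handle::new("select *");
    assert!(handle.state.get().is_none()); // nothing computed yet
    assert_eq!(handle.init().rows.len(), 3);
    assert!(handle.state.get().is_some()); // now cached for all later calls
    let _ = &handle.query;
}
```

Because `get_or_init` takes `&self`, the state can be initialized behind a shared reference, which is what lets query methods on the handle stay `&self` as well.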
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
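The seek behavior described here, a binary search that lands on the first index value at or past the target when the exact value is absent, can be sketched over a single chunk's sorted index column with `slice::partition_point`. This is a std-only illustration under the assumption of an ascending index, not the crate's actual cursor logic.

```rust
// Seek within one chunk's sorted index column: position the cursor at the
// first entry >= target, or None if every entry is before the target.
fn seek(sorted_index: &[i64], target: i64) -> Option<usize> {
    let pos = sorted_index.partition_point(|&v| v < target); // binary search
    (pos < sorted_index.len()).then_some(pos)
}

fn main() {
    let index = [10, 20, 30, 40];
    assert_eq!(seek(&index, 20), Some(1)); // exact hit
    assert_eq!(seek(&index, 25), Some(2)); // absent: first value past it
    assert_eq!(seek(&index, 99), None);    // past the end: nothing to yield
}
```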
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
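The "guaranteed ascendingly sorted and deduped" invariant on `unique_index_values` can be produced from per-chunk index columns with a sort-and-dedup pass. A std-only sketch (the function name is hypothetical; the real initialization also densifies and slices chunks):

```rust
// Merge per-chunk index columns into the single ascending, deduplicated
// list of index values that row iteration walks over.
fn unique_index_values(chunks: &[Vec<i64>]) -> Vec<i64> {
    let mut all: Vec<i64> = chunks.iter().flatten().copied().collect();
    all.sort_unstable();
    all.dedup(); // sorted + deduped, matching the documented guarantee
    all
}

fn main() {
    let chunks = vec![vec![3, 1, 2], vec![2, 4]];
    assert_eq!(unique_index_values(&chunks), vec![1, 2, 3, 4]);
}
```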
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
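The documented ordering, time columns first in lexical order, then component columns in lexical order, amounts to sorting on a (kind, name) key. A std-only sketch with a hypothetical `Kind` enum standing in for the column descriptor variants:

```rust
// Sort columns the way the schema docs describe: time columns first,
// then component columns, each group ordered lexically by name.
#[derive(Clone, Copy)]
enum Kind { Time, Component }

fn sort_columns(cols: &mut Vec<(Kind, &str)>) {
    // `false < true`, so Time columns sort before Component columns.
    cols.sort_by_key(|&(kind, name)| (matches!(kind, Kind::Component), name.to_owned()));
}

fn main() {
    let mut cols = vec![
        (Kind::Component, "Radius"),
        (Kind::Time, "log_time"),
        (Kind::Component, "Color"),
        (Kind::Time, "frame_nr"),
    ];
    sort_columns(&mut cols);
    let names: Vec<&str> = cols.iter().map(|&(_, n)| n).collect();
    assert_eq!(names, ["frame_nr", "log_time", "Color", "Radius"]);
}
```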
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
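The guard discipline described here, reads are freely shared but no data can be inserted while any guard is alive, matches the behavior of a reader-writer lock. A std-only sketch of that property (this is the general `RwLock` pattern, not the crate's actual `StorageEngineReadGuard` machinery):

```rust
use std::sync::{Arc, RwLock};

// While any read guard is held, writers are locked out; once every guard
// (and clone of the shared lock) is dropped, mutation proceeds again.
fn main() {
    let engine = Arc::new(RwLock::new(vec![1, 2, 3]));

    let guard = engine.read().unwrap();   // read-only access
    assert!(engine.try_write().is_err()); // no inserts while the guard lives
    assert_eq!(guard.len(), 3);
    drop(guard);                          // release the guard...

    engine.write().unwrap().push(4);      // ...and writes go through again
    assert_eq!(engine.read().unwrap().len(), 4);
}
```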
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
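Latest-at semantics, the most recent value logged at or before the query time, map naturally onto a range query over an ordered map. A std-only sketch of the concept (a `BTreeMap` of time to value stands in for the store's per-component index; this is not the cached query path itself):

```rust
use std::collections::BTreeMap;

// Latest-at: the most recent value at or before `query_time`, if any.
fn latest_at<'a>(per_time: &'a BTreeMap<i64, &'a str>, query_time: i64) -> Option<&'a str> {
    per_time.range(..=query_time).next_back().map(|(_, v)| *v)
}

fn main() {
    let mut positions = BTreeMap::new();
    positions.insert(10, "p0");
    positions.insert(20, "p1");
    assert_eq!(latest_at(&positions, 15), Some("p0")); // falls back to t=10
    assert_eq!(latest_at(&positions, 20), Some("p1")); // exact hit
    assert_eq!(latest_at(&positions, 5), None);        // nothing logged yet
}
```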
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStore.html b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
index c542fe78373b..3f40d0764c2f 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStore.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStore.html
@@ -4,12 +4,12 @@
pub(crate) config: ChunkStoreConfig,
pub(crate) type_registry: IntMap<ComponentName, DataType>,
pub(crate) per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>,
- pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
- pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
+ pub(crate) chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>,
+ pub(crate) chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>,
pub(crate) temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>,
pub(crate) temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>,
pub(crate) temporal_chunks_stats: ChunkStoreChunkStats,
- pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
+ pub(crate) static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>,
pub(crate) static_chunks_stats: ChunkStoreChunkStats,
pub(crate) insert_id: u64,
pub(crate) query_id: AtomicU64,
@@ -23,28 +23,28 @@
§type_registry: IntMap<ComponentName, DataType>
Keeps track of the latest datatype information for all component types that have been written
to the store so far.
See also Self::lookup_datatype
.
-§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
§per_column_metadata: IntMap<EntityPath, IntMap<ComponentName, IntMap<ComponentDescriptor, ColumnMetadataState>>>
§chunks_per_chunk_id: BTreeMap<ChunkId, Arc<Chunk>>
§chunk_ids_per_min_row_id: BTreeMap<RowId, Vec<ChunkId>>
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
-§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
+This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
+§temporal_chunk_ids_per_entity_per_component: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, ChunkIdSetPerTime>>>
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
+§temporal_chunk_ids_per_entity: IntMap<EntityPath, IntMap<Timeline, ChunkIdSetPerTime>>
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal Chunk
s currently present in the store.
+§temporal_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
+§static_chunk_ids_per_entity: IntMap<EntityPath, IntMap<ComponentName, ChunkId>>
Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static Chunk
s currently present in the store.
+§static_chunks_stats: ChunkStoreChunkStats
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
§insert_id: u64
Monotonically increasing ID for insertions.
§query_id: AtomicU64
Monotonically increasing ID for queries.
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
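The RowId-driven drop order described above can be sketched with hypothetical stand-in types (plain `u64`/`u32` IDs and a size table instead of the real `RowId`/`ChunkId`/`Chunk`); this is an illustrative model of the documented semantics, not the actual `re_chunk_store` internals:

```rust
use std::collections::BTreeMap;

// Hypothetical stand-ins for RowId / ChunkId.
type RowId = u64;
type ChunkId = u32;

/// Drop whole chunks in ascending min-RowId order (global data order)
/// until at least `num_bytes_to_drop` bytes have been reclaimed.
fn gc_drop_at_least_num_bytes(
    chunk_ids_per_min_row_id: &mut BTreeMap<RowId, Vec<ChunkId>>,
    chunk_sizes: &BTreeMap<ChunkId, u64>,
    mut num_bytes_to_drop: i64,
) -> Vec<ChunkId> {
    let mut dropped = Vec::new();
    while num_bytes_to_drop > 0 {
        // Smallest min-RowId first: oldest data according to client wall-clocks.
        let Some((&min_row_id, _)) = chunk_ids_per_min_row_id.iter().next() else {
            break; // store is empty
        };
        // A Vec per RowId is what supports duplicated RowIds across chunks.
        for chunk_id in chunk_ids_per_min_row_id.remove(&min_row_id).unwrap() {
            num_bytes_to_drop -= chunk_sizes[&chunk_id] as i64;
            dropped.push(chunk_id);
        }
    }
    dropped
}

fn main() {
    let mut index = BTreeMap::from([(10, vec![1]), (20, vec![2, 3]), (30, vec![4])]);
    let sizes = BTreeMap::from([(1, 100), (2, 100), (3, 100), (4, 100)]);
    let dropped = gc_drop_at_least_num_bytes(&mut index, &sizes, 150);
    println!("{dropped:?}"); // [1, 2, 3]: the whole RowId-20 bucket goes at once
}
```

Note how the GC works at chunk granularity: once a RowId bucket is selected, every chunk in it is dropped whole, which is why latest-at queries from arbitrary points in time may change pre- and post-GC.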
@@ -105,20 +105,20 @@
§Limitations
component on each timeline. The only practical guarantee this gives is that a latest-at query
with a value of max-int will be unchanged. However, latest-at queries from other arbitrary
points in time may provide different results pre- and post- GC.
-
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
+sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
sourcefn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
+ protected_chunk_ids: &BTreeSet<ChunkId>,
+) -> Vec<ChunkStoreDiff>sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
+) -> Vec<ChunkStoreDiff>Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
@@ -229,65 +229,65 @@ §Limitations
This ignores static data.
source§impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
-The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
sourcefn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
+) -> Vec<Arc<Chunk>>Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The only criterion for returning a chunk is that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally
override any temporal component data.
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
+) -> Vec<Arc<Chunk>>Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The only criterion for returning a chunk is that it may contain data that overlaps with
the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
sourcefn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
sourcepub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@ §Limitations
sourcepub fn id(&self) -> StoreId
sourcepub fn set_info(&mut self, info: StoreInfo)
sourcepub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
-sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
+sourcepub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@ §Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@ §Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify registered subscribers of upcoming events.
source§impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
+ chunk: &Arc<Chunk>,
+) -> ChunkStoreResult<Vec<ChunkStoreEvent>>Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
-- Inserting a duplicated
ChunkId
will result in a no-op.
-- Inserting an empty
Chunk
will result in a no-op.
+- Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+- Inserting a duplicated [
ChunkId
] will result in a no-op.
+- Inserting an empty [
Chunk
] will result in a no-op.
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines
and components – it will have many direct neighbors.
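The neighbor-election step above can be sketched as follows, using a hypothetical `ChunkInfo` stand-in and illustrative threshold constants modeled on the documented `chunk_max_rows` / `chunk_max_bytes` knobs (this is a simplified model, not the real `find_and_elect_compaction_candidate`):

```rust
/// Hypothetical, simplified stand-in for a chunk's size characteristics.
#[derive(Clone, Copy)]
struct ChunkInfo {
    id: u32,
    num_rows: u64,
    num_bytes: u64,
}

// Illustrative thresholds; the real values come from ChunkStoreConfig.
const CHUNK_MAX_ROWS: u64 = 4096;
const CHUNK_MAX_BYTES: u64 = 8 * 1024 * 1024; // 8 MiB, the documented default

/// Among the incoming chunk's neighbors, elect the candidate whose merge with
/// `incoming` stays under both thresholds, preferring the smallest merged result.
fn elect_compaction_candidate(incoming: ChunkInfo, neighbors: &[ChunkInfo]) -> Option<ChunkInfo> {
    neighbors
        .iter()
        .copied()
        .filter(|n| {
            n.num_rows + incoming.num_rows <= CHUNK_MAX_ROWS
                && n.num_bytes + incoming.num_bytes <= CHUNK_MAX_BYTES
        })
        .min_by_key(|n| n.num_bytes + incoming.num_bytes)
}

fn main() {
    let incoming = ChunkInfo { id: 0, num_rows: 1000, num_bytes: 1_000_000 };
    let neighbors = [
        ChunkInfo { id: 1, num_rows: 4000, num_bytes: 500_000 }, // too many rows once merged
        ChunkInfo { id: 2, num_rows: 2000, num_bytes: 2_000_000 },
        ChunkInfo { id: 3, num_rows: 1000, num_bytes: 1_000_000 },
    ];
    let elected = elect_compaction_candidate(incoming, &neighbors).map(|c| c.id);
    println!("{elected:?}"); // Some(3): smallest merged result under both thresholds
}
```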
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
§num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Method which takes an iterator and generates Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
sourcepub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
Performs the +=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
Returns the “default value” for a type. Read moresource§impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
Performs the -=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
source§impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
§impl Freeze for ChunkStoreChunkStats
§impl RefUnwindSafe for ChunkStoreChunkStats
§impl Send for ChunkStoreChunkStats
§impl Sync for ChunkStoreChunkStats
§impl Unpin for ChunkStoreChunkStats
§impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
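The `Add`/`AddAssign`/`Sum` impls documented above make per-chunk stats cheap to accumulate incrementally rather than recomputing them every frame. A minimal sketch with a hypothetical mirror struct (field names modeled on the docs; not the real `ChunkStoreChunkStats`):

```rust
use std::iter::Sum;
use std::ops::{Add, AddAssign};

/// Hypothetical mirror of the documented per-chunk stats.
#[derive(Clone, Copy, Debug, Default, PartialEq)]
struct Stats {
    num_chunks: u64,
    num_rows: u64,
    num_events: u64, // component batches ("cells")
    total_size_bytes: u64,
}

impl Add for Stats {
    type Output = Self;
    fn add(self, rhs: Self) -> Self {
        Self {
            num_chunks: self.num_chunks + rhs.num_chunks,
            num_rows: self.num_rows + rhs.num_rows,
            num_events: self.num_events + rhs.num_events,
            total_size_bytes: self.total_size_bytes + rhs.total_size_bytes,
        }
    }
}

impl AddAssign for Stats {
    fn add_assign(&mut self, rhs: Self) {
        *self = *self + rhs;
    }
}

impl Sum for Stats {
    // Folding chunk-level stats like this is what keeps global stats
    // affordable for per-frame consumers such as the GC.
    fn sum<I: Iterator<Item = Self>>(iter: I) -> Self {
        iter.fold(Self::default(), Add::add)
    }
}

fn main() {
    let per_chunk = [
        Stats { num_chunks: 1, num_rows: 10, num_events: 20, total_size_bytes: 1024 },
        Stats { num_chunks: 1, num_rows: 5, num_events: 5, total_size_bytes: 512 },
    ];
    let total: Stats = per_chunk.into_iter().sum();
    println!("{total:?}");
}
```

The matching `Sub`/`SubAssign` impls allow the same stats to be decremented when chunks are removed, e.g. during garbage collection.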
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. its ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
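The addition/deletion semantics above, including how a compaction report lets subscribers drop the merged source chunks, can be sketched by replaying a stream of diffs against a live-chunk set (hypothetical simplified types; the real `ChunkStoreDiff` carries full `Arc<Chunk>`s and a `ChunkCompactionReport`):

```rust
use std::collections::BTreeSet;

type ChunkId = u32;

/// Hypothetical mirror of the documented diff kinds: a chunk is either
/// added (possibly compacting away earlier chunks) or deleted.
enum Diff {
    Addition { chunk: ChunkId, compacted_srcs: Vec<ChunkId> },
    Deletion { chunk: ChunkId },
}

/// Replay a stream of diffs to maintain the set of live chunks,
/// the way a ChunkStoreSubscriber might.
fn apply(diffs: &[Diff]) -> BTreeSet<ChunkId> {
    let mut live = BTreeSet::new();
    for diff in diffs {
        match diff {
            Diff::Addition { chunk, compacted_srcs } => {
                // A compaction's source chunks disappear; the destination
                // ChunkId is guaranteed to never have been seen before.
                for src in compacted_srcs {
                    live.remove(src);
                }
                live.insert(*chunk);
            }
            Diff::Deletion { chunk } => {
                live.remove(chunk);
            }
        }
    }
    live
}

fn main() {
    let diffs = [
        Diff::Addition { chunk: 1, compacted_srcs: vec![] },
        Diff::Addition { chunk: 2, compacted_srcs: vec![] },
        Diff::Addition { chunk: 3, compacted_srcs: vec![1, 2] }, // compaction merged 1 and 2
        Diff::Deletion { chunk: 3 },                             // later GC'd
        Diff::Addition { chunk: 4, compacted_srcs: vec![] },
    ];
    println!("{:?}", apply(&diffs)); // {4}
}
```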
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
Returns true if this chunk is protected and must not be removed.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
Returns true if this chunk is protected and must not be removed.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
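The lazy-state pattern described here (a handle whose expensive internal state is only computed on first access, so that creating handles stays cheap) can be sketched with std's `OnceLock`. The names below are made up for illustration and stand in for `QueryHandleState`; this is a sketch of the pattern, not the real implementation:

```rust
use std::sync::OnceLock;

// Hypothetical expensive-to-compute state, standing in for QueryHandleState.
struct State {
    columns: Vec<String>,
}

struct Handle {
    state: OnceLock<State>,
}

impl Handle {
    // Creating a handle is trivially cheap: no state is computed yet.
    fn new() -> Self {
        Self { state: OnceLock::new() }
    }

    // First call pays the initialization cost; later calls return the cache.
    fn init(&self) -> &State {
        self.state.get_or_init(|| State {
            columns: vec!["frame_nr".to_owned(), "log_time".to_owned()],
        })
    }
}

fn main() {
    let handle = Handle::new();
    assert_eq!(handle.init().columns.len(), 2);
    // A second call returns the same cached state.
    assert_eq!(handle.init().columns[0], "frame_nr");
}
```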
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
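The seek semantics described here (land on `index_value` if present, otherwise on the first index value past it) match a partition-point binary search over a sorted index column. A std-only sketch with made-up data; the real implementation runs this per chunk, as described above:

```rust
// Sketch of seek-to-index-value over a sorted, deduped index column.
// `index` stands in for a chunk's sorted index values; not the real API.
fn seek_to_index_value(index: &[i64], index_value: i64) -> Option<usize> {
    // First position whose value is >= index_value: an exact hit if present,
    // otherwise the first index value available past that point.
    let pos = index.partition_point(|&v| v < index_value);
    (pos < index.len()).then_some(pos)
}

fn main() {
    let index = [10, 20, 40, 80];
    assert_eq!(seek_to_index_value(&index, 20), Some(1)); // exact match
    assert_eq!(seek_to_index_value(&index, 30), Some(2)); // first value past 30 is 40
    assert_eq!(seek_to_index_value(&index, 99), None);    // nothing past that point
}
```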
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
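The cursor scheme described in these fields (an atomic row counter indexing into an ascendingly sorted, deduped list of unique index values) can be illustrated with std atomics. Names are made up for the sketch and do not reflect the actual `QueryHandleState` code:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Stand-in for the query state's cursor over unique index values.
struct Cursor {
    unique_index_values: Vec<i64>, // ascendingly sorted and deduped
    cur_row: AtomicU64,
}

impl Cursor {
    fn new(mut values: Vec<i64>) -> Self {
        // Enforce the documented invariant: sorted and deduped.
        values.sort_unstable();
        values.dedup();
        Self {
            unique_index_values: values,
            cur_row: AtomicU64::new(0),
        }
    }

    // Yield the index value for the current row, then advance the cursor.
    // Works through &self, so concurrent readers each get distinct rows.
    fn next_index_value(&self) -> Option<i64> {
        let row = self.cur_row.fetch_add(1, Ordering::Relaxed) as usize;
        self.unique_index_values.get(row).copied()
    }
}

fn main() {
    let cursor = Cursor::new(vec![30, 10, 20, 20]);
    assert_eq!(cursor.next_index_value(), Some(10));
    assert_eq!(cursor.next_index_value(), Some(20));
    assert_eq!(cursor.next_index_value(), Some(30));
    assert_eq!(cursor.next_index_value(), None);
}
```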
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles arbitrarily loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>
sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
Keeps track of the latest datatype information for all component types that have been written to the store so far.
See also Self::lookup_datatype
.
All [ChunkId
]s currently in the store, indexed by the smallest [RowId
] in each of them.
This is effectively all chunks in global data order. Used for garbage collection.
-This is a map of vecs instead of individual ChunkId
in order to better support
-duplicated RowId
s.
All temporal ChunkId
s for all entities on all timelines, further indexed by [ComponentName
].
This is a map of vecs instead of individual [ChunkId
] in order to better support
+duplicated [RowId
]s.
All temporal [ChunkId
]s for all entities on all timelines, further indexed by [ComponentName
].
See also:
-All temporal ChunkId
s for all entities on all timelines, without the [ComponentName
] index.
All temporal [ChunkId
]s for all entities on all timelines, without the [ComponentName
] index.
See also:
-Accumulated size statistics for all temporal Chunk
s currently present in the store.
Accumulated size statistics for all temporal [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
-Static data. Never garbage collected.
+Static data. Never garbage collected.
Static data unconditionally shadows temporal data at query time.
Existing temporal data will not be removed. Events won’t be fired.
-Accumulated size statistics for all static Chunk
s currently present in the store.
Accumulated size statistics for all static [Chunk
]s currently present in the store.
This is too costly to be computed from scratch every frame, and is required by e.g. the GC.
Monotonically increasing ID for insertions.
Monotonically increasing ID for queries.
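Accumulated stats of this kind can be sketched as a struct that is added to on every insertion and subtracted from on every deletion, rather than recomputed per frame. The type and field names below are illustrative stand-ins, not the real ChunkStoreChunkStats layout:

```rust
// Sketch of incrementally maintained chunk statistics: cheap to update on
// insert/GC, never recomputed from scratch. Toy types only.
use std::ops::{AddAssign, SubAssign};

#[derive(Debug, Default, Clone, Copy, PartialEq, Eq)]
struct ChunkStats {
    num_chunks: u64,
    num_rows: u64,
    num_bytes: u64,
}

impl AddAssign for ChunkStats {
    fn add_assign(&mut self, rhs: Self) {
        self.num_chunks += rhs.num_chunks;
        self.num_rows += rhs.num_rows;
        self.num_bytes += rhs.num_bytes;
    }
}

impl SubAssign for ChunkStats {
    fn sub_assign(&mut self, rhs: Self) {
        self.num_chunks -= rhs.num_chunks;
        self.num_rows -= rhs.num_rows;
        self.num_bytes -= rhs.num_bytes;
    }
}

fn main() {
    let mut total = ChunkStats::default();
    total += ChunkStats { num_chunks: 1, num_rows: 10, num_bytes: 100 }; // insertion
    total -= ChunkStats { num_chunks: 1, num_rows: 10, num_bytes: 100 }; // GC
    println!("{total:?}");
}
```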
@@ -91,7 +91,7 @@
) -> (Vec<ChunkStoreEvent>, ChunkStoreStats)
Triggers a garbage collection according to the desired target
.
Returns the list of Chunk
s that were purged from the store in the form of ChunkStoreEvent
s.
§Semantics
-Garbage collection works on a chunk-level basis and is driven by RowId
order
+
Garbage collection works on a chunk-level basis and is driven by [RowId
] order
(specifically, the smallest RowId
of each respective Chunk), i.e. the order defined
by the clients’ wall-clocks, allowing it to drop data across the different timelines in
a fair, deterministic manner.
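The chunk-level, RowId-driven ordering described above can be sketched with stand-in types (the real store keeps dedicated indices and a byte budget; names here are toy placeholders):

```rust
// Minimal sketch of chunk-level GC driven by RowId order: whole chunks are
// dropped, oldest first by their smallest RowId (the clients' wall-clock
// order), until enough bytes have been freed. Toy types, not re_chunk's.

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct RowId(u64);

#[derive(Debug, Clone)]
struct Chunk {
    min_row_id: RowId, // smallest RowId in the chunk: defines global data order
    num_bytes: u64,
}

/// Drop whole chunks, oldest first, until at least `num_bytes_to_drop`
/// bytes have been freed. Returns the dropped chunks.
fn gc_drop_at_least_num_bytes(chunks: &mut Vec<Chunk>, num_bytes_to_drop: u64) -> Vec<Chunk> {
    // Global data order == ascending smallest-RowId order.
    chunks.sort_by_key(|c| c.min_row_id);

    let mut dropped = Vec::new();
    let mut freed = 0u64;
    while freed < num_bytes_to_drop && !chunks.is_empty() {
        let oldest = chunks.remove(0);
        freed += oldest.num_bytes;
        dropped.push(oldest);
    }
    dropped
}

fn main() {
    let mut chunks = vec![
        Chunk { min_row_id: RowId(30), num_bytes: 100 },
        Chunk { min_row_id: RowId(10), num_bytes: 100 },
        Chunk { min_row_id: RowId(20), num_bytes: 100 },
    ];
    let dropped = gc_drop_at_least_num_bytes(&mut chunks, 150);
    println!("dropped: {dropped:?}, remaining: {chunks:?}");
}
```

Because the order is defined by RowIds rather than by any single timeline, data is dropped fairly across all timelines at once.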
@@ -105,20 +105,20 @@
§Limitations component on each timeline. The only practical guarantee this gives is that a latest-at query with a value of max-int will be unchanged. However, latest-at queries from other arbitrary points in time may provide different results pre- and post- GC. -
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
fn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest ChunkId
s.
sourcefn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
fn find_all_protected_chunk_ids(&self, target_count: usize) -> BTreeSet<ChunkId>
For each EntityPath
, Timeline
, Component
find the N latest [ChunkId
]s.
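The "N latest per key" protection can be sketched as follows, with toy key and id types standing in for the real `(EntityPath, Timeline, Component)` indices:

```rust
// Sketch of protecting the N most recent chunks per key, in the spirit of
// find_all_protected_chunk_ids. Key/id types are hypothetical stand-ins.
use std::collections::{BTreeSet, HashMap};

type Key = (String, String, String); // (entity path, timeline, component)

/// For each key, keep the `target_count` chunk ids with the greatest times.
fn find_all_protected_chunk_ids(
    index: &HashMap<Key, Vec<(i64, u64)>>, // (time, chunk id), unsorted
    target_count: usize,
) -> BTreeSet<u64> {
    let mut protected = BTreeSet::new();
    for entries in index.values() {
        let mut entries = entries.clone();
        entries.sort_by_key(|(time, _)| *time);
        // Walk from the most recent entry backwards, taking N per key.
        for (_, chunk_id) in entries.iter().rev().take(target_count) {
            protected.insert(*chunk_id);
        }
    }
    protected
}

fn main() {
    let mut index = HashMap::new();
    index.insert(
        ("world/points".to_owned(), "frame".to_owned(), "Position3D".to_owned()),
        vec![(1, 101), (4, 104), (2, 102), (3, 103)],
    );
    println!("{:?}", find_all_protected_chunk_ids(&index, 2));
}
```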
fn gc_drop_at_least_num_bytes(
&mut self,
options: &GarbageCollectionOptions,
num_bytes_to_drop: f64,
- protected_chunk_ids: &BTreeSet<ChunkId>,
-) -> Vec<ChunkStoreDiff>
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
pub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal ChunkId
from all indices.
sourcepub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
pub(crate) fn remove_chunk(&mut self, chunk_id: ChunkId) -> Vec<ChunkStoreDiff>
Surgically removes a temporal [ChunkId
] from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices.
See also ChunkStore::remove_chunks
.
sourcepub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
pub(crate) fn remove_chunks(
&mut self,
- chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
+ chunk_ids_to_be_removed: IntMap<EntityPath, IntMap<Timeline, IntMap<ComponentName, HashMap<TimeInt, Vec<ChunkId>>>>>,
time_budget: Option<(Instant, Duration)>,
-) -> Vec<ChunkStoreDiff>
Surgically removes a set of temporal ChunkId
s from all indices.
Surgically removes a set of temporal [ChunkId
]s from all indices.
This is orders of magnitude faster than trying to retain()
on all our internal indices,
when you already know where these chunks live.
See also ChunkStore::remove_chunk
.
§Limitations
This ignores static data.
source§impl ChunkStore
impl ChunkStore
sourcepub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
pub fn latest_at_relevant_chunks(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given LatestAtQuery
and [ComponentName
].
Returns the most-relevant chunk(s) for the given [LatestAtQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
If the entity has static component data associated with it, it will unconditionally override any temporal component data.
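Why overlap forces a two-step query can be sketched with toy types (these are stand-ins, not the actual re_chunk_store API): the store first gathers every chunk that may hold data at or before the query time, and the caller then resolves the winning row.

```rust
// Sketch of latest-at over temporally overlapping chunks: several chunks can
// be relevant, and the final row is resolved by (time, row_id). Toy types.

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct TimeInt(i64);

#[derive(Debug, Clone)]
struct Chunk {
    time_range: (TimeInt, TimeInt), // inclusive min/max
    rows: Vec<(TimeInt, u64)>,      // (time, row id)
}

/// Store side: every chunk that may contain data at or before `at`.
fn latest_at_relevant_chunks(chunks: &[Chunk], at: TimeInt) -> Vec<&Chunk> {
    chunks.iter().filter(|c| c.time_range.0 <= at).collect()
}

/// Caller side: among the candidates, the row with the greatest
/// (time, row id) not after `at` wins.
fn resolve_latest_at(candidates: &[&Chunk], at: TimeInt) -> Option<(TimeInt, u64)> {
    candidates
        .iter()
        .flat_map(|c| c.rows.iter().copied())
        .filter(|(t, _)| *t <= at)
        .max()
}

fn main() {
    let chunks = vec![
        Chunk { time_range: (TimeInt(0), TimeInt(20)), rows: vec![(TimeInt(0), 1), (TimeInt(20), 2)] },
        Chunk { time_range: (TimeInt(10), TimeInt(30)), rows: vec![(TimeInt(10), 3), (TimeInt(30), 4)] },
    ];
    let candidates = latest_at_relevant_chunks(&chunks, TimeInt(15));
    println!("{} candidates, winner {:?}", candidates.len(), resolve_latest_at(&candidates, TimeInt(15)));
}
```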
sourcepub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
pub fn latest_at_relevant_chunks_for_all_components(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given LatestAtQuery
.
Returns the most-relevant temporal chunk(s) for the given [LatestAtQuery
].
The returned vector is guaranteed free of duplicates, by definition.
-The ChunkStore
always works at the Chunk
level (as opposed to the row level): it is
+
The ChunkStore
always works at the [Chunk
] level (as opposed to the row level): it is
oblivious to the data therein.
-For that reason, and because Chunk
s are allowed to temporally overlap, it is possible
+For that reason, and because [Chunk
]s are allowed to temporally overlap, it is possible
that a query has more than one relevant chunk.
The caller should filter the returned chunks further (see Chunk::latest_at
) in order to
+
The caller should filter the returned chunks further (see [Chunk::latest_at
]) in order to
determine what exact row contains the final result.
This ignores static data.
fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
temporal_chunk_ids_per_time: &ChunkIdSetPerTime,
-) -> Option<Vec<Arc<Chunk>>>
+) -> Option<Vec<Arc<Chunk>>>
source§impl ChunkStore
impl ChunkStore
sourcepub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
pub fn range_relevant_chunks(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_name: ComponentName,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant chunk(s) for the given RangeQuery
and [ComponentName
].
Returns the most-relevant chunk(s) for the given [RangeQuery
] and [ComponentName
].
The returned vector is guaranteed free of duplicates, by definition.
The only criterion for returning a chunk is that it may contain data that overlaps with the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
If the entity has static component data associated with it, it will unconditionally override any temporal component data.
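The "may overlap" criterion for inclusive time ranges boils down to a single comparison; a minimal sketch with plain integer bounds (stand-ins for the store's time types):

```rust
// A chunk is a candidate for a range query iff its inclusive time range
// overlaps the inclusive queried range. Toy (min, max) pairs, not real types.
fn ranges_overlap(chunk: (i64, i64), query: (i64, i64)) -> bool {
    // Two inclusive ranges overlap iff each starts no later than the other ends.
    chunk.0 <= query.1 && query.0 <= chunk.1
}

fn main() {
    println!("{}", ranges_overlap((0, 10), (5, 15)));  // overlapping
    println!("{}", ranges_overlap((0, 10), (11, 15))); // disjoint
}
```

This test is intentionally permissive: it only proves the chunk *may* contain matching rows, which is why the caller still filters the rows themselves afterwards.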
sourcepub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
pub fn range_relevant_chunks_for_all_components(
&self,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
-) -> Vec<Arc<Chunk>>
Returns the most-relevant temporal chunk(s) for the given RangeQuery
.
Returns the most-relevant temporal chunk(s) for the given [RangeQuery
].
The returned vector is guaranteed free of duplicates, by definition.
The only criterion for returning a chunk is that it may contain data that overlaps with the queried range.
-The caller should filter the returned chunks further (see Chunk::range
) in order to
+
The caller should filter the returned chunks further (see [Chunk::range
]) in order to
determine how exactly each row of data fits with the rest.
This ignores static data.
fn range<'a>(
&'a self,
- query: &RangeQuery,
+ query: &RangeQuery,
temporal_chunk_ids_per_times: impl Iterator<Item = &'a ChunkIdSetPerTime>,
-) -> Vec<Arc<Chunk>>
source§impl ChunkStore
impl ChunkStore
pub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
impl ChunkStore
pub fn stats(&self) -> ChunkStoreStats
source§impl ChunkStore
impl ChunkStore
§Entity stats
sourcepub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
pub fn entity_stats_static(
&self,
entity_path: &EntityPath,
) -> ChunkStoreChunkStats
Stats about all chunks with static data for an entity.
@@ -325,8 +325,8 @@
§Limitations
pub fn id(&self) -> StoreId
pub fn set_info(&mut self, info: StoreInfo)
pub fn info(&self) -> Option<&StoreInfo>
sourcepub fn generation(&self) -> ChunkStoreGeneration
pub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
sourcepub fn config(&self) -> &ChunkStoreConfig
pub fn config(&self) -> &ChunkStoreConfig
See ChunkStoreConfig
for more information about configuration.
sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
pub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending ChunkId
order.
sourcepub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
pub fn iter_chunks(&self) -> impl Iterator<Item = &Arc<Chunk>> + '_
Iterate over all chunks in the store, in ascending [ChunkId
] order.
sourcepub fn num_chunks(&self) -> usize
pub fn num_chunks(&self) -> usize
Get the number of chunks.
sourcepub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
pub fn lookup_datatype(
&self,
@@ -340,7 +340,7 @@ §Limitations
source§impl ChunkStore
impl ChunkStore
sourcepub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
pub fn from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, Self>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
The stores will be prefilled with the data at the specified path.
See also:
@@ -359,7 +359,7 @@
§Limitations
sourcepub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
pub fn handle_from_rrd_filepath(
store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
+ version_policy: VersionPolicy,
) -> Result<BTreeMap<StoreId, ChunkStoreHandle>>
Instantiate a new ChunkStore
with the given ChunkStoreConfig
.
Wraps the results in ChunkStoreHandle
s.
The stores will be prefilled with the data at the specified path.
@@ -423,20 +423,20 @@
§Ordering
sourcepub(crate) fn on_events(events: &[ChunkStoreEvent])
pub(crate) fn on_events(events: &[ChunkStoreEvent])
Called by ChunkStore
’s mutating methods to notify subscribers of upcoming events.
source§impl ChunkStore
impl ChunkStore
sourcepub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
pub fn insert_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> ChunkStoreResult<Vec<ChunkStoreEvent>>
Inserts a Chunk
in the store.
Inserts a [Chunk
] in the store.
Iff the store was modified, all registered subscribers will be notified and the
resulting ChunkStoreEvent
will be returned, or None
otherwise.
-
-
- Trying to insert an unsorted chunk (
Chunk::is_sorted
) will fail with an error.
- - Inserting a duplicated
ChunkId
will result in a no-op.
- - Inserting an empty
Chunk
will result in a no-op.
+ - Trying to insert an unsorted chunk ([
Chunk::is_sorted
]) will fail with an error.
+ - Inserting a duplicated [
ChunkId
] will result in a no-op.
+ - Inserting an empty [
Chunk
] will result in a no-op.
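The three insertion rules above can be sketched with stand-in types (a map in place of the real indices; hypothetical names, not the actual re_chunk_store implementation):

```rust
// Sketch of insert_chunk's semantics: unsorted chunks are rejected with an
// error, while duplicated ChunkIds and empty chunks are silent no-ops.
use std::collections::BTreeMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct ChunkId(u64);

#[derive(Debug, Clone)]
struct Chunk {
    id: ChunkId,
    row_ids: Vec<u64>,
}

impl Chunk {
    fn is_empty(&self) -> bool {
        self.row_ids.is_empty()
    }
    fn is_sorted(&self) -> bool {
        self.row_ids.windows(2).all(|w| w[0] <= w[1])
    }
}

#[derive(Debug, PartialEq)]
enum Inserted {
    Added,
    NoOp,
}

fn insert_chunk(store: &mut BTreeMap<ChunkId, Chunk>, chunk: Chunk) -> Result<Inserted, String> {
    if !chunk.is_sorted() {
        return Err("unsorted chunk".to_owned()); // insertion fails with an error
    }
    if chunk.is_empty() || store.contains_key(&chunk.id) {
        return Ok(Inserted::NoOp); // empty chunk or duplicated ChunkId: no-op
    }
    store.insert(chunk.id, chunk);
    Ok(Inserted::Added)
}

fn main() {
    let mut store = BTreeMap::new();
    let res = insert_chunk(&mut store, Chunk { id: ChunkId(1), row_ids: vec![1, 2, 3] });
    println!("{res:?}");
}
```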
sourcefn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
fn find_and_elect_compaction_candidate(
&self,
- chunk: &Arc<Chunk>,
-) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
-The algorithm is simple: for each incoming Chunk
, we take a look at its future neighbors.
+ chunk: &Arc<Chunk>,
+) -> Option<Arc<Chunk>>
Finds the most appropriate candidate for compaction.
+The algorithm is simple: for each incoming [Chunk
], we take a look at its future neighbors.
Each neighbor is a potential candidate for compaction.
Because the chunk is going to be inserted into many different indices – for each of its timelines and components – it will have many direct neighbors.
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
index 1da4f747064a..516b3e536610 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreChunkStats.html
@@ -20,7 +20,7 @@
This is usually the same as the number of log calls the user made.
Each row can contain multiple events (see Self::num_events
).
num_events: u64
How many component batches (“cells”).
-Implementations§
source§impl ChunkStoreChunkStats
impl ChunkStoreChunkStats
pub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
fn add_assign(&mut self, rhs: Self)
+=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
fn clone(&self) -> ChunkStoreChunkStats
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for ChunkStoreChunkStats
impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
fn default() -> ChunkStoreChunkStats
source§impl Display for ChunkStoreChunkStats
impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
fn sub_assign(&mut self, rhs: Self)
-=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
impl Sum for ChunkStoreChunkStats
source§fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
fn sum<I: Iterator<Item = Self>>(iter: I) -> Self
Self
from the elements by
+Implementations§
source§impl ChunkStoreChunkStats
impl ChunkStoreChunkStats
pub fn from_chunk(chunk: &Arc<Chunk>) -> Self
Trait Implementations§
source§impl Add for ChunkStoreChunkStats
impl Add for ChunkStoreChunkStats
source§impl AddAssign for ChunkStoreChunkStats
impl AddAssign for ChunkStoreChunkStats
source§fn add_assign(&mut self, rhs: Self)
fn add_assign(&mut self, rhs: Self)
+=
operation. Read moresource§impl Clone for ChunkStoreChunkStats
impl Clone for ChunkStoreChunkStats
source§fn clone(&self) -> ChunkStoreChunkStats
fn clone(&self) -> ChunkStoreChunkStats
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for ChunkStoreChunkStats
impl Debug for ChunkStoreChunkStats
source§impl Default for ChunkStoreChunkStats
impl Default for ChunkStoreChunkStats
source§fn default() -> ChunkStoreChunkStats
fn default() -> ChunkStoreChunkStats
source§impl Display for ChunkStoreChunkStats
impl Display for ChunkStoreChunkStats
source§impl Sub for ChunkStoreChunkStats
impl Sub for ChunkStoreChunkStats
source§impl SubAssign for ChunkStoreChunkStats
impl SubAssign for ChunkStoreChunkStats
source§fn sub_assign(&mut self, rhs: Self)
fn sub_assign(&mut self, rhs: Self)
-=
operation. Read moresource§impl Sum for ChunkStoreChunkStats
impl Sum for ChunkStoreChunkStats
impl Copy for ChunkStoreChunkStats
Auto Trait Implementations§
impl Freeze for ChunkStoreChunkStats
impl RefUnwindSafe for ChunkStoreChunkStats
impl Send for ChunkStoreChunkStats
impl Sync for ChunkStoreChunkStats
impl Unpin for ChunkStoreChunkStats
impl UnwindSafe for ChunkStoreChunkStats
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions of e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. it’s [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
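The schema docs above promise a fixed column order: time columns first in lexical order (frame_nr, log_time, …), then component columns in lexical order (Color, Radius, …). A minimal self-contained sketch of that ordering contract, using a hypothetical `Column` enum as a stand-in for re_chunk_store's `ColumnDescriptor`:

```rust
// Sketch of the column-ordering contract described above: time columns
// first (lexically sorted), then component columns (lexically sorted).
// `Column` is a hypothetical stand-in, not the actual rerun type.
#[derive(Debug, PartialEq)]
enum Column {
    Time(String),
    Component(String),
}

fn sort_columns(mut cols: Vec<Column>) -> Vec<Column> {
    // Sort key: (kind rank, name) — all time columns rank before all
    // component columns, and ties break lexically on the column name.
    cols.sort_by_key(|c| match c {
        Column::Time(name) => (0, name.clone()),
        Column::Component(name) => (1, name.clone()),
    });
    cols
}

fn main() {
    let cols = vec![
        Column::Component("Radius".to_owned()),
        Column::Time("log_time".to_owned()),
        Column::Component("Color".to_owned()),
        Column::Time("frame_nr".to_owned()),
    ];
    println!("{:?}", sort_columns(cols));
}
```

A tuple sort key keeps the two-level ordering (kind, then name) in a single stable pass.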
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
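The `QueryHandleState` fields above describe a cursor scheme: `cur_row` is an `AtomicU64` index into `unique_index_values` (guaranteed sorted and deduped), and seeking to an absent index value lands on the first value past it. A self-contained sketch of that behavior, using `i64` as a stand-in for `IndexValue` and a simplified `State` type (not the actual rerun types):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

struct State {
    // Sorted + deduped, as QueryHandleState guarantees.
    unique_index_values: Vec<i64>,
    // Global cursor, shared across readers — hence the atomic.
    cur_row: AtomicU64,
}

impl State {
    // Binary search via partition_point: O(log n), matching the docs'
    // note that seeking by index value is already pretty cheap.
    // Lands on the first value >= index_value if the exact one is absent.
    fn seek_to_index_value(&self, index_value: i64) {
        let row = self.unique_index_values.partition_point(|&v| v < index_value);
        self.cur_row.store(row as u64, Ordering::Relaxed);
    }

    // Yield the index value at the cursor and advance it.
    fn next_index_value(&self) -> Option<i64> {
        let row = self.cur_row.fetch_add(1, Ordering::Relaxed) as usize;
        self.unique_index_values.get(row).copied()
    }
}

fn main() {
    let state = State {
        unique_index_values: vec![10, 20, 30, 40],
        cur_row: AtomicU64::new(0),
    };
    // 25 is absent from the dataset: the seek lands on 30, the first
    // index value available past that point.
    state.seek_to_index_value(25);
    assert_eq!(state.next_index_value(), Some(30));
    assert_eq!(state.next_index_value(), Some(40));
    assert_eq!(state.next_index_value(), None);
}
```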
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles arbitrarily loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
impl<T> Borrow<T> for Twhere
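The `EntityDb` docs above stress that the backing `StorageEngine` is never exposed directly; callers only ever get a read-only guard via `EntityDb::storage_engine`, while mutation stays internal. The following is a minimal stdlib-only sketch of that guard pattern, using hypothetical stand-in types (`EntityDb`, `StorageEngine`, `num_rows` here are simplified placeholders, not the real Rerun types):

```rust
use std::sync::{RwLock, RwLockReadGuard};

// Hypothetical stand-in for the real StorageEngine: just tracks a row count.
struct StorageEngine {
    num_rows: u64,
}

// Hypothetical stand-in for EntityDb: the engine field is private, so the
// only outside access path is the read-only guard returned below.
struct EntityDb {
    engine: RwLock<StorageEngine>,
}

impl EntityDb {
    fn new() -> Self {
        Self {
            engine: RwLock::new(StorageEngine { num_rows: 0 }),
        }
    }

    // Read-only access: callers can inspect the engine but never mutate it.
    fn storage_engine(&self) -> RwLockReadGuard<'_, StorageEngine> {
        self.engine.read().expect("poisoned lock")
    }

    // Mutation stays private to the EntityDb itself (cf. `add_chunk`).
    fn add_rows(&self, n: u64) {
        self.engine.write().expect("poisoned lock").num_rows += n;
    }
}

fn main() {
    let db = EntityDb::new();
    db.add_rows(3);
    // While this guard is alive, no writer can get in -- which is how the
    // real design statically rules out the deadlocks described above.
    let guard = db.storage_engine();
    assert_eq!(guard.num_rows, 3);
}
```

Note how holding the guard blocks writers for its lifetime, which mirrors the documented caveat that no more data can be inserted until the guard (and any clones, for the `Arc` variant) is dropped.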
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
index 4cf36b84d5c4..959915c43321 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreConfig.html
@@ -7,7 +7,7 @@
any way (insertion, GC), that can be subscribed to.
Leaving this disabled can lead to major performance improvements on the ingestion path
in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-§chunk_max_bytes: u64
What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
+§chunk_max_bytes: u64
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good.
@@ -20,7 +20,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-
§chunk_max_rows: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g.
sending chunks over the network.
-§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
+§chunk_max_rows_if_unsorted: u64
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
ChunkStoreDiff in re_chunk_store - Rust Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go:
there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
@@ -20,23 +20,23 @@
That doesn’t necessarily mean that the data is actually gone, i.e. don’t make assumptions about e.g. the size
in bytes of the store based on these events.
They are in “query-model space” and are not an accurate representation of what happens in storage space.
-§chunk: Arc<Chunk>
The chunk that was added or removed.
+§chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is
what will be exposed here.
This allows subscribers to only process data that is new, as opposed to having to reprocess
old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
-§compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
-A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
+A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
Implementations§
source§impl ChunkStoreDiff
sourcepub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
sourcepub fn deletion(chunk: Arc<Chunk>) -> Self
sourcepub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
source§impl Eq for ChunkStoreDiff
Auto Trait Implementations§
§impl Freeze for ChunkStoreDiff
§impl !RefUnwindSafe for ChunkStoreDiff
§impl Send for ChunkStoreDiff
§impl Sync for ChunkStoreDiff
§impl Unpin for ChunkStoreDiff
§impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
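`GarbageCollectionOptions::is_chunk_protected` above keys off `protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>`. A plausible reading is that a chunk is protected when its time range overlaps a protected range on the same timeline; the sketch below illustrates that overlap check with stdlib stand-ins (`Timeline` as a string alias, `TimeRange` with inclusive bounds — both assumptions, not the real Rerun definitions):

```rust
use std::collections::HashMap;

// Hypothetical simplification of re_log_types::Timeline.
type Timeline = &'static str;

// Hypothetical stand-in for ResolvedTimeRange: inclusive [min, max] bounds.
#[derive(Clone, Copy)]
struct TimeRange {
    min: i64,
    max: i64,
}

impl TimeRange {
    // Two inclusive ranges overlap iff each starts before the other ends.
    fn intersects(&self, other: &TimeRange) -> bool {
        self.min <= other.max && other.min <= self.max
    }
}

struct GcOptions {
    protected_time_ranges: HashMap<Timeline, TimeRange>,
}

impl GcOptions {
    // A chunk is protected when it overlaps a protected range on its timeline.
    fn is_chunk_protected(&self, timeline: Timeline, chunk_range: TimeRange) -> bool {
        self.protected_time_ranges
            .get(timeline)
            .is_some_and(|r| r.intersects(&chunk_range))
    }
}

fn main() {
    let mut protected = HashMap::new();
    protected.insert("frame_nr", TimeRange { min: 10, max: 20 });
    let opts = GcOptions { protected_time_ranges: protected };

    assert!(opts.is_chunk_protected("frame_nr", TimeRange { min: 15, max: 30 }));
    assert!(!opts.is_chunk_protected("frame_nr", TimeRange { min: 30, max: 40 }));
}
```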
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
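The `QueryHandle` docs that follow describe a cheap-to-create handle with internal cursors: `next_row` yields rows in order, and `seek_to_row` repositions the cursors for pagination. Here is a stdlib-only sketch of that cursor pattern over plain `Vec` rows (the row representation and method bodies are illustrative assumptions, not the real chunk-based implementation):

```rust
// Hypothetical stand-in for re_dataframe's QueryHandle: rows are plain
// Vec<i64> instead of chunk columns, and "state" is just a cursor index.
struct QueryHandle {
    rows: Vec<Vec<i64>>,
    cursor: usize,
}

impl QueryHandle {
    fn new(rows: Vec<Vec<i64>>) -> Self {
        Self { rows, cursor: 0 }
    }

    // Advance internal cursors so the next row yielded is `row_idx`
    // (cf. the documented seek_to_row used for pagination).
    fn seek_to_row(&mut self, row_idx: usize) {
        self.cursor = row_idx;
    }

    // Yield the next row, or None once the query is exhausted.
    fn next_row(&mut self) -> Option<&Vec<i64>> {
        let row = self.rows.get(self.cursor)?;
        self.cursor += 1;
        Some(row)
    }
}

fn main() {
    let mut handle = QueryHandle::new(vec![vec![1], vec![2], vec![3]]);
    handle.seek_to_row(1); // pagination: skip the first row

    // Same drain loop the rustdoc example shows for next_row.
    let mut seen = Vec::new();
    while let Some(row) = handle.next_row() {
        seen.push(row[0]);
    }
    assert_eq!(seen, vec![2, 3]);
}
```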
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
while let Some(row) = query_handle.next_row() {
    // …
}
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
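The `cur_row` cursor over the sorted, deduped `unique_index_values` makes seeking a single binary search. A std-only sketch of that semantic — the types here are illustrative stand-ins, not the real `IndexValue` or the atomic cursor:

```rust
/// Illustrative model: `unique_index_values` is ascendingly sorted and deduped,
/// so seeking the cursor to an index value is one `partition_point` call.
/// If the value is absent, the cursor lands on the first value past it, if any.
fn seek(unique_index_values: &[i64], target: i64) -> usize {
    unique_index_values.partition_point(|&v| v < target)
}

fn main() {
    let index = [10, 20, 30, 50];
    assert_eq!(seek(&index, 30), 2); // exact hit
    assert_eq!(seek(&index, 35), 3); // absent: first value past it
    assert_eq!(seek(&index, 99), 4); // past the end: nothing left to yield
    println!("ok");
}
```

The returned position doubles as "number of rows already consumed", which is why a single integer cursor suffices.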
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
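That ordering rule — time columns first, then component columns, each group lexically sorted — can be modeled in a few std-only lines. `Column` below is an invented stand-in for `ColumnDescriptor`, used only to illustrate the sort key:

```rust
// Illustrative model of the schema column ordering described above:
// time columns first (lexical), then component columns (lexical).
#[derive(Debug, Clone, PartialEq, Eq)]
enum Column {
    Time(&'static str),
    Component(&'static str),
}

fn sort_schema(mut cols: Vec<Column>) -> Vec<Column> {
    // A (group, name) tuple key: group 0 = time, group 1 = component.
    cols.sort_by_key(|c| match c {
        Column::Time(name) => (0, *name),
        Column::Component(name) => (1, *name),
    });
    cols
}

fn main() {
    let sorted = sort_schema(vec![
        Column::Component("Radius"),
        Column::Time("log_time"),
        Column::Component("Color"),
        Column::Time("frame_nr"),
    ]);
    assert_eq!(
        sorted,
        vec![
            Column::Time("frame_nr"),
            Column::Time("log_time"),
            Column::Component("Color"),
            Column::Component("Radius"),
        ]
    );
    println!("ok");
}
```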
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
while let Some(row) = query_handle.next_row() {
    // …
}
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles run arbitrarily loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
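A hedged std-only sketch of that mono-component convention — the real API returns typed component batches; `latest_mono` and its warning text are invented here purely for illustration:

```rust
// Illustrative model: treat a component batch as a "mono-component" by
// taking the first instance, and warn when more than one is present.
fn latest_mono<T: Copy>(instances: &[T]) -> Option<T> {
    if instances.len() > 1 {
        eprintln!(
            "warning: expected at most one instance, got {}",
            instances.len()
        );
    }
    instances.first().copied()
}

fn main() {
    assert_eq!(latest_mono::<f32>(&[]), None);
    assert_eq!(latest_mono(&[1.5f32]), Some(1.5));
    assert_eq!(latest_mono(&[1.5f32, 2.5]), Some(1.5)); // warns, still returns first
    println!("ok");
}
```

The `_quiet` variant documented next behaves the same but suppresses the warning.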
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
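"The entity and all its children, recursively" amounts to summing per-entity stats over a subtree. A std-only sketch under assumed types — slash-separated path strings and plain byte counts stand in for the real `EntityPath` and `ChunkStoreChunkStats`:

```rust
use std::collections::HashMap;

// Illustrative model: per-entity byte counts keyed by slash-separated paths;
// subtree stats sum the entity itself plus every descendant (prefix match).
fn subtree_stats(stats: &HashMap<&str, u64>, entity_path: &str) -> u64 {
    let prefix = format!("{entity_path}/");
    let mut total = 0;
    for (path, bytes) in stats {
        if *path == entity_path || path.starts_with(prefix.as_str()) {
            total += bytes;
        }
    }
    total
}

fn main() {
    let stats: HashMap<&str, u64> = [
        ("world", 10),
        ("world/points", 20),
        ("world/points/colors", 5),
        ("other", 99),
    ]
    .into_iter()
    .collect();
    assert_eq!(subtree_stats(&stats, "world"), 35);
    assert_eq!(subtree_stats(&stats, "world/points"), 25);
    println!("ok");
}
```

The prefix check with a trailing `/` keeps `worldly` from matching under `world`.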
Leaving this disabled can lead to major performance improvements on the ingestion path in some workloads, provided that the subscribers aren’t needed (e.g. headless mode).
-What is the threshold, in bytes, after which a Chunk
cannot be compacted any further?
What is the threshold, in bytes, after which a [Chunk
] cannot be compacted any further?
This is a multi-dimensional trade-off:
- Larger chunks lead to less fixed overhead introduced by metadata, indices and such. Good. @@ -20,7 +20,7 @@ which is the default row threshold. The default byte threshold is set to 8MiB, which is a reasonable unit of work when e.g. sending chunks over the network. -
What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows_if_unsorted
.
This is a multi-dimensional trade-off:
@@ -35,7 +35,7 @@
which is the default row threshold.
The default byte threshold is set to 8MiB, which is a reasonable unit of work
when e.g. sending chunks over the network.
-What is the threshold, in rows, after which a Chunk
cannot be compacted any further?
What is the threshold, in rows, after which a [Chunk
] cannot be compacted any further?
This specifically applies to non time-sorted chunks.
See also ChunkStoreConfig::chunk_max_rows
.
This is a multi-dimensional trade-off:
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
index 23de0196fb1f..14426fc014f5 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreDiff.html
@@ -1,12 +1,12 @@
Struct re_chunk_store::ChunkStoreDiff
source · pub struct ChunkStoreDiff {
pub kind: ChunkStoreDiffKind,
- pub chunk: Arc<Chunk>,
+ pub chunk: Arc<Chunk>,
pub compacted: Option<ChunkCompactionReport>,
}
Expand description
Describes an atomic change in the Rerun ChunkStore
: a chunk has been added or deleted.
From a query model standpoint, the ChunkStore
always operates one chunk at a time:
- The contents of a chunk (i.e. its columns) are immutable past insertion, by virtue of
-
ChunkId
s being unique and non-reusable.
+[
ChunkId
]s being unique and non-reusable.
- Similarly, garbage collection always removes all the data associated with a chunk in one go: there cannot be orphaned columns. When a chunk is gone, all data associated with it is gone too.
chunk: Arc<Chunk>
The chunk that was added or removed.
+chunk: Arc<Chunk>
The chunk that was added or removed.
If the addition of a chunk to the store triggered a compaction, that chunk pre-compaction is what will be exposed here. This allows subscribers to only process data that is new, as opposed to having to reprocess old rows that appear to have been removed and then reinserted due to compaction.
To keep track of what chunks were merged with what chunks, use the ChunkStoreDiff::compacted
field below.
compacted: Option<ChunkCompactionReport>
§compacted: Option<ChunkCompactionReport>
Reports which [Chunk
]s were merged into a new [Chunk
] during a compaction.
This is only specified if an addition to the store triggered a compaction.
When that happens, it is guaranteed that ChunkStoreDiff::chunk
will be present in the
set of source chunks below, since it was compacted on arrival.
A corollary to that is that the destination Chunk
must have never been seen before,
-i.e. it’s ChunkId
must have never been seen before.
A corollary to that is that the destination [Chunk
] must have never been seen before,
+i.e. its [ChunkId
] must have never been seen before.
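The two guarantees above — the chunk whose addition triggered the compaction appears among the sources, and the destination chunk carries a never-before-seen id — can be modeled in a few std-only lines. The types below are invented stand-ins, not the real `ChunkStoreDiff` or `ChunkCompactionReport`:

```rust
use std::collections::HashSet;

// Illustrative model of the compaction guarantees described above.
struct CompactionReport {
    srcs: Vec<u64>, // ids of the chunks that were merged away
    dst: u64,       // id of the freshly created destination chunk
}

fn check_compaction(seen: &HashSet<u64>, added: u64, report: &CompactionReport) -> bool {
    // The compacted-on-arrival chunk must be among the sources,
    // and the destination id must be brand new.
    report.srcs.contains(&added) && !seen.contains(&report.dst)
}

fn main() {
    let seen: HashSet<u64> = [1, 2, 3].into_iter().collect();
    let ok = CompactionReport { srcs: vec![2, 3, 4], dst: 5 };
    assert!(check_compaction(&seen, 4, &ok));
    // A destination id that was seen before violates the invariant.
    let bad = CompactionReport { srcs: vec![2, 3, 4], dst: 1 };
    assert!(!check_compaction(&seen, 4, &bad));
    println!("ok");
}
```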
Implementations§
source§impl ChunkStoreDiff
impl ChunkStoreDiff
pub fn addition(
- chunk: Arc<Chunk>,
+ chunk: Arc<Chunk>,
compacted: Option<ChunkCompactionReport>,
-) -> Self
pub fn deletion(chunk: Arc<Chunk>) -> Self
pub fn is_static(&self) -> bool
Trait Implementations§
source§impl Clone for ChunkStoreDiff
impl Clone for ChunkStoreDiff
source§fn clone(&self) -> ChunkStoreDiff
fn clone(&self) -> ChunkStoreDiff
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for ChunkStoreDiff
impl Debug for ChunkStoreDiff
source§impl PartialEq for ChunkStoreDiff
impl PartialEq for ChunkStoreDiff
impl Eq for ChunkStoreDiff
Auto Trait Implementations§
impl Freeze for ChunkStoreDiff
impl !RefUnwindSafe for ChunkStoreDiff
impl Send for ChunkStoreDiff
impl Sync for ChunkStoreDiff
impl Unpin for ChunkStoreDiff
impl !UnwindSafe for ChunkStoreDiff
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
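As a sketch of how these options compose, starting from the most aggressive preset and carving out protections (types are as re-exported by the rerun crates; the timeline and range values are illustrative assumptions):

```rust
use std::collections::HashMap;

use re_chunk_store::GarbageCollectionOptions;
use re_log_types::{ResolvedTimeRange, Timeline};

fn example_gc_options() -> GarbageCollectionOptions {
    // `gc_everything()` removes as much as possible...
    let mut options = GarbageCollectionOptions::gc_everything();

    // ...except the latest component revision on each timeline...
    options.protect_latest = 1;

    // ...and anything inside protected time ranges (values illustrative).
    let mut protected = HashMap::new();
    protected.insert(Timeline::log_time(), ResolvedTimeRange::new(0, 1_000_000));
    options.protected_time_ranges = protected;

    options
}
```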
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
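Following that note, installation belongs at the top of `main` in a binary crate, never in a library. A sketch, assuming the `re_build_info::build_info!` macro for obtaining the `BuildInfo`:

```rust
use re_build_info::build_info;

fn main() {
    // Install panic + signal handlers as early as possible,
    // and only in a binary crate, per the note above.
    re_crash_handler::install_crash_handlers(build_info!());

    // ... rest of the application ...
}
```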
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
index 29f140d21640..11efa396a68d 100644
--- a/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
+++ b/docs/rust/head/re_chunk_store/struct.ChunkStoreEvent.html
@@ -5,7 +5,7 @@
pub diff: ChunkStoreDiff,
}Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
§protect_latest: usize
How many component revisions to preserve on each timeline.
§protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
sourcepub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
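The protection rule above ("do not remove any data within these time ranges") amounts to an interval-overlap test. A hedged sketch of the assumed semantics, with time ranges reduced to plain `(min, max)` pairs rather than Rerun's actual types:

```rust
// A chunk is protected from GC if its time range overlaps any protected
// range on the same timeline (illustrative semantics, not the real check).
fn is_protected(chunk: (i64, i64), protected: &[(i64, i64)]) -> bool {
    protected
        .iter()
        .any(|&(lo, hi)| chunk.0 <= hi && chunk.1 >= lo)
}
```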
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moreAuto Trait Implementations§
§impl Freeze for GarbageCollectionOptions
§impl RefUnwindSafe for GarbageCollectionOptions
§impl Send for GarbageCollectionOptions
§impl Sync for GarbageCollectionOptions
§impl Unpin for GarbageCollectionOptions
§impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
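The "lazily computed, cheap to create" pattern described for `QueryHandleState` can be sketched with `std::sync::OnceLock` (a generic illustration of the pattern, not the actual `QueryHandle` code; all names are made up):

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::OnceLock;

// Creating the handle does no work; the state is computed on first
// access and cached for every subsequent call.
struct LazyHandle {
    init_count: AtomicU32,
    state: OnceLock<Vec<u32>>,
}

impl LazyHandle {
    fn new() -> Self {
        LazyHandle {
            init_count: AtomicU32::new(0),
            state: OnceLock::new(),
        }
    }

    fn init(&self) -> &Vec<u32> {
        self.state.get_or_init(|| {
            // The expensive initialization runs here, exactly once.
            self.init_count.fetch_add(1, Ordering::Relaxed);
            vec![1, 2, 3]
        })
    }
}
```

`OnceLock::get_or_init` also makes the initialization safe under concurrent access, which matches the shared-reference `&self` signature of `init`.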
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
while let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
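The atomic-counter-as-cursor pattern described above can be shown in miniature: an `AtomicU64` advances through rows from `&self`, which is why a shared handle can yield rows without a mutable borrow. This is a simplified, hypothetical model, not the actual state machine:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Minimal stand-in for the global row cursor: each call yields the next
/// index value, advancing the shared counter atomically.
struct RowCursor {
    cur_row: AtomicU64,
}

impl RowCursor {
    fn next_row<'a>(&self, unique_index_values: &'a [i64]) -> Option<&'a i64> {
        let row = self.cur_row.fetch_add(1, Ordering::Relaxed) as usize;
        unique_index_values.get(row)
    }
}

fn main() {
    let cursor = RowCursor { cur_row: AtomicU64::new(0) };
    let unique_index_values = [10, 20, 30];
    assert_eq!(cursor.next_row(&unique_index_values), Some(&10));
    assert_eq!(cursor.next_row(&unique_index_values), Some(&20));
    println!("ok");
}
```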
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
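The "ascendingly sorted and deduped" guarantee can be reproduced with plain std calls when merging index values gathered from many chunks (a hypothetical sketch with `i64` index values, not the real collection code):

```rust
/// Collect every index value seen across chunks into one ascending,
/// duplicate-free list, matching the field's guarantee.
fn unique_index_values(per_chunk: &[Vec<i64>]) -> Vec<i64> {
    let mut all: Vec<i64> = per_chunk.iter().flatten().copied().collect();
    all.sort_unstable(); // ascendingly sorted
    all.dedup(); // deduped (dedup requires sorted input)
    all
}

fn main() {
    let chunks = vec![vec![3, 1, 2], vec![2, 4]];
    assert_eq!(unique_index_values(&chunks), vec![1, 2, 3, 4]);
    println!("ok");
}
```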
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
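That two-tier ordering (time columns first, then component columns, each group in lexical order) can be expressed with a single sort key. This is a hypothetical column model for illustration, not the real `ColumnDescriptor`:

```rust
#[derive(Debug, PartialEq, Eq, Clone)]
enum Column {
    Time(String),
    Component(String),
}

/// Sort time columns before component columns; inside each group, lexically.
fn ordered(mut columns: Vec<Column>) -> Vec<Column> {
    columns.sort_by_key(|c| match c {
        Column::Time(name) => (0, name.clone()),
        Column::Component(name) => (1, name.clone()),
    });
    columns
}

fn main() {
    let cols = vec![
        Column::Component("Radius".into()),
        Column::Time("log_time".into()),
        Column::Component("Color".into()),
        Column::Time("frame_nr".into()),
    ];
    let sorted = ordered(cols);
    assert_eq!(sorted[0], Column::Time("frame_nr".into()));
    assert_eq!(sorted[1], Column::Time("log_time".into()));
    assert_eq!(sorted[2], Column::Component("Color".into()));
    assert_eq!(sorted[3], Column::Component("Radius".into()));
    println!("ok");
}
```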
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}
Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
get a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles loose arbitrarily all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
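Latest-at semantics boil down to "the most recent value at or before the query time, if any". A hedged sketch over simple `(time, value)` rows (not the actual cached query machinery):

```rust
/// The latest value at or before `query_time`, if any row qualifies.
fn latest_at(rows: &[(i64, &'static str)], query_time: i64) -> Option<&'static str> {
    rows.iter()
        .filter(|(t, _)| *t <= query_time) // ignore the future
        .max_by_key(|(t, _)| *t) // keep the most recent match
        .map(|(_, v)| *v)
}

fn main() {
    let rows = [(1, "a"), (3, "b"), (7, "c")];
    assert_eq!(latest_at(&rows, 5), Some("b")); // latest at-or-before 5 is t=3
    assert_eq!(latest_at(&rows, 0), None); // nothing logged yet
    println!("ok");
}
```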
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>
sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
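The drop semantics can be sketched as a `retain` over events on one timeline, keeping only what falls outside the inclusive range (a hypothetical event model, not the actual chunk-store code):

```rust
/// Remove every event whose time lies inside the inclusive [min, max] range.
fn drop_time_range(events: &mut Vec<(i64, &'static str)>, min: i64, max: i64) {
    events.retain(|(t, _)| *t < min || *t > max);
}

fn main() {
    let mut events = vec![(1, "keep"), (5, "drop"), (9, "keep")];
    drop_time_range(&mut events, 4, 6);
    assert_eq!(events, vec![(1, "keep"), (9, "keep")]);
    println!("ok");
}
```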
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
Expand description
The atomic unit of change in the Rerun ChunkStore
.
A ChunkStoreEvent
describes the changes caused by the addition or deletion of a
-Chunk
in the store.
+[Chunk
] in the store.
Methods that mutate the ChunkStore
, such as ChunkStore::insert_chunk
and ChunkStore::gc
,
return ChunkStoreEvent
s that describe the changes.
You can also register your own ChunkStoreSubscriber
in order to be notified of changes as soon as they
diff --git a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
index 121c5ec88658..e1c0ad9d1eb7 100644
--- a/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
+++ b/docs/rust/head/re_chunk_store/struct.GarbageCollectionOptions.html
@@ -15,7 +15,7 @@
The default is an unbounded time budget (i.e. throughput only).
protect_latest: usize
How many component revisions to preserve on each timeline.
protected_time_ranges: HashMap<Timeline, ResolvedTimeRange>
Do not remove any data within these time ranges.
-Implementations§
source§impl GarbageCollectionOptions
impl GarbageCollectionOptions
pub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
pub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
+Implementations§
source§impl GarbageCollectionOptions
impl GarbageCollectionOptions
pub fn gc_everything() -> Self
sourcepub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
pub fn is_chunk_protected(&self, chunk: &Chunk) -> bool
If true, we cannot remove this chunk.
Trait Implementations§
source§impl Clone for GarbageCollectionOptions
impl Clone for GarbageCollectionOptions
source§fn clone(&self) -> GarbageCollectionOptions
fn clone(&self) -> GarbageCollectionOptions
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
Auto Trait Implementations§
impl Freeze for GarbageCollectionOptions
impl RefUnwindSafe for GarbageCollectionOptions
impl Send for GarbageCollectionOptions
impl Sync for GarbageCollectionOptions
impl Unpin for GarbageCollectionOptions
impl UnwindSafe for GarbageCollectionOptions
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
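The schema ordering documented above (time columns first in lexical order, then component columns in lexical order) can be sketched with a tiny comparator in std-only Rust. The `Column` enum below is a hypothetical stand-in for the real `ColumnDescriptor`, used only to illustrate the ordering rule:

```rust
// Hypothetical, simplified column model -- not the real re_dataframe
// `ColumnDescriptor`. It only illustrates the documented ordering:
// time columns first (lexical), then component columns (lexical).
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Column {
    Time(String),      // e.g. "frame_nr", "log_time"
    Component(String), // e.g. "Color", "Radius"
}

fn sort_schema(columns: &mut Vec<Column>) {
    // Deriving `Ord` on the enum already places every `Time` variant
    // before every `Component` variant, and sorts lexically by the
    // contained name within each group.
    columns.sort();
}

fn main() {
    let mut cols = vec![
        Column::Component("Radius".to_owned()),
        Column::Time("log_time".to_owned()),
        Column::Component("Color".to_owned()),
        Column::Time("frame_nr".to_owned()),
    ];
    sort_schema(&mut cols);
    // Time columns first, then components, each group lexically sorted.
    println!("{cols:?}");
}
```

The derive-based ordering works because Rust orders enum variants by declaration order before comparing their fields.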
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
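The lazy-initialization pattern described for `QueryHandle` above (private state in a `OnceLock`, computed at most once, so handles stay cheap to create) can be sketched in std-only Rust. The `Handle` and `QueryState` names below are hypothetical stand-ins, not the real API:

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for the private `QueryHandleState`:
// the point is the pattern, not the real fields.
struct QueryState {
    schema: Vec<String>,
}

// Construction stores only the inputs; the expensive state is
// computed at most once, on first access.
struct Handle {
    query: String,
    state: OnceLock<QueryState>,
}

impl Handle {
    fn new(query: impl Into<String>) -> Self {
        // Cheap: no planning work happens here.
        Self {
            query: query.into(),
            state: OnceLock::new(),
        }
    }

    fn init(&self) -> &QueryState {
        // `get_or_init` runs the closure at most once, even under
        // concurrent access; later calls return the cached reference.
        self.state.get_or_init(|| QueryState {
            schema: vec![format!("column for {}", self.query)],
        })
    }

    fn schema(&self) -> &[String] {
        &self.init().schema
    }
}

fn main() {
    let handle = Handle::new("my_query"); // cheap to create
    println!("{:?}", handle.schema());    // state computed here, lazily
}
```

Because `init` takes `&self`, the state can be materialized through a shared reference, which is what keeps handle creation free of any query planning.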
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
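The seek semantics described above (index values ascendingly sorted and deduped; seeking to an absent value lands on the first value past it; a single atomic counter tracks the current row) can be sketched with the standard library alone. This is an illustrative model under those assumptions, not the real implementation:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical sketch: `unique_index_values` is sorted + deduped,
// and `cur_row` is the cursor into it, as the docs above describe.
struct Cursor {
    unique_index_values: Vec<i64>,
    cur_row: AtomicU64,
}

impl Cursor {
    fn seek_to_index_value(&self, index_value: i64) {
        // `partition_point` is a binary search: it returns the first
        // position whose value is >= `index_value`, so if the exact
        // value is absent we land on the first value past it, if any.
        let row = self
            .unique_index_values
            .partition_point(|&v| v < index_value);
        self.cur_row.store(row as u64, Ordering::Relaxed);
    }

    fn next_row(&self) -> Option<i64> {
        // Atomically step the cursor and yield the row it pointed at.
        let row = self.cur_row.fetch_add(1, Ordering::Relaxed) as usize;
        self.unique_index_values.get(row).copied()
    }
}

fn main() {
    let cursor = Cursor {
        unique_index_values: vec![10, 20, 40],
        cur_row: AtomicU64::new(0),
    };
    cursor.seek_to_index_value(30); // 30 is absent: lands on 40
    println!("{:?}", cursor.next_row()); // Some(40)
    println!("{:?}", cursor.next_row()); // None
}
```

A `seek_to_index_value` call is a single binary search over the sorted values, which matches the "pretty cheap already" characterization in the docs.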
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles loose arbitrarily all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
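The guard pattern described above can be sketched with a plain `std::sync::RwLock`: writes are blocked for as long as any read guard (or clone of it) is alive. This is a minimal illustration of the locking discipline, not the actual `StorageEngine` implementation; the `can_write` helper is hypothetical.

```rust
use std::sync::{Arc, RwLock};

/// Hypothetical helper: can we currently take a write lock?
fn can_write(lock: &RwLock<Vec<i32>>) -> bool {
    lock.try_write().is_ok()
}

fn main() {
    // Stand-in for the storage engine's internal state.
    let engine = Arc::new(RwLock::new(vec![1, 2, 3]));

    {
        // A read guard: many readers may coexist, but no writer can
        // mutate the data until every guard is dropped.
        let guard = engine.read().unwrap();
        assert_eq!(guard.len(), 3);

        // Attempting a write while the read guard is alive fails.
        assert!(!can_write(&engine));
    } // guard dropped here

    // With all guards gone, writes proceed again.
    engine.write().unwrap().push(4);
    assert_eq!(engine.read().unwrap().len(), 4);
}
```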
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResults
+Queries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
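Latest-at semantics — "the most recent value logged at or before the query time" — can be illustrated with an ordered map, independently of the actual store internals. The `latest_at` function below is a hypothetical stand-in for the real query path.

```rust
use std::collections::BTreeMap;

/// Latest-at lookup sketch: the newest entry at or before `query_time`.
fn latest_at(timeline: &BTreeMap<i64, u32>, query_time: i64) -> Option<(i64, u32)> {
    timeline
        .range(..=query_time) // everything logged up to the query time
        .next_back()          // ...of which we want the most recent
        .map(|(t, v)| (*t, *v))
}

fn main() {
    let mut timeline = BTreeMap::new();
    timeline.insert(10, 1); // value 1 logged at t=10
    timeline.insert(20, 2); // value 2 logged at t=20

    // A query at t=15 resolves to the value logged at t=10.
    assert_eq!(latest_at(&timeline, 15), Some((10, 1)));
    // A query before any data yields nothing.
    assert_eq!(latest_at(&timeline, 5), None);
}
```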
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
+Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
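The mono-component contract — at most one instance per row, with a warning otherwise — can be sketched as a small extraction helper. `latest_mono` is hypothetical; the real API returns the typed component from the store.

```rust
/// Mono-component extraction sketch: expect at most one instance;
/// warn (here via stderr) and take the first if there are more.
fn latest_mono<T: Clone>(instances: &[T]) -> Option<T> {
    if instances.len() > 1 {
        eprintln!("expected a mono-component, got {} instances", instances.len());
    }
    instances.first().cloned()
}

fn main() {
    assert_eq!(latest_mono::<u32>(&[]), None);
    assert_eq!(latest_mono(&[7]), Some(7));
    assert_eq!(latest_mono(&[1, 2]), Some(1)); // warns, returns the first
}
```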
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
+Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>
+sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
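The generation pattern described here amounts to a counter that is bumped on every mutation and compared to detect staleness. A minimal sketch, using hypothetical `Store` and `Generation` types in place of the real ones:

```rust
/// Hypothetical stand-in for `ChunkStoreGeneration`: a counter
/// bumped on every mutation, compared to detect staleness.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Generation(u64);

struct Store {
    rows: Vec<u64>,
    generation: Generation,
}

impl Store {
    fn new() -> Self {
        Self { rows: Vec::new(), generation: Generation(0) }
    }

    fn insert(&mut self, row: u64) {
        self.rows.push(row);
        self.generation.0 += 1; // any write invalidates cached views
    }

    fn generation(&self) -> Generation {
        self.generation
    }
}

fn main() {
    let mut store = Store::new();
    let seen = store.generation();
    assert_eq!(seen, store.generation()); // unchanged: caches still valid

    store.insert(42);
    assert_ne!(seen, store.generation()); // changed: time to re-query
}
```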
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>
sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>
+Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>
+Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
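Dropping a time range from a timeline, and reporting what was dropped (as `drop_time_range` does via `ChunkStoreEvent`s), can be sketched over an ordered map. The function below is an illustrative stand-in, not the store's actual implementation.

```rust
use std::collections::BTreeMap;
use std::ops::RangeInclusive;

/// Drop all events whose time falls inside `range`,
/// returning the dropped (time, value) pairs.
fn drop_time_range(
    timeline: &mut BTreeMap<i64, char>,
    range: RangeInclusive<i64>,
) -> Vec<(i64, char)> {
    // Collect the affected keys first, then remove them.
    let keys: Vec<i64> = timeline.range(range).map(|(t, _)| *t).collect();
    keys.into_iter()
        .filter_map(|t| timeline.remove(&t).map(|v| (t, v)))
        .collect()
}

fn main() {
    let mut timeline = BTreeMap::from([(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]);
    let dropped = drop_time_range(&mut timeline, 2..=3);
    assert_eq!(dropped, vec![(2, 'b'), (3, 'c')]);
    assert_eq!(timeline.len(), 2); // only t=1 and t=4 remain
}
```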
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
+Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStats
+Returns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
index a9d0270f208a..eb410b83e45c 100644
--- a/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
+++ b/docs/rust/head/re_crash_handler/fn.install_crash_handlers.html
@@ -1,4 +1,4 @@
-install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
+
install_crash_handlers in re_crash_handler - Rust Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes)
that prints helpful messages and sends anonymous analytics.
NOTE: only install these in binaries!
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
+install_panic_hook in re_crash_handler - Rust Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
index 6ec55245d18b..b51d51db4947 100644
--- a/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
+++ b/docs/rust/head/re_crash_handler/fn.install_signal_handler.html
@@ -1 +1 @@
-install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
+install_signal_handler in re_crash_handler - Rust Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
\ No newline at end of file
diff --git a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
index 996c1261069c..f53e2f362779 100644
--- a/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
+++ b/docs/rust/head/re_crash_handler/static.BUILD_INFO.html
@@ -1 +1 @@
-BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
+BUILD_INFO in re_crash_handler - Rust Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
index 6225f6cad705..c0f225fc6da4 100644
--- a/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/engine/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
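The two-tier ordering rule above (time columns first, then component columns, each group lexically sorted) can be expressed as a sort key. The `Column` enum here is a hypothetical simplification of the real column descriptors.

```rust
/// Hypothetical column kinds, mirroring the ordering rule: all time
/// columns first (lexically), then all component columns (lexically).
#[derive(Debug, PartialEq, Eq, Clone)]
enum Column {
    Time(String),
    Component(String),
}

fn sort_schema(mut cols: Vec<Column>) -> Vec<Column> {
    cols.sort_by_key(|c| match c {
        Column::Time(name) => (0, name.clone()),      // tier 0: time columns
        Column::Component(name) => (1, name.clone()), // tier 1: component columns
    });
    cols
}

fn main() {
    let cols = vec![
        Column::Component("Radius".into()),
        Column::Time("log_time".into()),
        Column::Component("Color".into()),
        Column::Time("frame_nr".into()),
    ];
    let sorted = sort_schema(cols);
    assert_eq!(sorted[0], Column::Time("frame_nr".into()));
    assert_eq!(sorted[1], Column::Time("log_time".into()));
    assert_eq!(sorted[2], Column::Component("Color".into()));
    assert_eq!(sorted[3], Column::Component("Radius".into()));
}
```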
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
+Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
+sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
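Since `unique_index_values` is sorted and deduped, "seek to `index_value`, or the first value past it" is a binary search for the lower bound. A sketch with a hypothetical `seek` helper over a plain slice:

```rust
/// Seek semantics sketch: position the cursor at the first index value
/// `>= target` in a sorted, deduped list (as `unique_index_values` is).
fn seek(unique_index_values: &[i64], target: i64) -> usize {
    // `partition_point` runs a binary search for the lower bound.
    unique_index_values.partition_point(|&v| v < target)
}

fn main() {
    let values = [10, 20, 30];
    assert_eq!(seek(&values, 20), 1); // exact hit
    assert_eq!(seek(&values, 25), 2); // first value past 25 is 30
    assert_eq!(seek(&values, 99), 3); // past the end: nothing left to yield
}
```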
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
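The "cheap to create, lazily computed" pattern described here matches `std::sync::OnceLock`, which is what the `state` field uses. A minimal self-contained sketch (the `String` state is a stand-in, not the real `QueryHandleState`):

```rust
use std::sync::OnceLock;

// Sketch of the lazy-initialization pattern: constructing the handle does no
// work; the expensive state is computed once, on first use, then cached.
struct Handle {
    state: OnceLock<String>,
}

impl Handle {
    fn new() -> Self {
        Self { state: OnceLock::new() } // no work done yet
    }

    fn init(&self) -> &String {
        // Computed at most once, even across threads; later calls are a cheap read.
        self.state.get_or_init(|| "expensive state".to_owned())
    }
}

fn main() {
    let handle = Handle::new();
    assert_eq!(handle.init(), "expensive state");
    assert_eq!(handle.init(), "expensive state"); // second call reuses the cached value
    println!("ok");
}
```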
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
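The documented seek semantics (land on `index_value`, or on the first index past it when the exact value is absent) are exactly what a binary search over a sorted index column gives you. A minimal sketch with illustrative data, using `partition_point`:

```rust
// Sketch of the documented seek semantics: position a cursor at `index_value`,
// or at the first index past it when the exact value is absent.
// Assumes a sorted column of index values, as chunk time columns are.
fn seek(sorted: &[i64], index_value: i64) -> usize {
    // Binary search: number of elements strictly below `index_value`.
    sorted.partition_point(|&v| v < index_value)
}

fn main() {
    let column = [10, 20, 30, 40];
    assert_eq!(seek(&column, 20), 1); // exact hit
    assert_eq!(seek(&column, 25), 2); // absent: cursor lands on the first value past it (30)
    assert_eq!(seek(&column, 99), 4); // past the end: nothing left to yield
    println!("ok");
}
```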
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
while let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles arbitrarily loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
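Latest-at semantics mean: for the query time, return the most recent value logged at or before that time. A minimal self-contained sketch of that lookup over one illustrative column (a `BTreeMap` stands in for the store's indexed data):

```rust
use std::collections::BTreeMap;

// Sketch of latest-at semantics: the most recent value at or before `query_time`.
fn latest_at(column: &BTreeMap<i64, &'static str>, query_time: i64) -> Option<(i64, &'static str)> {
    // Walk the range up to and including the query time, take the last entry.
    column.range(..=query_time).next_back().map(|(&t, &v)| (t, v))
}

fn main() {
    let mut column = BTreeMap::new();
    column.insert(10, "red");
    column.insert(30, "blue");
    assert_eq!(latest_at(&column, 20), Some((10, "red"))); // between rows: earlier one wins
    assert_eq!(latest_at(&column, 30), Some((30, "blue"))); // exact hit
    assert_eq!(latest_at(&column, 5), None); // nothing logged yet
    println!("ok");
}
```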
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>
sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
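The range-drop described here amounts to removing every event whose time falls inside the (here assumed inclusive) range on one timeline. A minimal sketch over illustrative data:

```rust
// Sketch of dropping a time range from one timeline: every event whose time
// falls inside the inclusive [min, max] range is removed.
fn drop_time_range(events: &mut Vec<(i64, &str)>, min: i64, max: i64) {
    events.retain(|&(t, _)| t < min || t > max);
}

fn main() {
    let mut events = vec![(1, "a"), (5, "b"), (9, "c")];
    drop_time_range(&mut events, 4, 6); // erase the middle event
    assert_eq!(events, [(1, "a"), (9, "c")]);
    println!("{events:?}");
}
```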
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)
sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes) +
Function re_crash_handler::install_crash_handlers
source · pub fn install_crash_handlers(build_info: BuildInfo)
Expand description
Install handlers for panics and signals (crashes) that print helpful messages and send anonymous analytics.
NOTE: only install these in binaries!
-
diff --git a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
index 01b7a3ba3ef9..f510f12b6d82 100644
--- a/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
+++ b/docs/rust/head/re_crash_handler/fn.install_panic_hook.html
@@ -1 +1 @@
-
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute aQueryExpression
on the recording.
Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
Function re_crash_handler::install_panic_hook
source · pub(crate) fn install_panic_hook(_build_info: BuildInfo)
Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
Function re_crash_handler::install_signal_handler
source · pub(crate) fn install_signal_handler(build_info: BuildInfo)
Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
Static re_crash_handler::BUILD_INFO
source · pub(crate) static BUILD_INFO: Mutex<Option<BuildInfo>>
See the following methods:
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
impl QueryEngine<StorageEngine>
pub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
pub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
impl QueryEngine<StorageEngine>
pub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
pub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
pub fn from_rrd_filepath(store_config: &ChunkStoreConfig, path_to_rrd: impl AsRef<Path>, version_policy: VersionPolicy) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
pub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@frame_nr
, log_time
, …);Color
, Radius, ...
).sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
pub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
pub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
,log_time
, …); - second, the component columns in lexical order (
Color
,Radius, ...
).
sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
pub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
pub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
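The seek behavior described above (land on `index_value` if present, otherwise on the first index value past it) is exactly what a binary search over a sorted, deduped index gives you. A minimal std-only sketch, using `slice::partition_point` and a hypothetical cursor in place of the real internal state:

```rust
// Sketch only: `unique_index_values` stands in for the handle's sorted,
// deduped index; `cur_row` stands in for its internal cursor.
fn seek_to_index_value(unique_index_values: &[i64], cur_row: &mut usize, index_value: i64) {
    // partition_point is a binary search: first position whose value is >= target.
    *cur_row = unique_index_values.partition_point(|&v| v < index_value);
}

fn main() {
    let index = vec![10, 20, 30, 50];
    let mut cur_row = 0;

    seek_to_index_value(&index, &mut cur_row, 30);
    assert_eq!(cur_row, 2); // exact hit

    seek_to_index_value(&index, &mut cur_row, 40);
    assert_eq!(cur_row, 3); // not present: first value past it (50)

    seek_to_index_value(&index, &mut cur_row, 60);
    assert_eq!(cur_row, 4); // past the end: nothing left to yield
}
```

This also illustrates why the operation is cheap: it is one binary search per relevant chunk, nothing more.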
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
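The `cur_row` counter above is an `AtomicU64` indexing into `unique_index_values`. A minimal sketch of that cursor pattern (the struct and field names here are stand-ins, not the real `QueryHandleState`): each call claims the next row atomically, so two concurrent callers never yield the same row.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical stand-in for the handle's cursor state.
struct Cursor {
    cur_row: AtomicU64,
    unique_index_values: Vec<i64>, // sorted ascending and deduped, per the docs
}

impl Cursor {
    // fetch_add returns the *previous* value, so each caller gets a unique row.
    fn next_row(&self) -> Option<i64> {
        let row = self.cur_row.fetch_add(1, Ordering::Relaxed) as usize;
        self.unique_index_values.get(row).copied()
    }
}

fn main() {
    let c = Cursor {
        cur_row: AtomicU64::new(0),
        unique_index_values: vec![10, 20, 30],
    };
    assert_eq!(c.next_row(), Some(10));
    assert_eq!(c.next_row(), Some(20));
    assert_eq!(c.next_row(), Some(30));
    assert_eq!(c.next_row(), None); // exhausted
}
```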
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
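The documented column ordering (time columns first in lexical order, then component columns in lexical order) amounts to sorting on a two-part key. A sketch with a hypothetical stand-in for `ColumnDescriptor`:

```rust
// Hypothetical stand-in for ColumnDescriptor, just to show the ordering rule.
#[derive(Debug, PartialEq, Clone)]
enum Column {
    Time(&'static str),
    Component(&'static str),
}

fn sort_columns(cols: &mut [Column]) {
    cols.sort_by_key(|c| match c {
        Column::Time(name) => (0, *name),      // all time columns sort before…
        Column::Component(name) => (1, *name), // …all component columns
    });
}

fn main() {
    let mut cols = vec![
        Column::Component("Radius"),
        Column::Time("log_time"),
        Column::Component("Color"),
        Column::Time("frame_nr"),
    ];
    sort_columns(&mut cols);
    assert_eq!(
        cols,
        vec![
            Column::Time("frame_nr"),
            Column::Time("log_time"),
            Column::Component("Color"),
            Column::Component("Radius"),
        ]
    );
}
```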
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
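The field list above shows how this is achieved: the expensive state sits behind a `std::sync::OnceLock`, so constructing a handle allocates nothing heavy and the state is computed on first use. A minimal sketch of that pattern, with `HeavyState` as a hypothetical stand-in for `QueryHandleState`:

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for QueryHandleState: expensive to build, built once.
struct HeavyState {
    rows: Vec<u64>,
}

struct Handle {
    state: OnceLock<HeavyState>, // creating a Handle stays cheap
}

impl Handle {
    fn new() -> Self {
        Self { state: OnceLock::new() }
    }

    // First caller pays the cost; later callers get the cached reference.
    fn init(&self) -> &HeavyState {
        self.state.get_or_init(|| HeavyState { rows: (0..4).collect() })
    }
}

fn main() {
    let handle = Handle::new();
    assert_eq!(handle.init().rows.len(), 4);
    // A second call returns the very same lazily-computed state.
    assert!(std::ptr::eq(handle.init(), handle.init()));
}
```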
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles run arbitrarily loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
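The "mono-component" contract described above (at most one instance per row; warn otherwise but still return the first) can be sketched as follows. The function and the `eprintln!` warning are illustrative stand-ins, not the actual `re_entity_db` implementation:

```rust
// Hypothetical sketch: take the latest row's instances, expect at most one,
// warn (here via eprintln) if there are more, and return the first.
fn latest_at_mono<T: Clone>(instances: &[T], entity_path: &str) -> Option<T> {
    if instances.len() > 1 {
        eprintln!(
            "entity {entity_path:?}: expected a mono-component, got {} instances",
            instances.len()
        );
    }
    instances.first().cloned()
}

fn main() {
    assert_eq!(latest_at_mono(&[7_i32], "world/cam"), Some(7));
    assert_eq!(latest_at_mono::<i32>(&[], "world/cam"), None);
    // More than one instance: still returns the first, but logs a warning.
    assert_eq!(latest_at_mono(&[1_i32, 2], "world/points"), Some(1));
}
```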
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
pub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
index 7c570bc1091c..913e4409b2f3 100644
--- a/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/engine/type.RecordBatch.html
@@ -1,4 +1,4 @@
-RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunks for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
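The seek rule described here (land on the exact index value if present, otherwise on the first value past it) is a plain partition-point lookup over the sorted, deduped index values. A std-only sketch; the function name and values are illustrative, not the real QueryHandle internals:

```rust
// `unique_index_values` is guaranteed sorted ascending and deduped (per the docs above).
// Returns the cursor position of the first value >= `index_value`.
fn seek(unique_index_values: &[i64], index_value: i64) -> usize {
    unique_index_values.partition_point(|&v| v < index_value)
}

fn main() {
    let values = [10, 20, 40];
    assert_eq!(seek(&values, 20), 1); // exact hit
    assert_eq!(seek(&values, 30), 2); // not present: first value past it (40)
    assert_eq!(seek(&values, 99), 3); // past the end: cursor exhausted
}
```

`partition_point` is a binary search, matching the "pretty cheap already" cost noted under Performance.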
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr, log_time, …);
second, the component columns in lexical order (Color, Radius, ...).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (frame_nr, log_time, …);
- second, the component columns in lexical order (Color, Radius, ...).
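The ordering rule above (time columns first, then component columns, each group lexically sorted) amounts to sorting on a two-part key. A std-only sketch with made-up column labels, not the actual ColumnDescriptor ordering code:

```rust
fn main() {
    // Each column is (kind, name); the "time"/"component" labels are hypothetical.
    let mut cols = vec![
        ("component", "Radius"),
        ("time", "log_time"),
        ("component", "Color"),
        ("time", "frame_nr"),
    ];
    // Times sort before components (false < true), then lexically by name.
    cols.sort_by_key(|&(kind, name)| (kind == "component", name));
    let names: Vec<&str> = cols.iter().map(|&(_, name)| name).collect();
    assert_eq!(names, vec!["frame_nr", "log_time", "Color", "Radius"]);
}
```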
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunks for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
ⓘwhile let Some(row) = query_handle.next_row() {
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
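The seek_to_row / next_row pairing described above is the basis for pagination. A toy stand-in for the cursor pattern; `Handle` here is a deliberately simplified mock, not the real QueryHandle:

```rust
// Mock cursor: a QueryHandle-like object over a flat list of rows.
struct Handle {
    rows: Vec<u32>,
    cur: usize,
}

impl Handle {
    // Jump the internal cursor so the next yielded row is `row_idx`.
    fn seek_to_row(&mut self, row_idx: usize) {
        self.cur = row_idx.min(self.rows.len());
    }

    // Yield the row under the cursor, advancing it; None once exhausted.
    fn next_row(&mut self) -> Option<u32> {
        let row = self.rows.get(self.cur).copied();
        if row.is_some() {
            self.cur += 1;
        }
        row
    }
}

fn main() {
    let mut handle = Handle { rows: (0..10).collect(), cur: 0 };
    // A page of 3 rows starting at row 4: seek once, then pull rows.
    handle.seek_to_row(4);
    let page: Vec<u32> = std::iter::from_fn(|| handle.next_row()).take(3).collect();
    assert_eq!(page, vec![4, 5, 6]);
}
```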
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally
result from letting store and cache handles loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
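Latest-at semantics, as used by EntityDb::latest_at, resolve to the last value at or before the query time. A std-only sketch over sorted (time, value) rows; the helper name and data are hypothetical:

```rust
// `rows` must be sorted ascending by time; returns the value of the last row
// whose time is <= `query_time`, if any (i.e. "latest at" that point).
fn latest_at<'a>(rows: &[(i64, &'a str)], query_time: i64) -> Option<&'a str> {
    let i = rows.partition_point(|&(t, _)| t <= query_time);
    i.checked_sub(1).map(|i| rows[i].1)
}

fn main() {
    let rows = [(10, "a"), (20, "b"), (40, "c")];
    assert_eq!(latest_at(&rows, 25), Some("b")); // between 20 and 40: latest is "b"
    assert_eq!(latest_at(&rows, 40), Some("c")); // exact hit
    assert_eq!(latest_at(&rows, 5), None);       // before all data
}
```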
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense re_types_core::Component.
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense re_types_core::Component.
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStatsReturns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.
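The "entity and all its children, recursively" aggregation above can be sketched with a toy tree. The `Node` type and the byte-count field are hypothetical; the real `EntityTree` and `ChunkStoreChunkStats` differ.

```rust
/// Toy entity node carrying its own static-store byte count.
struct Node {
    static_bytes: u64,
    children: Vec<Node>,
}

/// Sum the static stats of a node and all of its children, recursively,
/// mirroring the subtree_stats_static contract described above.
fn subtree_stats_static(node: &Node) -> u64 {
    node.static_bytes
        + node.children.iter().map(subtree_stats_static).sum::<u64>()
}
```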
Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe::engine - Rust Type Alias re_dataframe::engine::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_dataframe/external/index.html b/docs/rust/head/re_dataframe/external/index.html
index e1ab8b36b4df..5c00adeb54a9 100644
--- a/docs/rust/head/re_dataframe/external/index.html
+++ b/docs/rust/head/re_dataframe/external/index.html
@@ -1 +1 @@
-re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
+re_dataframe::external - Rust Module re_dataframe::external
source · Re-exports§
pub use re_chunk;
pub use re_chunk_store;
pub use re_log_types;
pub use re_query;
pub use arrow2;
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/index.html b/docs/rust/head/re_dataframe/index.html
index 1b62d95a26ca..d092ce5ff99c 100644
--- a/docs/rust/head/re_dataframe/index.html
+++ b/docs/rust/head/re_dataframe/index.html
@@ -1,2 +1,2 @@
re_dataframe - Rust Crate re_dataframe
source · Expand description
The Rerun public data APIs. Get dataframes back from your Rerun datastore.
-Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
+Re-exports§
pub use self::external::arrow2::chunk::Chunk as Arrow2Chunk;
pub use self::external::re_chunk::util::concatenate_record_batches;
pub use self::external::re_chunk::TransportChunk;
pub use self::external::re_chunk_store::ChunkStoreConfig;
pub use self::external::re_chunk_store::ChunkStoreHandle;
pub use self::external::re_chunk_store::ColumnSelector;
pub use self::external::re_chunk_store::ComponentColumnSelector;
pub use self::external::re_chunk_store::Index;
pub use self::external::re_chunk_store::IndexRange;
pub use self::external::re_chunk_store::IndexValue;
pub use self::external::re_chunk_store::QueryExpression;
pub use self::external::re_chunk_store::SparseFillStrategy;
pub use self::external::re_chunk_store::TimeColumnSelector;
pub use self::external::re_chunk_store::ViewContentsSelector;
pub use self::external::re_log_types::EntityPath;
pub use self::external::re_log_types::EntityPathFilter;
pub use self::external::re_log_types::ResolvedTimeRange;
pub use self::external::re_log_types::StoreKind;
pub use self::external::re_log_types::TimeInt;
pub use self::external::re_log_types::Timeline;
pub use self::external::re_query::QueryCache;
pub use self::external::re_query::QueryCacheHandle;
pub use self::external::re_query::StorageEngine;
Modules§
Structs§
- A handle to our user-facing query engine.
- A handle to a dataframe query, ready to be executed.
Type Aliases§
\ No newline at end of file
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
index 7f137512a0ff..24331ce79da9 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
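The lazily-computed state described above follows the standard `std::sync::OnceLock` pattern: construction stores nothing, and the first access builds the state once. A minimal std-only sketch (the `State` and `Handle` types here are stand-ins for the real private types):

```rust
use std::sync::OnceLock;

struct State {
    rows: Vec<u64>, // stands in for the real, expensive-to-build state
}

struct Handle {
    state: OnceLock<State>,
}

impl Handle {
    /// Creating a handle is cheap: no state is computed yet.
    fn new() -> Self {
        Self { state: OnceLock::new() }
    }

    /// First call builds the state; later calls return the cached value.
    fn init(&self) -> &State {
        self.state.get_or_init(|| State { rows: vec![1, 2, 3] })
    }
}
```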
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
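The binary search mentioned above (finding the first index value at or past a target in an ascendingly sorted, deduped sequence) can be sketched with std's `partition_point`; the function name and `i64` index type here are illustrative, not the actual implementation.

```rust
/// Position of the first value >= `target` in an ascendingly sorted,
/// deduped slice; returns `len` if every value is below `target`.
/// Mirrors "seeks to the first index value available past that point".
fn seek_position(unique_index_values: &[i64], target: i64) -> usize {
    unique_index_values.partition_point(|&v| v < target)
}
```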
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
while let Some(row) = query_handle.next_row() {
    // …
}
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
index 17a5f28fd51c..6807551dda17 100644
--- a/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
+++ b/docs/rust/head/re_dataframe/query/struct.QueryHandleState.html
@@ -1,30 +1,30 @@
QueryHandleState in re_dataframe::query - Rust Struct re_dataframe::query::QueryHandleState
source · struct QueryHandleState {
- view_contents: Vec<ColumnDescriptor>,
- selected_contents: Vec<(usize, ColumnDescriptor)>,
- selected_static_values: Vec<Option<UnitChunkShared>>,
- filtered_index: Index,
+ view_contents: Vec<ColumnDescriptor>,
+ selected_contents: Vec<(usize, ColumnDescriptor)>,
+ selected_static_values: Vec<Option<UnitChunkShared>>,
+ filtered_index: Index,
arrow_schema: Schema,
- view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
+ view_chunks: Vec<Vec<(AtomicU64, Chunk)>>,
cur_row: AtomicU64,
- unique_index_values: Vec<IndexValue>,
+ unique_index_values: Vec<IndexValue>,
}
Expand description
Internal private state. Lazily computed.
-Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
+Fields§
§view_contents: Vec<ColumnDescriptor>
Describes the columns that make up this view.
+
+§selected_contents: Vec<(usize, ColumnDescriptor)>
Describes the columns specifically selected to be returned from this view.
All returned rows will have an Arrow schema that matches this selection.
Columns that do not yield any data will still be present in the results, filled with null values.
The extra usize
is the index in QueryHandleState::view_contents
that this selection
points to.
See also QueryHandleState::arrow_schema
.
-§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
+§selected_static_values: Vec<Option<UnitChunkShared>>
This keeps track of the static data associated with each entry in selected_contents
, if any.
This is queried only once during init, and will override all cells that follow.
selected_contents
: QueryHandleState::selected_contents
-§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
+§filtered_index: Index
The actual index filter in use, since the user-specified one is optional.
This just defaults to Index::default()
if the user hasn’t specified any: the actual
value is irrelevant since this means we are only concerned with static data anyway.
§arrow_schema: Schema
The Arrow schema that corresponds to the selected_contents
.
All returned rows will have this schema.
-§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the Chunk
s included in the view contents.
+§view_chunks: Vec<Vec<(AtomicU64, Chunk)>>
All the [Chunk
]s included in the view contents.
These are already sorted, densified, vertically sliced, and latest-deduped according
to the query.
The atomic counter is used as a cursor which keeps track of our current position within
@@ -38,7 +38,7 @@
unrelated to the cursors used to track the current position in each individual chunk.
The corresponding index value can be obtained using unique_index_values[cur_row]
.
unique_index_values[cur_row]
: QueryHandleState::unique_index_values
-§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
+§unique_index_values: Vec<IndexValue>
All unique index values that can possibly be returned by this query.
Guaranteed ascendingly sorted and deduped.
See also QueryHandleState::cur_row
.
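The cursor described above (an `AtomicU64` advanced over the sorted, deduped index values) can be sketched with std types only; the `Cursor` type and its method name are illustrative, not the actual `QueryHandleState`.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

struct Cursor {
    unique_index_values: Vec<u64>, // ascendingly sorted and deduped
    cur_row: AtomicU64,
}

impl Cursor {
    /// Yield the index value at the current row, then advance the cursor.
    /// Returns None once the cursor has moved past the last row.
    fn next_index_value(&self) -> Option<u64> {
        let row = self.cur_row.fetch_add(1, Ordering::Relaxed) as usize;
        self.unique_index_values.get(row).copied()
    }
}
```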
Auto Trait Implementations§
§impl !Freeze for QueryHandleState
§impl !RefUnwindSafe for QueryHandleState
§impl Send for QueryHandleState
§impl Sync for QueryHandleState
§impl Unpin for QueryHandleState
§impl !UnwindSafe for QueryHandleState
Blanket Implementations§
source§impl<T> Any for Twhere
diff --git a/docs/rust/head/re_dataframe/struct.QueryEngine.html b/docs/rust/head/re_dataframe/struct.QueryEngine.html
index 6b3f22ef42c2..58b126adee24 100644
--- a/docs/rust/head/re_dataframe/struct.QueryEngine.html
+++ b/docs/rust/head/re_dataframe/struct.QueryEngine.html
@@ -5,16 +5,16 @@
See the following methods:
QueryEngine::schema
: get the complete schema of the recording.
-QueryEngine::query
: execute a [QueryExpression
] on the recording.
+QueryEngine::query
: execute a QueryExpression
on the recording.
-
Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
+Fields§
§engine: E
Implementations§
source§impl QueryEngine<StorageEngine>
sourcepub fn new(store: ChunkStoreHandle, cache: QueryCacheHandle) -> Self
sourcepub fn from_store(store: ChunkStoreHandle) -> Self
This will automatically instantiate a new empty QueryCache
.
sourcepub fn from_rrd_filepath(
- store_config: &ChunkStoreConfig,
+ store_config: &ChunkStoreConfig,
path_to_rrd: impl AsRef<Path>,
- version_policy: VersionPolicy,
-) -> Result<BTreeMap<StoreId, Self>>
Like [ChunkStore::from_rrd_filepath
], but automatically instantiates QueryEngine
s
+ version_policy: VersionPolicy,
+) -> Result<BTreeMap<StoreId, Self>>
Like ChunkStore::from_rrd_filepath
, but automatically instantiates QueryEngine
s
with new empty QueryCache
s.
-source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
source§impl<E: StorageEngineLike + Clone> QueryEngine<E>
sourcepub fn schema(&self) -> Vec<ColumnDescriptor>
Returns the full schema of the store.
This will include a column descriptor for every timeline and every component on every
entity that has been written to the store so far.
The order of the columns is guaranteed to be in a specific order:
@@ -22,13 +22,13 @@
first, the time columns in lexical order (frame_nr
, log_time
, …);
second, the component columns in lexical order (Color
, Radius, ...
).
-sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given [QueryExpression
].
+sourcepub fn schema_for_query(&self, query: &QueryExpression) -> Vec<ColumnDescriptor>
Returns the filtered schema for the given QueryExpression
.
The order of the columns is guaranteed to be in a specific order:
- first, the time columns in lexical order (
frame_nr
, log_time
, …);
- second, the component columns in lexical order (
Color
, Radius, ...
).
-sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
+sourcepub fn query(&self, query: QueryExpression) -> QueryHandle<E>
Starts a new query by instantiating a QueryHandle
.
sourcepub fn iter_entity_paths_sorted<'a>(
&self,
filter: &'a EntityPathFilter,
diff --git a/docs/rust/head/re_dataframe/struct.QueryHandle.html b/docs/rust/head/re_dataframe/struct.QueryHandle.html
index fb62a583e72d..d61628a9ef77 100644
--- a/docs/rust/head/re_dataframe/struct.QueryHandle.html
+++ b/docs/rust/head/re_dataframe/struct.QueryHandle.html
@@ -1,48 +1,48 @@
QueryHandle in re_dataframe - Rust Struct re_dataframe::QueryHandle
source · pub struct QueryHandle<E: StorageEngineLike> {
pub(crate) engine: E,
- pub(crate) query: QueryExpression,
+ pub(crate) query: QueryExpression,
state: OnceLock<QueryHandleState>,
}
Expand description
A handle to a dataframe query, ready to be executed.
Cheaply created via QueryEngine::query
.
Fields§
§engine: E
Handle to the QueryEngine
.
-§query: QueryExpression
The original query expression used to instantiate this handle.
+§query: QueryExpression
The original query expression used to instantiate this handle.
§state: OnceLock<QueryHandleState>
Internal private state. Lazily computed.
It is important that handles stay cheap to create.
-Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
+Implementations§
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcepub(crate) fn new(engine: E, query: QueryExpression) -> Self
source§impl<E: StorageEngineLike> QueryHandle<E>
sourcefn init(&self) -> &QueryHandleState
Lazily initialize internal private state.
It is important that query handles stay cheap to create.
-sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
+
sourcefn init_(&self, store: &ChunkStore, cache: &QueryCache) -> QueryHandleState
sourcefn compute_user_selection(
&self,
- view_contents: &[ColumnDescriptor],
- selection: &[ColumnSelector],
-) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
+ view_contents: &[ColumnDescriptor],
+ selection: &[ColumnSelector],
+) -> Vec<(usize, ColumnDescriptor)>
sourcefn fetch_view_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> (Option<usize>, Vec<Vec<(AtomicU64, Chunk)>>)
sourcefn fetch_clear_chunks(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
- view_contents: &[ColumnDescriptor],
-) -> IntMap<EntityPath, Vec<Chunk>>
Returns all potentially relevant clear Chunk
s for each unique entity path in the view contents.
+ query: &RangeQuery,
+ view_contents: &[ColumnDescriptor],
+) -> IntMap<EntityPath, Vec<Chunk>>Returns all potentially relevant clear [Chunk
]s for each unique entity path in the view contents.
These chunks take recursive clear semantics into account and are guaranteed to be properly densified.
The component data is stripped out, only the indices are left.
sourcefn fetch_chunks<'a>(
&self,
- _store: &ChunkStore,
+ _store: &ChunkStore,
cache: &QueryCache,
- query: &RangeQuery,
+ query: &RangeQuery,
entity_path: &EntityPath,
component_descrs: impl IntoIterator<Item = &'a ComponentDescriptor>,
-) -> Option<Vec<(AtomicU64, Chunk)>>
sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
-See [QueryExpression::view_contents
].
-sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
+) -> Option<Vec<(AtomicU64, Chunk)>>sourcepub fn query(&self) -> &QueryExpression
The query used to instantiate this handle.
+sourcepub fn view_contents(&self) -> &[ColumnDescriptor]
Describes the columns that make up this view.
+
+sourcepub fn selected_contents(&self) -> &[(usize, ColumnDescriptor)]
Describes the columns that make up this selection.
The extra usize
is the index in Self::view_contents
that this selection points to.
-See [QueryExpression::selection
].
+
sourcepub fn schema(&self) -> &Arrow2Schema
All results returned by this handle will strictly follow this Arrow schema.
Columns that do not yield any data will still be present in the results, filled with null values.
sourcepub fn seek_to_row(&self, row_idx: usize)
Advance all internal cursors so that the next row yielded will correspond to row_idx
.
@@ -55,7 +55,7 @@ §Performance
This requires going through every chunk once, and for each chunk running a binary search if
the chunk’s time range contains the index_value
.
I.e.: it’s pretty cheap already.
-sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
+sourcefn seek_to_index_value(&self, index_value: IndexValue)
Advance all internal cursors so that the next row yielded will correspond to index_value
.
If index_value
isn’t present in the dataset, this seeks to the first index value
available past that point, if any.
§Concurrency
@@ -74,7 +74,7 @@ §Perform
Columns that do not yield any data will still be present in the results, filled with null values.
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
while let Some(row) = query_handle.next_row() {
    // …
}
@@ -95,7 +95,7 @@ §Pagination
Each cell in the result corresponds to the latest locally known value at that particular point in
the index, for each respective ColumnDescriptor
.
-See [QueryExpression::sparse_fill_strategy
] to go beyond local resolution.
+See QueryExpression::sparse_fill_strategy
to go beyond local resolution.
Example:
sourcepub fn _next_row(
&self,
- store: &ChunkStore,
+ store: &ChunkStore,
cache: &QueryCache,
) -> Option<Vec<Box<dyn Arrow2Array>>>
sourcepub fn next_row_batch(&self) -> Option<RecordBatch>
Calls Self::next_row
and wraps the result in a RecordBatch
.
Only use this if you absolutely need a RecordBatch
as this adds a lot of allocation
diff --git a/docs/rust/head/re_dataframe/type.RecordBatch.html b/docs/rust/head/re_dataframe/type.RecordBatch.html
index dfa799767f23..6ad98e38b614 100644
--- a/docs/rust/head/re_dataframe/type.RecordBatch.html
+++ b/docs/rust/head/re_dataframe/type.RecordBatch.html
@@ -1,4 +1,4 @@
-
RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
+RecordBatch in re_dataframe - Rust Type Alias re_dataframe::RecordBatch
source · pub type RecordBatch = TransportChunk;
Aliased Type§
struct RecordBatch {
pub schema: Schema,
pub data: Chunk<Box<dyn Array>>,
}
Fields§
§schema: Schema
The schema of the dataframe, and all chunk-level and field-level metadata.
diff --git a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
index 804b59237469..1a2dfebf9f5f 100644
--- a/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
+++ b/docs/rust/head/re_entity_db/entity_db/constant.DEFAULT_GC_TIME_BUDGET.html
@@ -1,2 +1,2 @@
-DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
See [GarbageCollectionOptions::time_budget
].
+DEFAULT_GC_TIME_BUDGET in re_entity_db::entity_db - Rust Constant re_entity_db::entity_db::DEFAULT_GC_TIME_BUDGET
source · pub const DEFAULT_GC_TIME_BUDGET: Duration;
Expand description
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/index.html b/docs/rust/head/re_entity_db/entity_db/index.html
index 33cf3b3ca111..e01360f2a9b3 100644
--- a/docs/rust/head/re_entity_db/entity_db/index.html
+++ b/docs/rust/head/re_entity_db/entity_db/index.html
@@ -1 +1 @@
-re_entity_db::entity_db - Rust
\ No newline at end of file
+re_entity_db::entity_db - Rust
\ No newline at end of file
diff --git a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
index 3e86dd301c5e..595bcd10729c 100644
--- a/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
+++ b/docs/rust/head/re_entity_db/entity_db/struct.EntityDb.html
@@ -2,12 +2,12 @@
pub data_source: Option<SmartChannelSource>,
set_store_info: Option<SetStoreInfo>,
last_modified_at: Instant,
- latest_row_id: Option<RowId>,
+ latest_row_id: Option<RowId>,
entity_path_from_hash: IntMap<EntityPathHash, EntityPath>,
times_per_timeline: TimesPerTimeline,
time_histogram_per_timeline: TimeHistogramPerTimeline,
tree: EntityTree,
- storage_engine: StorageEngine,
+ storage_engine: StorageEngine,
stats: IngestionStatistics,
}Expand description
An in-memory database built from a stream of LogMsg
es.
NOTE: all mutation is to be done via public functions!
@@ -15,7 +15,7 @@
Clones of an EntityDb
gets a None
source.
§set_store_info: Option<SetStoreInfo>
Comes in a special message, LogMsg::SetStoreInfo
.
§last_modified_at: Instant
Keeps track of the last time data was inserted into this store (viewer wall-clock).
-§latest_row_id: Option<RowId>
The highest RowId
in the store,
+
§latest_row_id: Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
§entity_path_from_hash: IntMap<EntityPathHash, EntityPath>
In many places we just store the hashes, so we need a way to translate back.
@@ -26,34 +26,34 @@
TODO(#7084): Get rid of TimesPerTimeline
and implement time-stepping with crate::TimeHistogram
instead.
§time_histogram_per_timeline: TimeHistogramPerTimeline
A time histogram of all entities, for every timeline.
§tree: EntityTree
A tree-view (split on path components) of the entities.
-§storage_engine: StorageEngine
The [StorageEngine
] that backs this EntityDb
.
+§storage_engine: StorageEngine
The StorageEngine
that backs this EntityDb
.
This object and all its internal fields are never allowed to be publicly exposed,
whether that is directly or through methods, even if that’s just shared references.
-The only way to get access to the [StorageEngine
] from the outside is to use
+
The only way to get access to the StorageEngine
from the outside is to use
EntityDb::storage_engine
, which returns a read-only guard.
The design statically guarantees the absence of deadlocks and race conditions that normally result from letting store and cache handles run arbitrarily loose all across the codebase.
§stats: IngestionStatistics
Implementations§
source§impl EntityDb
sourcepub fn new(store_id: StoreId) -> Self
sourcepub fn with_store_config(
store_id: StoreId,
- store_config: ChunkStoreConfig,
-) -> Self
sourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing [StorageEngine
].
-sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing [StorageEngine
].
+ store_config: ChunkStoreConfig,
+) -> Selfsourcepub fn tree(&self) -> &EntityTree
sourcepub fn storage_engine(&self) -> StorageEngineReadGuard<'_>
Returns a read-only guard to the backing StorageEngine
.
+sourcepub fn storage_engine_arc(&self) -> StorageEngineArcReadGuard
Returns a read-only guard to the backing StorageEngine
.
That guard can be cloned at will and has a static lifetime.
It is not possible to insert any more data in this EntityDb
until the returned guard,
and any clones, have been dropped.
sourcepub fn store_info_msg(&self) -> Option<&SetStoreInfo>
sourcepub fn store_info(&self) -> Option<&StoreInfo>
sourcepub fn app_id(&self) -> Option<&ApplicationId>
sourcepub fn latest_at(
&self,
- query: &LatestAtQuery,
+ query: &LatestAtQuery,
entity_path: &EntityPath,
component_names: impl IntoIterator<Item = ComponentName>,
-) -> LatestAtResults
Queries for the given component_names
using latest-at semantics.
-See [re_query::LatestAtResults
] for more information about how to handle the results.
+) -> LatestAtResultsQueries for the given component_names
using latest-at semantics.
+See re_query::LatestAtResults
for more information about how to handle the results.
This is a cached API – data will be lazily cached upon access.
sourcepub fn latest_at_component<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -61,8 +61,8 @@
sourcepub fn latest_at_component_quiet<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<((TimeInt, RowId), C)>
Get the latest index and value for a given dense [re_types_core::Component
].
+ query: &LatestAtQuery,
+) -> Option<((TimeInt, RowId), C)>Get the latest index and value for a given dense [re_types_core::Component
].
This assumes that the row we get from the store contains at most one instance for this
component; it will log a warning otherwise.
This should only be used for “mono-components” such as Transform
and Tensor
.
@@ -70,8 +70,8 @@
sourcepub fn latest_at_component_at_closest_ancestor<C: Component>(
&self,
entity_path: &EntityPath,
- query: &LatestAtQuery,
-) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
+ query: &LatestAtQuery,
+) -> Option<(EntityPath, (TimeInt, RowId), C)>
sourcepub fn store_kind(&self) -> StoreKind
sourcepub fn store_id(&self) -> StoreId
sourcepub fn cloned_from(&self) -> Option<&StoreId>
If this entity db is the result of a clone, which store was it cloned from?
A cloned store always gets a new unique ID.
We currently only use entity db cloning for blueprints:
when we activate a default blueprint that was received on the wire (e.g. from a recording),
@@ -79,9 +79,9 @@
This means all active blueprints are clones.
sourcepub fn timelines(&self) -> impl ExactSizeIterator<Item = &Timeline>
sourcepub fn times_per_timeline(&self) -> &TimesPerTimeline
sourcepub fn has_any_data_on_timeline(&self, timeline: &Timeline) -> bool
sourcepub fn time_range_for(&self, timeline: &Timeline) -> Option<ResolvedTimeRange>
Returns the time range of data on the given timeline, ignoring any static times.
sourcepub fn time_histogram(&self, timeline: &Timeline) -> Option<&TimeHistogram>
Histogram of all events on the timeline, of all entities.
-sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
+
sourcepub fn num_rows(&self) -> u64
sourcepub fn generation(&self) -> ChunkStoreGeneration
Return the current ChunkStoreGeneration
. This can be used to determine whether the
database has been modified since the last time it was queried.
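The generation check above suggests a simple cache-invalidation pattern; this sketch assumes a `db: &EntityDb` and a previously cached generation:

```rust
// Re-derive expensive state only when the store actually changed.
// `db` and `last_generation` are assumptions for illustration.
let generation = db.generation();
if generation != last_generation {
    // The database was modified since we last queried: refresh caches here.
    last_generation = generation;
}
```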
-sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
+
sourcepub fn last_modified_at(&self) -> Instant
sourcepub fn latest_row_id(&self) -> Option<RowId>
The highest RowId
in the store,
which corresponds to the last edit time.
Ignores deletions.
sourcepub fn is_empty(&self) -> bool
sourcepub fn entity_paths(&self) -> Vec<&EntityPath>
A sorted list of all the entity paths in this database.
@@ -91,21 +91,21 @@
) -> Option<&EntityPath>
sourcepub fn is_known_entity(&self, entity_path: &EntityPath) -> bool
Returns true
also for entities higher up in the hierarchy.
sourcepub fn is_logged_entity(&self, entity_path: &EntityPath) -> bool
If you log world/points
, then that is a logged entity, but world
is not,
unless you log something to world
too.
-sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
+
sourcepub fn add(&mut self, msg: &LogMsg) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn add_chunk(
&mut self,
- chunk: &Arc<Chunk>,
-) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
+ chunk: &Arc<Chunk>,
+) -> Result<Vec<ChunkStoreEvent>, Error>
sourcepub fn set_store_info(&mut self, store_info: SetStoreInfo)
sourcepub fn purge_fraction_of_ram(
&mut self,
fraction_to_purge: f32,
-) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
+) -> Vec<ChunkStoreEvent>
Free up some RAM by forgetting the older parts of all timelines.
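A hedged sketch of memory-pressure handling with purge_fraction_of_ram; the fraction and the db handle are assumptions:

```rust
// Ask the store to forget the oldest ~30% of data on all timelines.
// `db: &mut EntityDb` is an assumption for illustration.
let events = db.purge_fraction_of_ram(0.3);
// Each returned ChunkStoreEvent describes data that was dropped,
// which callers can use to invalidate dependent caches.
```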
sourcepub fn gc(
&mut self,
- gc_options: &GarbageCollectionOptions,
-) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
+ gc_options: &GarbageCollectionOptions,
+) -> Vec<ChunkStoreEvent>
sourcepub fn drop_time_range(
&mut self,
timeline: &Timeline,
drop_range: ResolvedTimeRange,
-) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
+) -> Vec<ChunkStoreEvent>
Drop all events in the given time range from the given timeline.
Used to implement undo (erase the last event from the blueprint db).
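The undo use case above might look like this sketch; the timeline handle and time values are hypothetical:

```rust
// Erase everything in a closed time range from one timeline,
// e.g. the range covering the last edit. All names here are assumptions.
let dropped = db.drop_time_range(
    &timeline,
    ResolvedTimeRange::new(last_edit_time, last_edit_time),
);
// `dropped` lists the affected chunks as ChunkStoreEvents.
```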
sourcepub fn drop_entity_path(&mut self, entity_path: &EntityPath)
Unconditionally drops all the data for a given EntityPath
.
This is not recursive. Children of this entity will not be affected.
@@ -115,39 +115,39 @@
times_per_timeline: &mut TimesPerTimeline,
time_histogram_per_timeline: &mut TimeHistogramPerTimeline,
tree: &mut EntityTree,
- engine: StorageEngineWriteGuard<'_>,
- store_events: &[ChunkStoreEvent],
+ engine: StorageEngineWriteGuard<'_>,
+ store_events: &[ChunkStoreEvent],
)
sourcepub fn to_messages(
&self,
time_selection: Option<(Timeline, ResolvedTimeRangeF)>,
-) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
+) -> impl Iterator<Item = ChunkResult<LogMsg>> + '_
Export the contents of the current database to a sequence of messages.
If time_selection
is specified, then only data for that specific timeline over that
specific time range will be accounted for.
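A sketch of a full export without a time selection; the error handling and the output sink are assumptions:

```rust
// Stream every message out of the database; `db` is assumed in scope.
for msg in db.to_messages(None) {
    match msg {
        Ok(log_msg) => { /* write `log_msg` to a file or socket */ }
        Err(err) => eprintln!("failed to convert chunk: {err}"),
    }
}
```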
source§impl EntityDb
§Stats
sourcepub fn subtree_stats_static(
&self,
- engine: &StorageEngineReadGuard<'_>,
+ engine: &StorageEngineReadGuard<'_>,
entity_path: &EntityPath,
-) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
+) -> ChunkStoreChunkStats
Returns the stats for the static store of the entity and all its children, recursively.
This excludes temporal data.