enum Command {
AppendChunk(Chunk),
- AppendRow(EntityPath, PendingRow),
+ AppendRow(EntityPath, PendingRow),
Flush(Sender<()>),
Shutdown,
-}
Variants§
Implementations§
Auto Trait Implementations§
impl !Freeze for Command
impl !RefUnwindSafe for Command
impl Send for Command
impl Sync for Command
impl Unpin for Command
impl !UnwindSafe for Command
Blanket Implementations§
source§impl<T> Any for Twhere
+}Variants§
Implementations§
Auto Trait Implementations§
§impl !Freeze for Command
§impl !RefUnwindSafe for Command
§impl Send for Command
§impl Sync for Command
§impl Unpin for Command
§impl !UnwindSafe for Command
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/batcher/struct.BatcherHooks.html b/docs/rust/head/re_chunk/batcher/struct.BatcherHooks.html
index 03f04baa7349c..a816fb23fcaa7 100644
--- a/docs/rust/head/re_chunk/batcher/struct.BatcherHooks.html
+++ b/docs/rust/head/re_chunk/batcher/struct.BatcherHooks.html
@@ -1,13 +1,13 @@
BatcherHooks in re_chunk::batcher - Rust Struct re_chunk::batcher::BatcherHooks
source · pub struct BatcherHooks {
pub on_insert: Option<Arc<dyn Fn(&[PendingRow]) + Send + Sync>>,
- pub on_release: Option<ArrowChunkReleaseCallback>,
+ pub on_release: Option<ArrowChunkReleaseCallback>,
}
Expand description
Callbacks you can install on the ChunkBatcher
.
Fields§
§on_insert: Option<Arc<dyn Fn(&[PendingRow]) + Send + Sync>>
Called when a new row arrives.
The callback is given the slice of all rows not yet batched,
including the new one.
Used for testing.
-§on_release: Option<ArrowChunkReleaseCallback>
Callback to be run when an Arrow Chunk goes out of scope.
-See [re_log_types::ArrowChunkReleaseCallback
] for more information.
+§on_release: Option<ArrowChunkReleaseCallback>
Callback to be run when an Arrow Chunk goes out of scope.
+See re_log_types::ArrowChunkReleaseCallback
for more information.
Implementations§
source§impl BatcherHooks
Trait Implementations§
source§impl Clone for BatcherHooks
source§fn clone(&self) -> BatcherHooks
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for BatcherHooks
source§impl Default for BatcherHooks
source§fn default() -> BatcherHooks
Returns the “default value” for a type. Read moreAuto Trait Implementations§
§impl Freeze for BatcherHooks
§impl !RefUnwindSafe for BatcherHooks
§impl Send for BatcherHooks
§impl Sync for BatcherHooks
§impl Unpin for BatcherHooks
§impl !UnwindSafe for BatcherHooks
Blanket Implementations§
source§impl<T> Any for Twhere
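The two hooks above cover both ends of the batching pipeline: on_insert fires on the ingestion side, on_release when an Arrow chunk is finally dropped. Below is a minimal sketch of filling in the struct, relying only on the public fields and the Default impl shown above; how the hooks are handed to a running batcher is not covered by this page and is omitted here.

    use std::sync::Arc;
    use re_chunk::batcher::{BatcherHooks, PendingRow};

    fn make_hooks() -> BatcherHooks {
        // Log how many rows are still unbatched every time a new one arrives.
        let on_insert: Arc<dyn Fn(&[PendingRow]) + Send + Sync> =
            Arc::new(|rows| eprintln!("{} row(s) pending in the batcher", rows.len()));
        // `..Default::default()` leaves `on_release` unset.
        BatcherHooks { on_insert: Some(on_insert), ..Default::default() }
    }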
diff --git a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcher.html
index bac3fcce02aa4..2b12b627ebd14 100644
--- a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
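Putting the methods above together, a typical producer creates one batcher, pushes rows, and lets the pipeline coalesce them into chunks. This is a sketch only: the module path of ChunkBatcherConfig and its Default impl are assumptions, everything else comes from the signatures above.

    use re_chunk::batcher::{ChunkBatcher, ChunkBatcherConfig};
    use re_chunk::{EntityPath, PendingRow};

    fn feed_rows(rows: Vec<(EntityPath, PendingRow)>) {
        // Assumed: `ChunkBatcherConfig::default()` provides reasonable thresholds.
        let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())
            .expect("failed to spawn the batching thread");
        for (entity_path, row) in rows {
            batcher.push_row(entity_path, row); // row size is computed on the batching thread
        }
        batcher.flush_async(); // returns immediately; dropping `batcher` triggers a clean shutdown
    }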
diff --git a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
index 4be8abe3ef9d3..17dcb20499e9e 100644
--- a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
+++ b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
@@ -5,7 +5,7 @@
}Fields§
§tx_cmds: Sender<Command>
The one and only entrypoint into the pipeline: this is never cloned nor publicly exposed,
therefore the Drop
implementation is guaranteed that no more data can come in while it’s
running.
-§rx_chunks: Option<Receiver<Chunk>>
§cmds_to_chunks_handle: Option<JoinHandle<()>>
Implementations§
source§impl ChunkBatcherInner
sourcefn push_chunk(&self, chunk: Chunk)
sourcefn push_row(&self, entity_path: EntityPath, row: PendingRow)
sourcefn flush_async(&self)
sourcefn flush_blocking(&self)
sourcefn send_cmd(&self, cmd: Command)
Trait Implementations§
Auto Trait Implementations§
§impl Freeze for ChunkBatcherInner
§impl !RefUnwindSafe for ChunkBatcherInner
§impl Send for ChunkBatcherInner
§impl Sync for ChunkBatcherInner
§impl Unpin for ChunkBatcherInner
§impl !UnwindSafe for ChunkBatcherInner
Blanket Implementations§
source§impl<T> Any for Twhere
+§rx_chunks: Option<Receiver<Chunk>>
§cmds_to_chunks_handle: Option<JoinHandle<()>>
Implementations§
source§impl ChunkBatcherInner
sourcefn push_chunk(&self, chunk: Chunk)
sourcefn push_row(&self, entity_path: EntityPath, row: PendingRow)
sourcefn flush_async(&self)
sourcefn flush_blocking(&self)
sourcefn send_cmd(&self, cmd: Command)
Trait Implementations§
Auto Trait Implementations§
§impl Freeze for ChunkBatcherInner
§impl !RefUnwindSafe for ChunkBatcherInner
§impl Send for ChunkBatcherInner
§impl Sync for ChunkBatcherInner
§impl Unpin for ChunkBatcherInner
§impl !UnwindSafe for ChunkBatcherInner
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
index e25f035d07ab8..0e561416399d9 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
@@ -1,23 +1,23 @@
PendingRow in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingRow
source · pub struct PendingRow {
pub row_id: RowId,
- pub timepoint: TimePoint,
+ pub timepoint: TimePoint,
pub components: BTreeMap<ComponentName, Box<dyn ArrowArray>>,
}
Expand description
A single row’s worth of data (i.e. a single log call).
Send those to the batcher to build up a Chunk
.
Fields§
§row_id: RowId
Auto-generated TUID
, uniquely identifying this event and keeping track of the client’s
wall-clock.
-§timepoint: TimePoint
User-specified [TimePoint
] for this event.
+§timepoint: TimePoint
User-specified TimePoint
for this event.
§components: BTreeMap<ComponentName, Box<dyn ArrowArray>>
The component data.
Each array is a single component, i.e. not a list array.
Implementations§
source§impl PendingRow
source§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
+) -> Selfsource§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
That’s very wasteful, probably don’t do that outside of testing, or unless you have very
good reasons to.
See also Self::many_into_chunks
.
sourcepub fn many_into_chunks(
- entity_path: EntityPath,
+ entity_path: EntityPath,
chunk_max_rows_if_unsorted: u64,
rows: Vec<Self>
) -> impl Iterator<Item = ChunkResult<Chunk>>
This turns a batch of PendingRow
s into a Chunk
.
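Since the fields of PendingRow are public, a row can also be assembled by hand and turned into a single-row chunk with into_chunk, which, as noted above, is mainly useful in tests. A sketch assuming the crate-root re-exports (Chunk, ChunkResult, RowId, etc.) and RowId::new as referenced on the RowId page.

    use std::collections::BTreeMap;
    use re_chunk::{ArrowArray, Chunk, ChunkResult, ComponentName, EntityPath, PendingRow, RowId, TimePoint};

    // Wasteful outside of tests, as the docs above note; prefer the batcher for real workloads.
    fn single_row_chunk(
        entity_path: EntityPath,
        timepoint: TimePoint,
        components: BTreeMap<ComponentName, Box<dyn ArrowArray>>,
    ) -> ChunkResult<Chunk> {
        let row = PendingRow { row_id: RowId::new(), timepoint, components };
        row.into_chunk(entity_path)
    }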
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
index 07d0848b0fb12..3ea02901b1427 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
@@ -1,11 +1,11 @@
PendingTimeColumn in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingTimeColumn
source · struct PendingTimeColumn {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
is_sorted: bool,
- time_range: ResolvedTimeRange,
+ time_range: ResolvedTimeRange,
}
Expand description
Helper class used to buffer time data.
See PendingRow::many_into_chunks
for usage.
-Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
source§impl PendingTimeColumn
sourcefn new(timeline: Timeline) -> Self
Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
Auto Trait Implementations§
§impl Freeze for PendingTimeColumn
§impl RefUnwindSafe for PendingTimeColumn
§impl Send for PendingTimeColumn
§impl Sync for PendingTimeColumn
§impl Unpin for PendingTimeColumn
§impl UnwindSafe for PendingTimeColumn
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
index c8b1f7d923db4..3ec8f158f1882 100644
--- a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk::builder - Rust Struct re_chunk::builder::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
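The builder methods above are all row-oriented and consume self, so a chunk is usually assembled in a fold-like loop before build is called. A sketch under two assumptions: that the AsComponents trait lives in re_types_core, and that RowId::new is available as referenced on the RowId page.

    use re_chunk::{Chunk, ChunkResult, EntityPath, RowId, TimePoint};
    use re_types_core::AsComponents; // assumed location of the `AsComponents` trait

    fn chunk_from_archetypes(
        entity_path: EntityPath,
        rows: impl IntoIterator<Item = (TimePoint, Box<dyn AsComponents>)>,
    ) -> ChunkResult<Chunk> {
        let mut builder = Chunk::builder(entity_path);
        for (timepoint, archetype) in rows {
            // Each call adds one row and hands the builder back.
            builder = builder.with_archetype(RowId::new(), timepoint, archetype.as_ref());
        }
        builder.build()
    }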
diff --git a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
index b0abcc5ca6455..80d70dc20884c 100644
--- a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
@@ -1,11 +1,11 @@
TimeColumnBuilder in re_chunk::builder - Rust Struct re_chunk::builder::TimeColumnBuilder
source · pub struct TimeColumnBuilder {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
}
Expand description
Helper to incrementally build a TimeColumn
.
Can be created using TimeColumn::builder
.
-Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
+Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
See also TimeColumn::builder
.
-sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
+sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
sourcepub fn build(self) -> TimeColumn
Builds and returns the final TimeColumn
.
Auto Trait Implementations§
§impl Freeze for TimeColumnBuilder
§impl RefUnwindSafe for TimeColumnBuilder
§impl Send for TimeColumnBuilder
§impl Sync for TimeColumnBuilder
§impl Unpin for TimeColumnBuilder
§impl UnwindSafe for TimeColumnBuilder
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
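Unlike ChunkBuilder, with_row here takes &mut self, so the builder can be driven from a plain loop. A minimal sketch using only the signatures above; the re-export paths are assumptions.

    use re_chunk::chunk::TimeColumn;
    use re_chunk::{TimeInt, Timeline};

    // Row-oriented alternative to the column-oriented `TimeColumn::new_*` constructors.
    fn column_from_times(timeline: Timeline, times: impl IntoIterator<Item = TimeInt>) -> TimeColumn {
        let mut builder = TimeColumn::builder(timeline);
        for t in times {
            builder.with_row(t); // `&mut self`, so no reassignment needed
        }
        builder.build()
    }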
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core data structure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
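The transport methods documented above pair up naturally: to_transport produces a TransportChunk and from_transport reconstructs the in-memory form. A small sketch, sorting first as the docs recommend; only methods listed on this page are used, and the ChunkResult alias is assumed to be reachable from the crate root.

    use re_chunk::{Chunk, ChunkResult};

    fn transport_roundtrip(chunk: &mut Chunk) -> ChunkResult<Chunk> {
        chunk.sort_if_unsorted();           // no-op if already sorted
        let transport = chunk.to_transport()?;
        Chunk::from_transport(&transport)   // back to the in-memory representation
    }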
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
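The new_sequence / new_seconds / new_nanos constructors above are the column-oriented counterparts of TimeColumnBuilder. A small sketch, assuming &str converts into TimelineName via the usual Into impl.

    use re_chunk::chunk::TimeColumn;

    fn sequence_column_example() {
        // Four ticks on a sequence-typed "frame" timeline.
        let frames = TimeColumn::new_sequence("frame", [0i64, 1, 2, 3]);
        assert_eq!(frames.num_rows(), 4);
        assert_eq!(frames.times_raw(), &[0, 1, 2, 3]);
    }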
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
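Because a UnitChunkShared is guaranteed to hold exactly one row, index collapses to a single (TimeInt, RowId) lookup. A sketch combining it with Chunk::into_unit from the Chunk page; the crate-root re-export paths are assumptions.

    use re_chunk::{Chunk, RowId, TimeInt, Timeline, UnitChunkShared};

    fn unit_index(chunk: Chunk, timeline: &Timeline) -> Option<(TimeInt, RowId)> {
        // `into_unit` only succeeds if the chunk holds exactly one row.
        let unit: UnitChunkShared = chunk.into_unit()?;
        unit.index(timeline)
    }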
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
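The next / incremented_by pair above behaves like a counter over TUIDs. A tiny sketch, using RowId::new as the docs recommend; ordering and equality impls are not shown on this page, so none are assumed.

    use re_chunk::RowId;

    fn derived_row_ids() {
        let first = RowId::new();
        let _second = first.next();               // the next logical id
        let _eleventh = first.incremented_by(10); // equivalent to calling `next()` ten times
    }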
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
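For reference, the type alias listed above is just a shared pointer around a chunk. A small sketch; the alias name `ChunkShared` is inferred from the struct descriptions above rather than spelled out in the type-alias entry itself:

```rust
use std::sync::Arc;

use re_chunk::{Chunk, ChunkShared};

// `ChunkShared` is documented as a simple alias for `Arc<Chunk>`, convenient
// for handing an immutable chunk to multiple consumers or threads.
fn share(chunk: Chunk) -> ChunkShared {
    Arc::new(chunk)
}
```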
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
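The builder-style setters above compose as follows. A sketch, assuming `Timeline::new_sequence` from `re_log_types` (not part of this excerpt) to name the timeline:

```rust
use re_chunk::{RangeQuery, Timeline};

fn everything_on_frame() -> RangeQuery {
    let timeline = Timeline::new_sequence("frame"); // assumed constructor

    // `everything`, plus the documented toggles for extra timelines/components
    // and the extended bounds, all consume and return the query.
    RangeQuery::everything(timeline)
        .keep_extra_timelines(true)
        .keep_extra_components(false)
        .include_extended_bounds(false)
}
```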
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
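A short sketch of the read-side helpers documented above, operating on an already-built `Chunk` (how it was built does not matter here); the `Debug` impls used for printing are assumed:

```rust
use re_chunk::{Chunk, Timeline};

fn inspect(mut chunk: Chunk, timeline: &Timeline) {
    // Cached O(1) check for one specific timeline.
    if !chunk.is_timeline_sorted(timeline) {
        // Copies & shuffles the underlying arrow buffers to make them contiguous.
        chunk.sort_if_unsorted();
    }

    println!(
        "{} rows across {} timelines and {} components",
        chunk.num_rows(),
        chunk.num_timelines(),
        chunk.num_components(),
    );

    // Iterate the (TimeInt, RowId) index pairs for that timeline.
    for (time, row_id) in chunk.iter_indices(timeline) {
        println!("{time:?} -> {row_id:?}");
    }
}
```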
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row on the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
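A sketch of driving the batcher documented above. `ChunkBatcherConfig::default()` and `EntityPath::from(&str)` are assumptions not shown in this excerpt:

```rust
use re_chunk::{ChunkBatcher, ChunkBatcherConfig, EntityPath, PendingRow};

fn feed(rows: Vec<PendingRow>) {
    // Keep the batcher in scope: dropping it triggers a clean shutdown.
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default()) // assumed Default impl
        .expect("failed to spawn the batching thread");

    for row in rows {
        // Row sizes are computed on the batching thread, not on this one.
        batcher.push_row(EntityPath::from("world/points"), row);
    }

    // Fire-and-forget flush; the batched chunks come out the other end asynchronously.
    batcher.flush_async();
}
```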
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
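The row-oriented construction path above looks roughly like this. `RowId::new`, `TimePoint::default` and the `Points3D` archetype (from `re_types`) are assumptions, not part of this excerpt:

```rust
use re_chunk::{Chunk, ChunkResult, EntityPath, RowId, TimePoint};
use re_types::archetypes::Points3D; // assumed external archetype implementing AsComponents

fn single_row_chunk() -> ChunkResult<Chunk> {
    let points = Points3D::new([(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]);

    Chunk::builder(EntityPath::from("world/points"))
        // An empty `TimePoint` (assumed Default) makes this a static row.
        .with_archetype(RowId::new(), TimePoint::default(), &points)
        .build()
}
```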
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
impl<T> Any for Twhere
diff --git a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
index 4be8abe3ef9d3..17dcb20499e9e 100644
--- a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
+++ b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
@@ -5,7 +5,7 @@
}Fields§
§tx_cmds: Sender<Command>
The one and only entrypoint into the pipeline: this is never cloned nor publicly exposed,
therefore the Drop
implementation is guaranteed that no more data can come in while it’s
running.
-§rx_chunks: Option<Receiver<Chunk>>
§cmds_to_chunks_handle: Option<JoinHandle<()>>
Implementations§
source§impl ChunkBatcherInner
sourcefn push_chunk(&self, chunk: Chunk)
sourcefn push_row(&self, entity_path: EntityPath, row: PendingRow)
sourcefn flush_async(&self)
sourcefn flush_blocking(&self)
sourcefn send_cmd(&self, cmd: Command)
Trait Implementations§
Auto Trait Implementations§
§impl Freeze for ChunkBatcherInner
§impl !RefUnwindSafe for ChunkBatcherInner
§impl Send for ChunkBatcherInner
§impl Sync for ChunkBatcherInner
§impl Unpin for ChunkBatcherInner
§impl !UnwindSafe for ChunkBatcherInner
Blanket Implementations§
source§impl<T> Any for Twhere
+§rx_chunks: Option<Receiver<Chunk>>
§cmds_to_chunks_handle: Option<JoinHandle<()>>
Implementations§
source§impl ChunkBatcherInner
sourcefn push_chunk(&self, chunk: Chunk)
sourcefn push_row(&self, entity_path: EntityPath, row: PendingRow)
sourcefn flush_async(&self)
sourcefn flush_blocking(&self)
sourcefn send_cmd(&self, cmd: Command)
Trait Implementations§
Auto Trait Implementations§
§impl Freeze for ChunkBatcherInner
§impl !RefUnwindSafe for ChunkBatcherInner
§impl Send for ChunkBatcherInner
§impl Sync for ChunkBatcherInner
§impl Unpin for ChunkBatcherInner
§impl !UnwindSafe for ChunkBatcherInner
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
index e25f035d07ab8..0e561416399d9 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
@@ -1,23 +1,23 @@
PendingRow in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingRow
source · pub struct PendingRow {
pub row_id: RowId,
- pub timepoint: TimePoint,
+ pub timepoint: TimePoint,
pub components: BTreeMap<ComponentName, Box<dyn ArrowArray>>,
}
Expand description
A single row’s worth of data (i.e. a single log call).
Send those to the batcher to build up a Chunk
.
Fields§
§row_id: RowId
Auto-generated TUID
, uniquely identifying this event and keeping track of the client’s
wall-clock.
-§timepoint: TimePoint
User-specified [TimePoint
] for this event.
+§timepoint: TimePoint
User-specified TimePoint
for this event.
§components: BTreeMap<ComponentName, Box<dyn ArrowArray>>
The component data.
Each array is a single component, i.e. not a list array.
Implementations§
source§impl PendingRow
source§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
+) -> Selfsource§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
That’s very wasteful, probably don’t do that outside of testing, or unless you have very
good reasons to.
See also Self::many_into_chunks
.
sourcepub fn many_into_chunks(
- entity_path: EntityPath,
+ entity_path: EntityPath,
chunk_max_rows_if_unsorted: u64,
rows: Vec<Self>
) -> impl Iterator<Item = ChunkResult<Chunk>>
This turns a batch of PendingRow
s into a Chunk
.
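A sketch of the batch helper above; the entity path and the 4096-row cap are illustrative values, and `EntityPath::from(&str)` is assumed:

```rust
use re_chunk::{Chunk, ChunkResult, EntityPath, PendingRow};

// Coalesce many pending rows for one entity into one or more chunks, splitting
// unsorted runs at the given row cap, and bail out on the first error.
fn rows_to_chunks(rows: Vec<PendingRow>) -> ChunkResult<Vec<Chunk>> {
    PendingRow::many_into_chunks(EntityPath::from("world/points"), 4096, rows).collect()
}
```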
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
index 07d0848b0fb12..3ea02901b1427 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
@@ -1,11 +1,11 @@
PendingTimeColumn in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingTimeColumn
source · struct PendingTimeColumn {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
is_sorted: bool,
- time_range: ResolvedTimeRange,
+ time_range: ResolvedTimeRange,
}
Expand description
Helper class used to buffer time data.
See PendingRow::many_into_chunks
for usage.
-Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
source§impl PendingTimeColumn
sourcefn new(timeline: Timeline) -> Self
Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
Auto Trait Implementations§
§impl Freeze for PendingTimeColumn
§impl RefUnwindSafe for PendingTimeColumn
§impl Send for PendingTimeColumn
§impl Sync for PendingTimeColumn
§impl Unpin for PendingTimeColumn
§impl UnwindSafe for PendingTimeColumn
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
index c8b1f7d923db4..3ec8f158f1882 100644
--- a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk::builder - Rust Struct re_chunk::builder::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
diff --git a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
index b0abcc5ca6455..80d70dc20884c 100644
--- a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
@@ -1,11 +1,11 @@
TimeColumnBuilder in re_chunk::builder - Rust Struct re_chunk::builder::TimeColumnBuilder
source · pub struct TimeColumnBuilder {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
}
Expand description
Helper to incrementally build a TimeColumn
.
Can be created using TimeColumn::builder
.
-Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
+Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
See also TimeColumn::builder
.
-sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
+sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
sourcepub fn build(self) -> TimeColumn
Builds and returns the final TimeColumn
.
Auto Trait Implementations§
§impl Freeze for TimeColumnBuilder
§impl RefUnwindSafe for TimeColumnBuilder
§impl Send for TimeColumnBuilder
§impl Sync for TimeColumnBuilder
§impl Unpin for TimeColumnBuilder
§impl UnwindSafe for TimeColumnBuilder
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
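A sketch of the builder documented above. `Timeline::new_sequence` and `TimeInt::new_temporal` are assumed constructors from `re_log_types`, and `TimeColumn` is assumed to be re-exported at the crate root:

```rust
use re_chunk::{TimeColumn, TimeInt, Timeline};

fn build_frame_column() -> TimeColumn {
    // `with_row` takes `&mut self`, so rows are appended one by one;
    // `build` then consumes the builder and returns the finished column.
    let mut builder = TimeColumn::builder(Timeline::new_sequence("frame"));
    for frame in 0..3i64 {
        builder.with_row(TimeInt::new_temporal(frame));
    }
    builder.build()
}
```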
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
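A rough sketch of the transport round trip, following the advice above to sort first (error handling goes through ChunkResult):

use re_chunk::{Chunk, ChunkResult};

// Hedged sketch: serialize a chunk into an `ArrowMsg` and read it back.
fn round_trip(mut chunk: Chunk) -> ChunkResult<Chunk> {
    chunk.sort_if_unsorted();        // "probably a good idea", per the docs above
    let msg = chunk.to_arrow_msg()?; // Chunk -> ArrowMsg
    Chunk::from_arrow_msg(&msg)      // ArrowMsg -> Chunk
}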
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
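A minimal sketch of the convenience constructors; the timeline names are placeholders, and string literals are assumed to convert into TimelineName via the usual Into impl:

use re_chunk::TimeColumn;

// Hedged sketch: build two small time columns and check their basic shape.
fn example_time_columns() {
    let frames = TimeColumn::new_sequence("frame", [0i64, 1, 2, 3]);
    let capture = TimeColumn::new_seconds("capture_time", [0.0f64, 0.1, 0.2, 0.3]);
    assert_eq!(frames.num_rows(), 4);
    assert_eq!(capture.num_rows(), 4);
    assert_eq!(frames.name(), "frame");
}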
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
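A sketch of the usual path to a UnitChunkShared, via Chunk::into_unit (documented further down), and of reading back its single index; into_unit is assumed to return None for chunks that cannot be treated as a single row:

use re_chunk::{Chunk, RowId, TimeInt, Timeline};

// Hedged sketch: extract the (TimeInt, RowId) index of a single-row chunk on `timeline`.
fn single_index(chunk: Chunk, timeline: &Timeline) -> Option<(TimeInt, RowId)> {
    let unit = chunk.into_unit()?; // `None` unless the chunk qualifies as a unit chunk
    unit.index(timeline)
}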
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
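For illustration, a sketch of deriving follow-up ids; it assumes RowId::new() mints a fresh id, as recommended above:

use re_chunk::RowId;

// Hedged sketch: derive ids from a freshly minted one.
fn derived_row_ids() -> (RowId, RowId, RowId) {
    let first = RowId::new();                // preferred way to obtain a new id
    let second = first.next();               // the next logical id
    let eleventh = first.incremented_by(10); // same as calling `next()` ten times
    (first, second, eleventh)
}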
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
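A sketch of assembling such a query with the builder-style toggles above, assuming the Timeline comes from elsewhere:

use re_chunk::{RangeQuery, Timeline};

// Hedged sketch: query everything on `timeline`, keeping the other timeline columns around
// but dropping unrelated component columns.
fn query_all(timeline: Timeline) -> RangeQuery {
    RangeQuery::everything(timeline)
        .keep_extra_timelines(true)
        .keep_extra_components(false)
}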
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
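Putting a few of these accessors together, a sketch of a human-readable summary (it assumes EntityPath implements Display):

use re_chunk::Chunk;

// Hedged sketch: summarize a chunk's shape in one line.
fn describe(chunk: &Chunk) -> String {
    format!(
        "{} rows x {} columns ({} timelines, {} components, static: {}) for entity {}",
        chunk.num_rows(),
        chunk.num_columns(),
        chunk.num_timelines(),
        chunk.num_components(),
        chunk.is_static(),
        chunk.entity_path(),
    )
}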
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
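A sketch of the life-cycle described here; it assumes ChunkBatcherConfig provides a Default implementation and that the rows come from elsewhere:

use re_chunk::{ChunkBatcher, ChunkBatcherConfig, EntityPath, PendingRow};

// Hedged sketch: push a few rows through the batching pipeline, then flush.
fn batch_rows(rows: Vec<(EntityPath, PendingRow)>) {
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())
        .expect("failed to spawn the batching thread");
    for (entity_path, row) in rows {
        batcher.push_row(entity_path, row); // the row's size is computed on the batching thread
    }
    batcher.flush_async(); // returns immediately
    // Dropping `batcher` at the end of the scope triggers a clean shutdown.
}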
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
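A minimal sketch of the builder flow documented above, not taken from the crate itself: the component name is illustrative, positions_array is a hypothetical helper returning any boxed arrow2 array, and it assumes TimePoint: Default, ComponentName: From<&str>, and that ChunkResult is an alias for Result<_, ChunkError>.

use re_chunk::{ArrowArray, Chunk, ChunkError, ComponentName, EntityPath, RowId, TimePoint};

// Hypothetical helper: any arrow2 array, boxed as the crate's `ArrowArray` re-export.
fn positions_array() -> Box<dyn ArrowArray> {
    unimplemented!()
}

fn build_chunk() -> Result<Chunk, ChunkError> {
    let component: ComponentName = "example.MyPositions".into();

    Chunk::builder(EntityPath::from("points/cloud"))
        // One row per `with_row` call; an empty `TimePoint` marks the row as static.
        .with_row(RowId::new(), TimePoint::default(), [(component, positions_array())])
        .build()
}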
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
diff --git a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
index 4be8abe3ef9d3..17dcb20499e9e 100644
--- a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
+++ b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
@@ -5,7 +5,7 @@
}Fields§
§tx_cmds: Sender<Command>
The one and only entrypoint into the pipeline: this is never cloned nor publicly exposed,
therefore the Drop
implementation is guaranteed that no more data can come in while it’s
running.
-§rx_chunks: Option<Receiver<Chunk>>
§cmds_to_chunks_handle: Option<JoinHandle<()>>
Implementations§
source§impl ChunkBatcherInner
sourcefn push_chunk(&self, chunk: Chunk)
sourcefn push_row(&self, entity_path: EntityPath, row: PendingRow)
sourcefn flush_async(&self)
sourcefn flush_blocking(&self)
sourcefn send_cmd(&self, cmd: Command)
Trait Implementations§
Auto Trait Implementations§
§impl Freeze for ChunkBatcherInner
§impl !RefUnwindSafe for ChunkBatcherInner
§impl Send for ChunkBatcherInner
§impl Sync for ChunkBatcherInner
§impl Unpin for ChunkBatcherInner
§impl !UnwindSafe for ChunkBatcherInner
Blanket Implementations§
source§impl<T> Any for Twhere
+§rx_chunks: Option<Receiver<Chunk>>
§cmds_to_chunks_handle: Option<JoinHandle<()>>
Implementations§
source§impl ChunkBatcherInner
sourcefn push_chunk(&self, chunk: Chunk)
sourcefn push_row(&self, entity_path: EntityPath, row: PendingRow)
sourcefn flush_async(&self)
sourcefn flush_blocking(&self)
sourcefn send_cmd(&self, cmd: Command)
Trait Implementations§
Auto Trait Implementations§
§impl Freeze for ChunkBatcherInner
§impl !RefUnwindSafe for ChunkBatcherInner
§impl Send for ChunkBatcherInner
§impl Sync for ChunkBatcherInner
§impl Unpin for ChunkBatcherInner
§impl !UnwindSafe for ChunkBatcherInner
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
index e25f035d07ab8..0e561416399d9 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
@@ -1,23 +1,23 @@
PendingRow in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingRow
source · pub struct PendingRow {
pub row_id: RowId,
- pub timepoint: TimePoint,
+ pub timepoint: TimePoint,
pub components: BTreeMap<ComponentName, Box<dyn ArrowArray>>,
}
Expand description
A single row’s worth of data (i.e. a single log call).
Send those to the batcher to build up a Chunk
.
Fields§
§row_id: RowId
Auto-generated TUID
, uniquely identifying this event and keeping track of the client’s
wall-clock.
-§timepoint: TimePoint
User-specified [TimePoint
] for this event.
+§timepoint: TimePoint
User-specified TimePoint
for this event.
§components: BTreeMap<ComponentName, Box<dyn ArrowArray>>
The component data.
Each array is a single component, i.e. not a list array.
Implementations§
source§impl PendingRow
source§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
+) -> Selfsource§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
That’s very wasteful, probably don’t do that outside of testing, or unless you have very
good reasons to.
See also Self::many_into_chunks
.
sourcepub fn many_into_chunks(
- entity_path: EntityPath,
+ entity_path: EntityPath,
chunk_max_rows_if_unsorted: u64,
rows: Vec<Self>
) -> impl Iterator<Item = ChunkResult<Chunk>>
This turns a batch of PendingRow
s into a Chunk
.
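A minimal sketch of the single-row path described above, with the caveat the docs give about it being wasteful outside of tests. row_components is a hypothetical helper, the entity path is illustrative, and ChunkResult is assumed to be an alias for Result<_, ChunkError>.

use std::collections::BTreeMap;

use re_chunk::{
    ArrowArray, Chunk, ChunkError, ComponentName, EntityPath, PendingRow, RowId, TimePoint,
};

// Hypothetical helper producing the per-component arrays for this one row.
fn row_components() -> BTreeMap<ComponentName, Box<dyn ArrowArray>> {
    unimplemented!()
}

fn single_row_chunk() -> Result<Chunk, ChunkError> {
    let row = PendingRow {
        row_id: RowId::new(),            // auto-generated, tracks the client's wall-clock
        timepoint: TimePoint::default(), // user-specified time point for this event
        components: row_components(),
    };

    // Fine for tests; prefer the batcher (or `many_into_chunks`) for real workloads.
    row.into_chunk(EntityPath::from("points/cloud"))
}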
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
index 07d0848b0fb12..3ea02901b1427 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
@@ -1,11 +1,11 @@
PendingTimeColumn in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingTimeColumn
source · struct PendingTimeColumn {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
is_sorted: bool,
- time_range: ResolvedTimeRange,
+ time_range: ResolvedTimeRange,
}
Expand description
Helper class used to buffer time data.
See PendingRow::many_into_chunks
for usage.
-Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
source§impl PendingTimeColumn
sourcefn new(timeline: Timeline) -> Self
Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
Auto Trait Implementations§
§impl Freeze for PendingTimeColumn
§impl RefUnwindSafe for PendingTimeColumn
§impl Send for PendingTimeColumn
§impl Sync for PendingTimeColumn
§impl Unpin for PendingTimeColumn
§impl UnwindSafe for PendingTimeColumn
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
index c8b1f7d923db4..3ec8f158f1882 100644
--- a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk::builder - Rust Struct re_chunk::builder::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
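The sparse variants above differ from with_row only in that each component is optional per row. A hedged sketch under the same assumptions as before (hypothetical component names and array helpers, TimePoint: Default, ChunkResult as Result<_, ChunkError>):

use re_chunk::{ArrowArray, Chunk, ChunkError, ComponentName, EntityPath, RowId, TimePoint};

// Hypothetical helpers returning boxed arrow arrays for each component.
fn positions() -> Box<dyn ArrowArray> {
    unimplemented!()
}
fn colors() -> Box<dyn ArrowArray> {
    unimplemented!()
}

fn sparse_chunk() -> Result<Chunk, ChunkError> {
    let pos_name: ComponentName = "example.MyPositions".into();
    let col_name: ComponentName = "example.MyColors".into();

    Chunk::builder(EntityPath::from("points/cloud"))
        // Row 0: both components are present.
        .with_sparse_row(
            RowId::new(),
            TimePoint::default(),
            [(pos_name.clone(), Some(positions())), (col_name.clone(), Some(colors()))],
        )
        // Row 1: positions only; the color column stays `None` (sparse) for this row.
        .with_sparse_row(
            RowId::new(),
            TimePoint::default(),
            [(pos_name, Some(positions())), (col_name, None)],
        )
        .build()
}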
diff --git a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
index b0abcc5ca6455..80d70dc20884c 100644
--- a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
@@ -1,11 +1,11 @@
TimeColumnBuilder in re_chunk::builder - Rust Struct re_chunk::builder::TimeColumnBuilder
source · pub struct TimeColumnBuilder {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
}
Expand description
Helper to incrementally build a TimeColumn
.
Can be created using TimeColumn::builder
.
-Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
+Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
See also TimeColumn::builder
.
-sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
+sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
sourcepub fn build(self) -> TimeColumn
Builds and returns the final TimeColumn
.
Auto Trait Implementations§
§impl Freeze for TimeColumnBuilder
§impl RefUnwindSafe for TimeColumnBuilder
§impl Send for TimeColumnBuilder
§impl Sync for TimeColumnBuilder
§impl Unpin for TimeColumnBuilder
§impl UnwindSafe for TimeColumnBuilder
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
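A minimal sketch of the row-by-row flow above. The constructor names Timeline::new_sequence and TimeInt::new_temporal come from re_log_types and are assumptions here, as is TimeColumn being re-exported at the crate root.

use re_chunk::{TimeColumn, TimeInt, Timeline};

fn frame_column() -> TimeColumn {
    // Assumed constructor from `re_log_types`: a sequence timeline named "frame".
    let mut builder = TimeColumn::builder(Timeline::new_sequence("frame"));

    for frame in 0..3_i64 {
        // Assumed constructor from `re_log_types` for a plain temporal value.
        builder.with_row(TimeInt::new_temporal(frame));
    }

    builder.build()
}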
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
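A short sketch contrasting the two index iterators above; the component name is illustrative and ComponentName: From<&str> is assumed.

use re_chunk::{Chunk, ComponentName, Timeline};

fn inspect(chunk: &Chunk, timeline: &Timeline) {
    // Every row's (TimeInt, RowId) index on this timeline (ignored if the chunk is static).
    for (time, row_id) in chunk.iter_indices(timeline) {
        println!("{time:?} -> {row_id:?}");
    }

    // Sparse-aware: rows without data for `component` are skipped.
    let component: ComponentName = "example.MyPoints".into();
    for (time, row_id) in chunk.iter_component_indices(timeline, &component) {
        println!("{time:?} -> {row_id:?} (has data for {component:?})");
    }
}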
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
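A small sketch of the transport round-trip mentioned above, sorting first as the docs recommend; it assumes ChunkResult is an alias for Result<_, ChunkError>.

use re_chunk::{Chunk, ChunkError};

fn roundtrip(mut chunk: Chunk) -> Result<Chunk, ChunkError> {
    // No-op if the chunk is already sorted; recommended before transport.
    chunk.sort_if_unsorted();

    let transport = chunk.to_transport()?;
    Chunk::from_transport(&transport)
}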
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
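A sketch of the three bulk constructors listed above; the timeline names are illustrative and TimelineName: From<&str> is assumed.

use re_chunk::TimeColumn;

fn example_columns() -> (TimeColumn, TimeColumn, TimeColumn) {
    // Sequence timeline, e.g. frame numbers.
    let frames = TimeColumn::new_sequence("frame", [0_i64, 1, 2]);
    // Temporal timeline, given in seconds ...
    let sim_time = TimeColumn::new_seconds("sim_time", [0.0_f64, 0.1, 0.2]);
    // ... or directly in nanoseconds.
    let log_time = TimeColumn::new_nanos("log_time", [0_i64, 1_000_000, 2_000_000]);

    (frames, sim_time, log_time)
}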
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within.
Returns the single static RowId
if the chunk is static.
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
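A minimal sketch of the unit-chunk path described above: a chunk holding exactly one row can be converted into a UnitChunkShared and then queried for its single index.

use re_chunk::{Chunk, RowId, TimeInt, Timeline};

fn single_index(chunk: Chunk, timeline: &Timeline) -> Option<(TimeInt, RowId)> {
    // `None` unless the chunk contains exactly one row.
    let unit = chunk.into_unit()?;

    // The single (TimeInt, RowId) index on `timeline`, or the static index if the chunk is static.
    unit.index(timeline)
}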
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
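A sketch of pre-allocating RowIds for a batch of rows while keeping the wall-clock ordering that garbage collection relies on; RowId::new is the constructor the docs above tell you to prefer.

use re_chunk::RowId;

fn row_ids_for_batch(num_rows: u64) -> Vec<RowId> {
    // Fresh, wall-clock ordered id for the first row of the batch.
    let first = RowId::new();

    // `incremented_by(i)` is the i-next logical id, equivalent to calling `next()` i times.
    (0..num_rows).map(|i| first.incremented_by(i)).collect()
}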
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
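A small sketch showing the point of the re-exports listed above: the vocabulary types can be imported from `re_chunk` directly. `Timeline::new_sequence`, `EntityPath::from(&str)`, and `TimePoint::default()` are assumed constructors from the re-exported crates and are not shown on this page.

```rust
// The re-exports above mean downstream code can pull the common vocabulary types
// straight from `re_chunk` instead of depending on `re_log_types` directly.
use re_chunk::{EntityPath, TimePoint, Timeline};

fn main() {
    let entity_path = EntityPath::from("points/observed");
    let frame = Timeline::new_sequence("frame"); // assumed constructor, not shown on this page
    let static_timepoint = TimePoint::default(); // an empty timepoint marks static data

    println!("{entity_path} on {frame:?}, static timepoint: {static_timepoint:?}");
}
```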
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
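A read-only sketch that sticks to the accessors documented on this page (`entity_path`, `num_rows`, `iter_indices`, `is_timeline_sorted`, and friends); nothing here is assumed beyond the caller already holding a `Chunk`.

```rust
use re_chunk::{Chunk, Timeline};

/// A read-only walk over a chunk, using only the accessors documented above.
fn summarize(chunk: &Chunk, timeline: &Timeline) {
    println!(
        "entity {}: {} rows, {} timelines, {} components (static: {})",
        chunk.entity_path(),
        chunk.num_rows(),
        chunk.num_timelines(),
        chunk.num_components(),
        chunk.is_static(),
    );

    // `iter_indices` yields `(TimeInt, RowId)` pairs; for static chunks `timeline` is ignored.
    for (time, row_id) in chunk.iter_indices(timeline) {
        let _ = (time, row_id); // e.g. look up the per-row component data here
    }

    // O(1) cached check; actually sorting would need `&mut Chunk` and `sort_if_unsorted`.
    if !chunk.is_timeline_sorted(timeline) {
        eprintln!("chunk is not sorted on {timeline:?}");
    }
}
```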
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row on the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
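A sketch of the batcher lifecycle described above. `ChunkBatcherConfig::default()`, `RowId::new`, and building `PendingRow` via a struct literal are assumptions; the push/flush/shutdown flow follows the documented methods.

```rust
use std::collections::BTreeMap;

use re_chunk::{ChunkBatcher, ChunkBatcherConfig, EntityPath, PendingRow, RowId, TimePoint};

fn main() {
    // `ChunkBatcherConfig::default()` is an assumption (its page is not part of this excerpt);
    // substitute however you normally obtain a config.
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())
        .expect("failed to spawn the batching thread");

    // A deliberately empty row, just to exercise the API: real rows carry one Arrow array
    // per component. All three fields are public, per the `PendingRow` page.
    let row = PendingRow {
        row_id: RowId::new(),            // the allocator recommended by the RowId docs
        timepoint: TimePoint::default(), // empty timepoint: static data
        components: BTreeMap::new(),
    };

    // The row's size is computed on the batching thread, so this returns quickly.
    batcher.push_row(EntityPath::from("points/observed"), row);

    // Ask the pipeline to flush without blocking; dropping `batcher` triggers a clean shutdown.
    batcher.flush_async();
}
```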
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
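A sketch of the row-oriented builder. The component name `user.ScalarReading` is hypothetical, `ComponentName::from(&str)` and `RowId::new` are assumed, and the example depends on `arrow2` directly (it must match the version whose `Array` is re-exported as `ArrowArray`).

```rust
use arrow2::array::PrimitiveArray;
use re_chunk::{ArrowArray, Chunk, ComponentName, EntityPath, RowId, TimePoint};

fn main() {
    // Hypothetical component name and payload, purely for illustration; real code would use
    // the generated component types from `re_types` (e.g. via `with_component_batches`).
    let component = ComponentName::from("user.ScalarReading");
    let values = PrimitiveArray::<f32>::from_vec(vec![1.0, 2.0, 3.0]);

    let chunk = Chunk::builder(EntityPath::from("points/observed"))
        .with_row(
            RowId::new(),          // assumed allocator, see the RowId docs
            TimePoint::default(),  // empty timepoint: static data
            [(component, Box::new(values) as Box<dyn ArrowArray>)],
        )
        .build()
        .expect("the chunk invariants should hold");

    assert_eq!(chunk.num_rows(), 1);
}
```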
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
index 4be8abe3ef9d3..17dcb20499e9e 100644
--- a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
+++ b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
@@ -5,7 +5,7 @@
}Fields§
§tx_cmds: Sender<Command>
The one and only entrypoint into the pipeline: this is never cloned nor publicly exposed,
therefore the Drop
implementation is guaranteed that no more data can come in while it’s
running.
-§rx_chunks: Option<Receiver<Chunk>>
§cmds_to_chunks_handle: Option<JoinHandle<()>>
Implementations§
source§impl ChunkBatcherInner
sourcefn push_chunk(&self, chunk: Chunk)
sourcefn push_row(&self, entity_path: EntityPath, row: PendingRow)
sourcefn flush_async(&self)
sourcefn flush_blocking(&self)
sourcefn send_cmd(&self, cmd: Command)
Trait Implementations§
Auto Trait Implementations§
§impl Freeze for ChunkBatcherInner
§impl !RefUnwindSafe for ChunkBatcherInner
§impl Send for ChunkBatcherInner
§impl Sync for ChunkBatcherInner
§impl Unpin for ChunkBatcherInner
§impl !UnwindSafe for ChunkBatcherInner
Blanket Implementations§
source§impl<T> Any for Twhere
+§rx_chunks: Option<Receiver<Chunk>>
§cmds_to_chunks_handle: Option<JoinHandle<()>>
Implementations§
source§impl ChunkBatcherInner
sourcefn push_chunk(&self, chunk: Chunk)
sourcefn push_row(&self, entity_path: EntityPath, row: PendingRow)
sourcefn flush_async(&self)
sourcefn flush_blocking(&self)
sourcefn send_cmd(&self, cmd: Command)
Trait Implementations§
Auto Trait Implementations§
§impl Freeze for ChunkBatcherInner
§impl !RefUnwindSafe for ChunkBatcherInner
§impl Send for ChunkBatcherInner
§impl Sync for ChunkBatcherInner
§impl Unpin for ChunkBatcherInner
§impl !UnwindSafe for ChunkBatcherInner
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
index e25f035d07ab8..0e561416399d9 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
@@ -1,23 +1,23 @@
PendingRow in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingRow
source · pub struct PendingRow {
pub row_id: RowId,
- pub timepoint: TimePoint,
+ pub timepoint: TimePoint,
pub components: BTreeMap<ComponentName, Box<dyn ArrowArray>>,
}
Expand description
A single row’s worth of data (i.e. a single log call).
Send those to the batcher to build up a Chunk
.
Fields§
§row_id: RowId
Auto-generated TUID
, uniquely identifying this event and keeping track of the client’s
wall-clock.
-§timepoint: TimePoint
User-specified [TimePoint
] for this event.
+§timepoint: TimePoint
User-specified TimePoint
for this event.
§components: BTreeMap<ComponentName, Box<dyn ArrowArray>>
The component data.
Each array is a single component, i.e. not a list array.
Implementations§
source§impl PendingRow
source§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
+) -> Selfsource§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
That’s very wasteful, probably don’t do that outside of testing, or unless you have very
good reasons to.
See also Self::many_into_chunks
.
sourcepub fn many_into_chunks(
- entity_path: EntityPath,
+ entity_path: EntityPath,
chunk_max_rows_if_unsorted: u64,
rows: Vec<Self>
) -> impl Iterator<Item = ChunkResult<Chunk>>
This turns a batch of PendingRow
s into a Chunk
.
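A sketch of batching rows without the `ChunkBatcher`, using `many_into_chunks` as documented above. `RowId::new` and the struct-literal construction of `PendingRow` (all fields are public) are the only assumptions beyond the deliberately empty component data.

```rust
use std::collections::BTreeMap;

use re_chunk::{EntityPath, PendingRow, RowId, TimePoint};

fn main() {
    // Three rows with no component data, just to exercise the API; real rows carry one
    // Arrow array per component.
    let rows: Vec<PendingRow> = (0..3)
        .map(|_| PendingRow {
            row_id: RowId::new(),            // assumed allocator, see the RowId docs
            timepoint: TimePoint::default(), // static
            components: BTreeMap::new(),
        })
        .collect();

    // Coalesce the rows into as few chunks as possible, capping unsorted runs at 1024 rows.
    for chunk in PendingRow::many_into_chunks(EntityPath::from("points/observed"), 1024, rows) {
        let chunk = chunk.expect("rows should be well-formed");
        println!("built a chunk with {} rows", chunk.num_rows());
    }
}
```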
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
index 07d0848b0fb12..3ea02901b1427 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
@@ -1,11 +1,11 @@
PendingTimeColumn in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingTimeColumn
source · struct PendingTimeColumn {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
is_sorted: bool,
- time_range: ResolvedTimeRange,
+ time_range: ResolvedTimeRange,
}
Expand description
Helper class used to buffer time data.
See PendingRow::many_into_chunks
for usage.
-Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
source§impl PendingTimeColumn
sourcefn new(timeline: Timeline) -> Self
Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
Auto Trait Implementations§
§impl Freeze for PendingTimeColumn
§impl RefUnwindSafe for PendingTimeColumn
§impl Send for PendingTimeColumn
§impl Sync for PendingTimeColumn
§impl Unpin for PendingTimeColumn
§impl UnwindSafe for PendingTimeColumn
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
index c8b1f7d923db4..3ec8f158f1882 100644
--- a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk::builder - Rust Struct re_chunk::builder::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
diff --git a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
index b0abcc5ca6455..80d70dc20884c 100644
--- a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
@@ -1,11 +1,11 @@
TimeColumnBuilder in re_chunk::builder - Rust Struct re_chunk::builder::TimeColumnBuilder
source · pub struct TimeColumnBuilder {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
}
Expand description
Helper to incrementally build a TimeColumn
.
Can be created using TimeColumn::builder
.
-Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
+Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
See also TimeColumn::builder
.
-sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
+sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
sourcepub fn build(self) -> TimeColumn
Builds and returns the final TimeColumn
.
Auto Trait Implementations§
§impl Freeze for TimeColumnBuilder
§impl RefUnwindSafe for TimeColumnBuilder
§impl Send for TimeColumnBuilder
§impl Sync for TimeColumnBuilder
§impl Unpin for TimeColumnBuilder
§impl UnwindSafe for TimeColumnBuilder
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
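A sketch of the builder's intended call pattern (`with_row` borrows, `build` consumes). `Timeline::new_sequence`, `TimeInt::new_temporal`, and the root re-export of `TimeColumnBuilder` are assumptions; the method signatures themselves are shown on this page.

```rust
use re_chunk::{TimeColumnBuilder, TimeInt, Timeline};

fn main() {
    // A sequence timeline named "frame" (`Timeline::new_sequence` is an assumed constructor).
    let frame = Timeline::new_sequence("frame");

    // `with_row` takes `&mut self` while `build` consumes the builder,
    // so push all rows first and call `build` once at the end.
    let mut builder = TimeColumnBuilder::new(frame);
    for t in 0..3_i64 {
        builder.with_row(TimeInt::new_temporal(t)); // `new_temporal` is an assumed constructor
    }
    let time_column = builder.build();

    // e.g. hand the finished column to `Chunk::add_timeline`.
    let _ = time_column;
}
```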
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
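The four methods above form two round trips. A hedged sketch, assuming the ChunkResult alias is importable from the crate root (its exact path is not shown on this page):

    use re_chunk::{Chunk, ChunkResult};

    fn transport_round_trip(chunk: &Chunk) -> ChunkResult<Chunk> {
        // In-memory -> transport representation; sorting first is recommended above.
        let transport = chunk.to_transport()?;
        // ...ship `transport` somewhere, then rebuild the in-memory chunk.
        Chunk::from_transport(&transport)
    }

    fn arrow_msg_round_trip(chunk: &Chunk) -> ChunkResult<Chunk> {
        // Same idea, but via `ArrowMsg`.
        let msg = chunk.to_arrow_msg()?;
        Chunk::from_arrow_msg(&msg)
    }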
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
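For illustration, a sketch of the three row-free constructors above; the timeline names are made up and the &str-to-TimelineName conversion is assumed. Note the item types the signatures require: i64 for sequences and nanoseconds, f64 for seconds.

    use re_chunk::TimeColumn;

    fn example_time_columns() -> (TimeColumn, TimeColumn, TimeColumn) {
        // A plain integer sequence, e.g. a frame counter.
        let frames = TimeColumn::new_sequence("frame", [0_i64, 1, 2, 3]);
        // Wall-clock time expressed in seconds (f64 items)...
        let seconds = TimeColumn::new_seconds("log_time", [0.0_f64, 0.1, 0.2, 0.3]);
        // ...or directly in nanoseconds (i64 items).
        let nanos = TimeColumn::new_nanos("log_time", [0_i64, 100_000_000, 200_000_000]);
        (frames, seconds, nanos)
    }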
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
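A sketch of how the per-component ranges might be consumed, assuming a Chunk and one of its TimeColumns are already in hand; Chunk::components() returns the map of per-component list arrays this method expects.

    use re_chunk::{Chunk, TimeColumn};

    fn print_component_time_ranges(chunk: &Chunk, time_column: &TimeColumn) {
        // One (possibly narrower) time range per component, since columns are sparse.
        for (component, range) in time_column.time_range_per_component(chunk.components()) {
            println!("{component}: {range:?}");
        }
    }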
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
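A hedged sketch of the two accessors above, with the unit chunk and timeline assumed to come from elsewhere:

    use re_chunk::{Timeline, UnitChunkShared};

    fn describe_unit(unit: &UnitChunkShared, timeline: &Timeline) {
        // For a static unit chunk the `timeline` argument is ignored and the
        // single static index is returned instead.
        if let Some((time, row_id)) = unit.index(timeline) {
            println!("single row {row_id:?} at {time:?}");
        }
        // Same lookup, without the time part.
        let _only_row_id = unit.row_id();
    }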
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
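A short sketch of the component-filtered variant; the component name below is purely illustrative, and the &str conversion into ComponentName is assumed.

    use re_chunk::{Chunk, ComponentName};

    fn timepoints_with_component_data(chunk: &Chunk) {
        let component = ComponentName::from("rerun.components.Position3D"); // hypothetical
        // Unlike `iter_timepoints`, this only yields rows where `component` has data.
        for timepoint in chunk.iter_component_timepoints(&component) {
            println!("{timepoint:?}");
        }
    }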
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
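A minimal sketch of the id helpers described on this page; ChunkId::new is only referenced above, not defined here, so it is an assumption, and the assertions rely on the derived comparison impls.

    use re_chunk::ChunkId;

    fn chunk_id_basics() {
        // Fresh, time-based id: the safe default the docs above point to.
        let a = ChunkId::new();
        // Purely logical successor: beware, wrong usage can easily lead to conflicts.
        let b = a.next();
        assert!(a < b);
        // Round-trip through the raw 128-bit representation (see further down this page).
        assert_eq!(a, ChunkId::from_u128(a.as_u128()));
    }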
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
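Only the private fields are shown on this page; assuming a LatestAtQuery::new(timeline, at) constructor exists (an assumption, not confirmed here), usage would look roughly like:

    use re_chunk::{LatestAtQuery, TimeInt, Timeline};

    fn latest_at(timeline: Timeline, at: TimeInt) -> LatestAtQuery {
        // "Latest-at" semantics: everything logged at or before `at` on `timeline`,
        // keeping only the most recent value per component.
        LatestAtQuery::new(timeline, at) // assumed constructor
    }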
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
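A hedged sketch chaining the constructor and the builder-style toggles above; the timeline and resolved time range are taken as inputs rather than constructed, since their own constructors are not documented on this page.

    use re_chunk::{RangeQuery, Timeline};
    use re_chunk::external::re_log_types::ResolvedTimeRange;

    fn range_query(timeline: Timeline, range: ResolvedTimeRange) -> RangeQuery {
        // Guaranteed never to include TimeInt::STATIC, per the docs above.
        RangeQuery::new(timeline, range)
            .keep_extra_timelines(true)
            .keep_extra_components(false)
            .include_extended_bounds(false)
    }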
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row on the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
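A sketch of the pipeline described above, feeding pre-built chunks and flushing asynchronously; the Default impl on ChunkBatcherConfig is an assumption, and error handling is reduced to a boxed error.

    use re_chunk::{Chunk, ChunkBatcher, ChunkBatcherConfig};

    fn batch_and_flush(chunks: Vec<Chunk>) -> Result<(), Box<dyn std::error::Error>> {
        // The config's thresholds decide when pending rows are coalesced into chunks.
        let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())?; // Default is assumed
        for chunk in chunks {
            // Pre-built chunks skip the row-coalescing stage.
            batcher.push_chunk(chunk);
        }
        // Fire-and-forget flush; dropping the batcher also triggers a clean shutdown.
        batcher.flush_async();
        Ok(())
    }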
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
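The builder methods above compose into a short row-oriented construction. A sketch, assuming `AsComponents` lives in `re_types_core` (only `ComponentName` is confirmed to come from that crate here) and that `ChunkResult` and `RowId::new` are available at the crate root:

```rust
use re_chunk::{Chunk, ChunkResult, EntityPath, RowId, TimePoint};
use re_types_core::AsComponents; // assumed path, not shown on this page

/// Builds a single-row chunk by destructuring an archetype into component columns.
fn single_row_chunk(entity_path: EntityPath, archetype: &dyn AsComponents) -> ChunkResult<Chunk> {
    Chunk::builder(entity_path)
        .with_archetype(RowId::new(), TimePoint::default(), archetype) // `RowId::new` is assumed
        .build()
}
```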
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
diff --git a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
index 4be8abe3ef9d3..17dcb20499e9e 100644
--- a/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
+++ b/docs/rust/head/re_chunk/batcher/struct.ChunkBatcherInner.html
@@ -5,7 +5,7 @@ }
Fields§
§tx_cmds: Sender<Command>
The one and only entrypoint into the pipeline: this is never cloned nor publicly exposed,
therefore the Drop
implementation can guarantee that no more data will come in while it’s
running.
rx_chunks: Option<Receiver<Chunk>>
§cmds_to_chunks_handle: Option<JoinHandle<()>>
Implementations§
source§impl ChunkBatcherInner
impl ChunkBatcherInner
fn push_chunk(&self, chunk: Chunk)
fn push_row(&self, entity_path: EntityPath, row: PendingRow)
fn flush_async(&self)
fn flush_blocking(&self)
fn send_cmd(&self, cmd: Command)
Trait Implementations§
Auto Trait Implementations§
impl Freeze for ChunkBatcherInner
impl !RefUnwindSafe for ChunkBatcherInner
impl Send for ChunkBatcherInner
impl Sync for ChunkBatcherInner
impl Unpin for ChunkBatcherInner
impl !UnwindSafe for ChunkBatcherInner
Blanket Implementations§
source§impl<T> Any for Twhere
+§rx_chunks: Option<Receiver<Chunk>>
§cmds_to_chunks_handle: Option<JoinHandle<()>>
Implementations§
source§impl ChunkBatcherInner
sourcefn push_chunk(&self, chunk: Chunk)
sourcefn push_row(&self, entity_path: EntityPath, row: PendingRow)
sourcefn flush_async(&self)
sourcefn flush_blocking(&self)
sourcefn send_cmd(&self, cmd: Command)
Trait Implementations§
Auto Trait Implementations§
§impl Freeze for ChunkBatcherInner
§impl !RefUnwindSafe for ChunkBatcherInner
§impl Send for ChunkBatcherInner
§impl Sync for ChunkBatcherInner
§impl Unpin for ChunkBatcherInner
§impl !UnwindSafe for ChunkBatcherInner
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
index e25f035d07ab8..0e561416399d9 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
@@ -1,23 +1,23 @@
PendingRow in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingRow
source · pub struct PendingRow {
pub row_id: RowId,
- pub timepoint: TimePoint,
+ pub timepoint: TimePoint,
pub components: BTreeMap<ComponentName, Box<dyn ArrowArray>>,
}
Expand description
A single row’s worth of data (i.e. a single log call).
Send those to the batcher to build up a Chunk
.
Fields§
§row_id: RowId
Auto-generated TUID
, uniquely identifying this event and keeping track of the client’s
wall-clock.
-§timepoint: TimePoint
User-specified [TimePoint
] for this event.
+§timepoint: TimePoint
User-specified TimePoint
for this event.
§components: BTreeMap<ComponentName, Box<dyn ArrowArray>>
The component data.
Each array is a single component, i.e. not a list array.
Implementations§
source§impl PendingRow
source§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
+) -> Selfsource§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
That’s very wasteful, probably don’t do that outside of testing, or unless you have very
good reasons to.
See also Self::many_into_chunks
.
sourcepub fn many_into_chunks(
- entity_path: EntityPath,
+ entity_path: EntityPath,
chunk_max_rows_if_unsorted: u64,
rows: Vec<Self>
) -> impl Iterator<Item = ChunkResult<Chunk>>
This turns a batch of PendingRow
s into a Chunk
.
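A sketch of the testing-only path described above (`into_chunk`), assuming `RowId::new` exists and that an empty timepoint and component map are acceptable inputs:

```rust
use std::collections::BTreeMap;

use re_chunk::{ChunkResult, EntityPath, PendingRow, RowId, TimePoint};

fn row_to_chunk() -> ChunkResult<()> {
    let row = PendingRow {
        row_id: RowId::new(),            // assumed constructor
        timepoint: TimePoint::default(), // assumed: empty/static timepoint
        components: BTreeMap::new(),     // real component data would go here
    };

    // Wasteful outside of tests: one chunk per row defeats the point of batching.
    let chunk = row.into_chunk(EntityPath::from("points/cloud"))?;
    println!("rows in chunk: {}", chunk.num_rows()); // expected: 1
    Ok(())
}
```

For production code, `many_into_chunks` (or the batcher itself) is the intended route.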
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
index 07d0848b0fb12..3ea02901b1427 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
@@ -1,11 +1,11 @@
PendingTimeColumn in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingTimeColumn
source · struct PendingTimeColumn {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
is_sorted: bool,
- time_range: ResolvedTimeRange,
+ time_range: ResolvedTimeRange,
}
Expand description
Helper class used to buffer time data.
See PendingRow::many_into_chunks
for usage.
-Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
source§impl PendingTimeColumn
sourcefn new(timeline: Timeline) -> Self
Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
Auto Trait Implementations§
§impl Freeze for PendingTimeColumn
§impl RefUnwindSafe for PendingTimeColumn
§impl Send for PendingTimeColumn
§impl Sync for PendingTimeColumn
§impl Unpin for PendingTimeColumn
§impl UnwindSafe for PendingTimeColumn
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
index c8b1f7d923db4..3ec8f158f1882 100644
--- a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk::builder - Rust Struct re_chunk::builder::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
diff --git a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
index b0abcc5ca6455..80d70dc20884c 100644
--- a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
@@ -1,11 +1,11 @@
TimeColumnBuilder in re_chunk::builder - Rust Struct re_chunk::builder::TimeColumnBuilder
source · pub struct TimeColumnBuilder {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
}
Expand description
Helper to incrementally build a TimeColumn
.
Can be created using TimeColumn::builder
.
-Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
+Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
See also TimeColumn::builder
.
-sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
+sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
sourcepub fn build(self) -> TimeColumn
Builds and returns the final TimeColumn
.
Auto Trait Implementations§
§impl Freeze for TimeColumnBuilder
§impl RefUnwindSafe for TimeColumnBuilder
§impl Send for TimeColumnBuilder
§impl Sync for TimeColumnBuilder
§impl Unpin for TimeColumnBuilder
§impl UnwindSafe for TimeColumnBuilder
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
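The `TimeColumnBuilder` page above is row-oriented; here is a sketch of filling one in a loop. It assumes `TimeColumn` is re-exported at the crate root and that `Timeline::new_sequence` and `TimeInt::new_temporal` exist in `re_log_types` — neither constructor is shown on this page.

```rust
use re_chunk::{TimeColumn, TimeInt, Timeline};

fn build_frame_column() -> TimeColumn {
    // Assumed `re_log_types` constructors (re-exported by `re_chunk`).
    let frame = Timeline::new_sequence("frame");

    let mut builder = TimeColumn::builder(frame);
    for t in 0i64..4 {
        // `with_row` takes `&mut self`, so the builder can be filled in a loop.
        builder.with_row(TimeInt::new_temporal(t));
    }
    builder.build()
}
```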
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
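Putting a few of the `Chunk` accessors above together: a sketch that sorts a chunk on one timeline and walks its `(TimeInt, RowId)` indices. `Timeline::new_sequence` is an assumed `re_log_types` constructor, and the chunk itself is assumed to come from elsewhere (e.g. the builder).

```rust
use re_chunk::{Chunk, Timeline};

fn dump_frame_indices(chunk: &Chunk) {
    let frame = Timeline::new_sequence("frame"); // assumed constructor

    // No-op on the happy path (already sorted); note the result keeps the old `ChunkId`.
    let sorted = chunk.sorted_by_timeline_if_unsorted(&frame);

    // For a static chunk, the timeline argument is ignored.
    for (time, row_id) in sorted.iter_indices(&frame) {
        println!("{time:?} -> {row_id:?}");
    }
}
```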
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
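Column-oriented construction via `new_sequence`, whose signature is shown above; the only assumption in this sketch is that `TimeColumn` is re-exported at the crate root (its page lives under `re_chunk::chunk`).

```rust
use re_chunk::TimeColumn;

fn frame_column() -> TimeColumn {
    // `new_sequence` accepts anything convertible into i64, per the signature above.
    let frames = TimeColumn::new_sequence("frame", [0i64, 1, 2, 5]);
    debug_assert_eq!(frames.num_rows(), 4);
    debug_assert!(!frames.is_empty());
    frames
}
```

The resulting column's `time_range()` reports only the min and max of the values, which, as noted above, need not be contiguous.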
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
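A sketch of how a single-row chunk is typically reached: `Chunk::into_unit` (listed on the `Chunk` page above) hands back a `UnitChunkShared`, whose `index` and `row_id` accessors are documented here. `Timeline::new_sequence` is an assumed constructor, and the behavior of `into_unit` on multi-row chunks is an assumption as well.

```rust
use re_chunk::{Chunk, Timeline};

fn single_row_lookup(chunk: Chunk) {
    let frame = Timeline::new_sequence("frame"); // assumed constructor

    // `into_unit` presumably returns `None` unless the chunk holds exactly one row.
    if let Some(unit) = chunk.into_unit() {
        // For a static chunk the timeline is ignored and the static index is returned.
        let index = unit.index(&frame);
        let row_id = unit.row_id();
        println!("index: {index:?}, row id: {row_id:?}");
    }
}
```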
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
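A sketch of the `ChunkId` helpers documented above. `ChunkId::new` is the constructor the page itself recommends (“Prefer ChunkId::new when unsure”), but its signature is not shown here, so it is assumed.

```rust
use re_chunk::ChunkId;

fn id_walkthrough() {
    let id = ChunkId::new(); // assumed constructor, recommended by the docs above

    // Approximately when the ID was minted (the upper bits of the underlying Tuid).
    let created_ns = id.nanoseconds_since_epoch();

    // Advanced use only: wrong usage of `next` can easily lead to conflicts.
    let successor = id.next();

    println!("{created_ns} {} {}", id.as_u128(), successor.as_u128());
}
```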
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
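The RangeQuery builder methods above chain as shown below. This is a sketch: `Timeline::new_sequence` and `ResolvedTimeRange::new` come from `re_log_types` and are assumed here, since neither signature appears in this diff.

```rust
use re_chunk::external::re_log_types::ResolvedTimeRange;
use re_chunk::{RangeQuery, Timeline};

/// A range query over frames 10..=100 that also keeps the extra timelines
/// and components around.
fn frame_window_query() -> RangeQuery {
    let timeline = Timeline::new_sequence("frame"); // assumed constructor
    RangeQuery::new(timeline, ResolvedTimeRange::new(10, 100)) // assumed constructor
        .keep_extra_timelines(true)
        .keep_extra_components(true)
}
```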
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
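The sorting and iteration methods listed above compose without extra machinery; a minimal sketch using only signatures shown in this diff:

```rust
use re_chunk::{Chunk, RowId, TimeInt, Timeline};

/// Sorts `chunk` on `timeline` (a no-op if it is already sorted there) and
/// collects its `(time, row-id)` index for that timeline.
fn sorted_indices(chunk: &Chunk, timeline: &Timeline) -> Vec<(TimeInt, RowId)> {
    let sorted = chunk.sorted_by_timeline_if_unsorted(timeline);
    sorted.iter_indices(timeline).collect()
}
```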
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
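A minimal sketch of driving the batching pipeline with pre-built chunks, assuming `ChunkBatcherConfig` implements `Default` (its contents are not part of this diff):

```rust
use re_chunk::{Chunk, ChunkBatcher, ChunkBatcherConfig};

fn batch_chunks(chunks: Vec<Chunk>) {
    // Assumed `Default` impl for the config; see the note above.
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())
        .expect("failed to spawn the batching thread");

    for chunk in chunks {
        batcher.push_chunk(chunk);
    }

    // Initiates a flush and returns immediately; dropping `batcher` at the end
    // of this scope then triggers a clean shutdown, per the docs above.
    batcher.flush_async();
}
```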
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
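The row-oriented construction path above can be sketched as follows, with the caller supplying already-serialized component arrays; `RowId::new()` taking no arguments is an assumption.

```rust
use re_chunk::{ArrowArray, Chunk, ComponentName, EntityPath, RowId, TimePoint};

/// Builds a single-row chunk from pre-serialized component data.
fn single_row_chunk(
    entity_path: EntityPath,
    timepoint: TimePoint,
    components: Vec<(ComponentName, Box<dyn ArrowArray>)>,
) -> Chunk {
    Chunk::builder(entity_path)
        .with_row(RowId::new(), timepoint, components) // assumed `RowId::new()`
        .build()
        .expect("malformed chunk data")
}
```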
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
impl<T> Any for Twhere
+§rx_chunks: Option<Receiver<Chunk>>
§cmds_to_chunks_handle: Option<JoinHandle<()>>
Implementations§
source§impl ChunkBatcherInner
sourcefn push_chunk(&self, chunk: Chunk)
sourcefn push_row(&self, entity_path: EntityPath, row: PendingRow)
sourcefn flush_async(&self)
sourcefn flush_blocking(&self)
sourcefn send_cmd(&self, cmd: Command)
Trait Implementations§
Auto Trait Implementations§
§impl Freeze for ChunkBatcherInner
§impl !RefUnwindSafe for ChunkBatcherInner
§impl Send for ChunkBatcherInner
§impl Sync for ChunkBatcherInner
§impl Unpin for ChunkBatcherInner
§impl !UnwindSafe for ChunkBatcherInner
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
index e25f035d07ab8..0e561416399d9 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
@@ -1,23 +1,23 @@
PendingRow in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingRow
source · pub struct PendingRow {
pub row_id: RowId,
- pub timepoint: TimePoint,
+ pub timepoint: TimePoint,
pub components: BTreeMap<ComponentName, Box<dyn ArrowArray>>,
}
Expand description
A single row’s worth of data (i.e. a single log call).
Send those to the batcher to build up a Chunk
.
Fields§
§row_id: RowId
Auto-generated TUID
, uniquely identifying this event and keeping track of the client’s
wall-clock.
-§timepoint: TimePoint
User-specified [TimePoint
] for this event.
+§timepoint: TimePoint
User-specified TimePoint
for this event.
§components: BTreeMap<ComponentName, Box<dyn ArrowArray>>
The component data.
Each array is a single component, i.e. not a list array.
Implementations§
source§impl PendingRow
source§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
+) -> Selfsource§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
That’s very wasteful, probably don’t do that outside of testing, or unless you have very
good reasons to.
See also Self::many_into_chunks
.
sourcepub fn many_into_chunks(
- entity_path: EntityPath,
+ entity_path: EntityPath,
chunk_max_rows_if_unsorted: u64,
rows: Vec<Self>
) -> impl Iterator<Item = ChunkResult<Chunk>>
This turns a batch of PendingRow
s into a Chunk
.
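Since the fields above are public, a PendingRow can be assembled directly and turned into a standalone chunk; as the docs note, doing so is wasteful outside of tests. `RowId::new()` taking no arguments is again an assumption.

```rust
use std::collections::BTreeMap;

use re_chunk::{ArrowArray, Chunk, ComponentName, EntityPath, PendingRow, RowId, TimePoint};

/// Turns one log call's worth of pre-serialized component data into its own chunk.
fn row_to_chunk(
    entity_path: EntityPath,
    timepoint: TimePoint,
    components: BTreeMap<ComponentName, Box<dyn ArrowArray>>,
) -> Chunk {
    let row = PendingRow {
        row_id: RowId::new(), // assumed zero-argument constructor
        timepoint,
        components,
    };
    row.into_chunk(entity_path).expect("malformed row data")
}
```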
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
index 07d0848b0fb12..3ea02901b1427 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
@@ -1,11 +1,11 @@
PendingTimeColumn in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingTimeColumn
source · struct PendingTimeColumn {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
is_sorted: bool,
- time_range: ResolvedTimeRange,
+ time_range: ResolvedTimeRange,
}
Expand description
Helper class used to buffer time data.
See PendingRow::many_into_chunks
for usage.
-Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
source§impl PendingTimeColumn
sourcefn new(timeline: Timeline) -> Self
Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
Auto Trait Implementations§
§impl Freeze for PendingTimeColumn
§impl RefUnwindSafe for PendingTimeColumn
§impl Send for PendingTimeColumn
§impl Sync for PendingTimeColumn
§impl Unpin for PendingTimeColumn
§impl UnwindSafe for PendingTimeColumn
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
index c8b1f7d923db4..3ec8f158f1882 100644
--- a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk::builder - Rust Struct re_chunk::builder::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
diff --git a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
index b0abcc5ca6455..80d70dc20884c 100644
--- a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
@@ -1,11 +1,11 @@
TimeColumnBuilder in re_chunk::builder - Rust Struct re_chunk::builder::TimeColumnBuilder
source · pub struct TimeColumnBuilder {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
}
Expand description
Helper to incrementally build a TimeColumn
.
Can be created using TimeColumn::builder
.
-Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
+Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
See also TimeColumn::builder
.
-sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
+sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
sourcepub fn build(self) -> TimeColumn
Builds and returns the final TimeColumn
.
Auto Trait Implementations§
§impl Freeze for TimeColumnBuilder
§impl RefUnwindSafe for TimeColumnBuilder
§impl Send for TimeColumnBuilder
§impl Sync for TimeColumnBuilder
§impl Unpin for TimeColumnBuilder
§impl UnwindSafe for TimeColumnBuilder
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
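Because component columns are sparse, the per-component index iterator above is the natural way to count how many rows actually carry data for a given component; a sketch using only signatures shown in this diff:

```rust
use re_chunk::{Chunk, ComponentName, Timeline};

/// How many rows of `chunk` have data for `component_name` on `timeline`?
/// Can be less than `chunk.num_rows()` since component columns are sparse.
fn rows_with_component(
    chunk: &Chunk,
    timeline: &Timeline,
    component_name: &ComponentName,
) -> usize {
    chunk.iter_component_indices(timeline, component_name).count()
}
```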
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
If you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
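A small sketch of the three name-based constructors above; the &str-to-TimelineName conversions and the unit handling of new_seconds are assumptions, only the signatures are taken from the docs:

```rust
use re_chunk::TimeColumn;

fn demo_time_columns() {
    // A frame counter: plain sequence numbers.
    let frames = TimeColumn::new_sequence("frame", [0i64, 1, 2, 3]);

    // A wall-clock timeline given in seconds (f64), per the signature above.
    let seconds = TimeColumn::new_seconds("time", [0.0f64, 0.1, 0.2, 0.3]);

    // The same timeline expressed directly in nanoseconds.
    let nanos = TimeColumn::new_nanos("time", [0i64, 100_000_000, 200_000_000, 300_000_000]);

    assert_eq!(frames.num_rows(), 4);
    assert!(!seconds.is_empty());
    assert_eq!(nanos.num_rows(), 4);
}
```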
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
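A hedged sketch that inspects a TimeColumn using only the accessors listed above (assuming ResolvedTimeRange and TimeInt implement Debug):

```rust
use re_chunk::TimeColumn;

/// Prints a short summary of a time column.
fn describe(col: &TimeColumn) {
    // Min/max actually present in `times`; not necessarily contiguous.
    let range = col.time_range();
    println!(
        "timeline {}: {} rows, range {:?}, raw times: {:?}",
        col.name(),
        col.num_rows(),
        range,
        col.times_raw(),
    );
    // `times()` yields `TimeInt`s and is double-ended.
    if let Some(last) = col.times().next_back() {
        println!("last time: {last:?}");
    }
}
```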
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within.
Returns the single static RowId
if the chunk is static.
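A sketch of how index/row_id are typically reached, assuming a Timeline::new_sequence constructor from re_log_types and Chunk::into_unit as documented further down this page:

```rust
use re_chunk::{Chunk, Timeline};

/// Inspects a single-row chunk through the `UnitChunkShared` helper.
fn single_row_index(chunk: Chunk) {
    let frame = Timeline::new_sequence("frame"); // assumed constructor
    if let Some(unit) = chunk.into_unit() {
        // The one (time, row-id) pair on the chosen timeline (or the static index).
        if let Some((time, row_id)) = unit.index(&frame) {
            println!("row {row_id:?} at frame {time:?}");
        }
        // Or just the row-id, regardless of timeline.
        let _maybe_row_id = unit.row_id();
    }
}
```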
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
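A hedged sketch of both iterators; Timeline::new_sequence and the &str-to-ComponentName conversion are assumptions, and the component name is purely illustrative:

```rust
use re_chunk::{Chunk, ComponentName, Timeline};

/// Walks the indices of a chunk: first every row, then only the rows that
/// actually contain data for one (sparse) component.
fn walk_indices(chunk: &Chunk) {
    let frame = Timeline::new_sequence("frame");
    let component = ComponentName::from("rerun.components.Position3D");

    for (time, row_id) in chunk.iter_indices(&frame) {
        println!("row {row_id:?} @ {time:?}");
    }

    // Sparse components may skip rows; this only yields indices where data exists.
    for (time, row_id) in chunk.iter_component_indices(&frame, &component) {
        println!("has Position3D: row {row_id:?} @ {time:?}");
    }
}
```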
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
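Putting emptied and with_id together, a small sketch (assuming a ChunkId::new constructor, which the ChunkId docs recommend as the default):

```rust
use re_chunk::{Chunk, ChunkId};

/// Creates a schema-only copy of a chunk: same columns, zero rows, fresh id.
/// Per the WARNING above, `emptied()` keeps the old `ChunkId`, so we swap it out.
fn schema_only_copy(chunk: &Chunk) -> Chunk {
    chunk.emptied().with_id(ChunkId::new())
}
```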
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
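A tiny sketch contrasting the two ways to obtain an id; ChunkId::new is not listed in the excerpt above, but it is the constructor these docs tell you to prefer:

```rust
use re_chunk::ChunkId;

fn chunk_ids() {
    // The safe default: a fresh, globally-unique id.
    let a = ChunkId::new();

    // `next()` is for tightly-controlled scenarios only; as warned above, it can
    // easily produce conflicts if two writers do this concurrently.
    let b = a.next();
    assert_ne!(a.as_u128(), b.as_u128());
}
```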
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording are considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
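A minimal sketch of building such a query; LatestAtQuery::new, Timeline::new_sequence, and TimeInt::new_temporal are assumptions (only the timeline/at fields are shown on this page):

```rust
use re_chunk::{LatestAtQuery, TimeInt, Timeline};

/// "What is the latest data at frame 42?"
fn latest_at_frame_42() -> LatestAtQuery {
    let frame = Timeline::new_sequence("frame"); // assumed constructor
    LatestAtQuery::new(frame, TimeInt::new_temporal(42)) // assumed constructors
}
```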
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
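A sketch using the constructor and builder-style toggles listed below; Timeline::new_sequence and ResolvedTimeRange::new are assumptions taken from re_log_types (re-exported via re_chunk::external):

```rust
use re_chunk::external::re_log_types::ResolvedTimeRange;
use re_chunk::{RangeQuery, Timeline};

/// Everything logged between frames 10 and 100 (inclusive), plus the latest
/// value before frame 10, per the semantics described above.
fn frames_10_to_100() -> RangeQuery {
    let frame = Timeline::new_sequence("frame"); // assumed constructor
    RangeQuery::new(frame, ResolvedTimeRange::new(10, 100)) // assumed range constructor
        // Builder-style toggles documented below:
        .keep_extra_timelines(true)
        .keep_extra_components(false)
}
```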
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core data structure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
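In practice such a chunk is usually assembled row by row with Chunk::builder, as described below. A minimal sketch that builds a tiny static chunk; Position2D, the ComponentBatch impl for Vec<_>, and an empty TimePoint standing in for "static" all come from re_types/re_log_types and are assumptions here, not part of this page:

```rust
use re_chunk::{Chunk, ChunkResult, EntityPath, RowId, TimePoint};

/// Builds a tiny static chunk (an empty `TimePoint` carries no timelines).
fn tiny_chunk() -> ChunkResult<Chunk> {
    use re_types::components::Position2D; // assumed component type

    let positions = vec![Position2D::new(0.0, 0.0), Position2D::new(1.0, 1.0)];

    Chunk::builder(EntityPath::from("points"))
        .with_component_batches(
            RowId::new(),
            TimePoint::default(), // assumed: empty timepoint == static data
            [&positions as &dyn re_types::ComponentBatch], // assumed blanket impl for Vec<_>
        )
        .build()
}
```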
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
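A short sketch combining add_timeline with sanity_check; everything used here is documented above, the only assumption being that the new column must match num_rows() (each time column must be the same length as row_ids):

```rust
use re_chunk::{Chunk, ChunkResult, TimeColumn};

/// Attaches a frame-number time column to an existing chunk and re-validates it.
fn attach_frame_numbers(chunk: &mut Chunk, frames: Vec<i64>) -> ChunkResult<()> {
    assert_eq!(frames.len(), chunk.num_rows());
    chunk.add_timeline(TimeColumn::new_sequence("frame", frames))?;
    chunk.sanity_check()
}
```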
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
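A quick inspection sketch using only the accessors above (assuming TimePoint implements Debug):

```rust
use re_chunk::Chunk;

/// A short structural summary of a chunk.
fn summarize(chunk: &Chunk) {
    println!(
        "{} rows x {} columns ({} timelines, {} components), static: {}",
        chunk.num_rows(),
        chunk.num_columns(),
        chunk.num_timelines(),
        chunk.num_components(),
        chunk.is_static(),
    );
    // Latest time reached on every timeline present in this chunk.
    println!("max timepoint: {:?}", chunk.timepoint_max());
}
```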
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
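A sketch contrasting the in-place and copying sort paths described above; the only assumption beyond the listed methods is ChunkId::new for re-identifying the copy:

```rust
use re_chunk::{Chunk, ChunkId, Timeline};

/// Two ways to get sorted data, per the methods above.
fn sort_examples(mut owned: Chunk, shared: &Chunk, timeline: &Timeline) -> Chunk {
    // In place: sorts the whole chunk if needed (chunk-level sortedness is by `RowId`).
    owned.sort_if_unsorted();

    // As a copy, for one specific timeline. `is_timeline_sorted` is the cheap O(1)
    // check backing the "no-op on the happy path" behaviour described above.
    let _already_sorted = shared.is_timeline_sorted(timeline);
    let sorted = shared.sorted_by_timeline_if_unsorted(timeline);

    // Per the WARNING above, the returned chunk keeps the old `ChunkId`: give it a new one.
    sorted.with_id(ChunkId::new())
}
```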
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
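A hedged end-to-end sketch of the batcher; ChunkBatcherConfig::default(), TimePoint::default(), and the ChunkBatcherResult alias are assumptions, while new, push_row, and flush_async are the methods documented above:

```rust
use std::collections::BTreeMap;

use re_chunk::{ChunkBatcher, ChunkBatcherConfig, EntityPath, PendingRow, RowId, TimePoint};

/// Feeds a couple of rows to the batcher and kicks off a flush.
fn feed_batcher() -> re_chunk::ChunkBatcherResult<()> {
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())?; // assumed Default impl

    for _ in 0..2 {
        let row = PendingRow {
            row_id: RowId::new(),
            timepoint: TimePoint::default(), // static, to keep the sketch minimal
            components: BTreeMap::default(), // real code would put actual arrays here
        };
        batcher.push_row(EntityPath::from("points"), row);
    }

    // Fire-and-forget flush; dropping `batcher` would also shut it down cleanly.
    batcher.flush_async();
    Ok(())
}
```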
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkBatcherInner
impl ChunkBatcherInner
fn push_chunk(&self, chunk: Chunk)
fn push_row(&self, entity_path: EntityPath, row: PendingRow)
fn flush_async(&self)
fn flush_blocking(&self)
fn send_cmd(&self, cmd: Command)
impl Freeze for ChunkBatcherInner
impl !RefUnwindSafe for ChunkBatcherInner
impl Send for ChunkBatcherInner
impl Sync for ChunkBatcherInner
impl Unpin for ChunkBatcherInner
impl !UnwindSafe for ChunkBatcherInner
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
index e25f035d07ab8..0e561416399d9 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
@@ -1,23 +1,23 @@
PendingRow in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingRow
source · pub struct PendingRow {
pub row_id: RowId,
- pub timepoint: TimePoint,
+ pub timepoint: TimePoint,
pub components: BTreeMap<ComponentName, Box<dyn ArrowArray>>,
}
Expand description
A single row’s worth of data (i.e. a single log call).
Send those to the batcher to build up a Chunk
.
Fields§
§row_id: RowId
Auto-generated TUID
, uniquely identifying this event and keeping track of the client’s
wall-clock.
-§timepoint: TimePoint
User-specified [TimePoint
] for this event.
+§timepoint: TimePoint
User-specified TimePoint
for this event.
§components: BTreeMap<ComponentName, Box<dyn ArrowArray>>
The component data.
Each array is a single component, i.e. not a list array.
Implementations§
source§impl PendingRow
source§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
+) -> Selfsource§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
That’s very wasteful, probably don’t do that outside of testing, or unless you have very
good reasons to.
See also Self::many_into_chunks
.
sourcepub fn many_into_chunks(
- entity_path: EntityPath,
+ entity_path: EntityPath,
chunk_max_rows_if_unsorted: u64,
rows: Vec<Self>
) -> impl Iterator<Item = ChunkResult<Chunk>>
This turns a batch of PendingRow
s into a Chunk
.
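A sketch of both conversion paths; the entity path and the 1024 unsorted-split threshold are illustrative values only:

```rust
use re_chunk::{Chunk, ChunkResult, EntityPath, PendingRow};

/// Converts buffered rows into chunks without going through the batcher.
/// Fine for tests; wasteful in production, as the docs above warn.
fn rows_to_chunks(rows: Vec<PendingRow>) -> Vec<ChunkResult<Chunk>> {
    let entity_path = EntityPath::from("points");

    // One chunk per row: simple but wasteful.
    // rows.into_iter().map(|row| row.into_chunk(entity_path.clone())).collect()

    // Better: let `many_into_chunks` group compatible rows, splitting unsorted
    // runs after at most 1024 rows.
    PendingRow::many_into_chunks(entity_path, 1024, rows).collect()
}
```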
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
index 07d0848b0fb12..3ea02901b1427 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
@@ -1,11 +1,11 @@
PendingTimeColumn in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingTimeColumn
source · struct PendingTimeColumn {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
is_sorted: bool,
- time_range: ResolvedTimeRange,
+ time_range: ResolvedTimeRange,
}
Expand description
Helper class used to buffer time data.
See PendingRow::many_into_chunks
for usage.
-Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
source§impl PendingTimeColumn
sourcefn new(timeline: Timeline) -> Self
Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
Auto Trait Implementations§
§impl Freeze for PendingTimeColumn
§impl RefUnwindSafe for PendingTimeColumn
§impl Send for PendingTimeColumn
§impl Sync for PendingTimeColumn
§impl Unpin for PendingTimeColumn
§impl UnwindSafe for PendingTimeColumn
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
index c8b1f7d923db4..3ec8f158f1882 100644
--- a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk::builder - Rust Struct re_chunk::builder::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
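To tie the builder methods above together, a minimal sketch of the row-oriented flow. Only `Chunk::builder`, `with_row`, `RowId::new`, and `build` come from this page; the `ComponentName::from` conversion and the caller-supplied array factory are assumptions.

```rust
use re_chunk::{ArrowArray, Chunk, ChunkError, ComponentName, EntityPath, RowId, TimePoint};

// Sketch: build a two-row chunk for one entity. `positions` is a hypothetical
// caller-supplied factory producing one arrow array per row; it stands in for
// whatever component serialization you already have.
fn build_chunk(
    entity_path: EntityPath,
    timepoints: [TimePoint; 2],
    mut positions: impl FnMut() -> Box<dyn ArrowArray>,
) -> Result<Chunk, ChunkError> {
    let [t0, t1] = timepoints;
    Chunk::builder(entity_path)
        .with_row(
            RowId::new(),
            t0,
            [(ComponentName::from("example.Position"), positions())],
        )
        .with_row(
            RowId::new(),
            t1,
            [(ComponentName::from("example.Position"), positions())],
        )
        .build()
}
```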
diff --git a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
index b0abcc5ca6455..80d70dc20884c 100644
--- a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
@@ -1,11 +1,11 @@
TimeColumnBuilder in re_chunk::builder - Rust Struct re_chunk::builder::TimeColumnBuilder
source · pub struct TimeColumnBuilder {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
}
Expand description
Helper to incrementally build a TimeColumn
.
Can be created using TimeColumn::builder
.
-Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
+Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
See also TimeColumn::builder
.
-sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
+sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
sourcepub fn build(self) -> TimeColumn
Builds and returns the final TimeColumn
.
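A sketch of the imperative builder flow, using only the `with_row`/`build` signatures above; the `TimeColumn::builder` entry point is documented further down, on the `TimeColumn` page.

```rust
use re_chunk::chunk::TimeColumn; // public module path as used by this documentation
use re_chunk::{TimeInt, Timeline};

// Sketch: `with_row` takes `&mut self`, so the builder is driven imperatively
// before the final, consuming `build()`.
fn build_time_column(timeline: Timeline, times: impl IntoIterator<Item = TimeInt>) -> TimeColumn {
    let mut builder = TimeColumn::builder(timeline);
    for t in times {
        builder.with_row(t);
    }
    builder.build()
}
```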
Auto Trait Implementations§
§impl Freeze for TimeColumnBuilder
§impl RefUnwindSafe for TimeColumnBuilder
§impl Send for TimeColumnBuilder
§impl Sync for TimeColumnBuilder
§impl Unpin for TimeColumnBuilder
§impl UnwindSafe for TimeColumnBuilder
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
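A sketch that exercises the iterator methods described above; it assumes `Debug` formatting on the yielded types and nothing else beyond the documented signatures.

```rust
use re_chunk::{Chunk, ComponentName, Timeline};

// Sketch: walk the (TimeInt, RowId) indices of a chunk — first every row on a
// timeline, then only the rows that actually carry the given component.
fn dump_indices(chunk: &Chunk, timeline: &Timeline, component_name: &ComponentName) {
    for (time, row_id) in chunk.iter_indices(timeline) {
        println!("{row_id:?} @ {time:?}");
    }
    for (time, row_id) in chunk.iter_component_indices(timeline, component_name) {
        println!("{component_name:?} present in {row_id:?} @ {time:?}");
    }
}
```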
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
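A sketch combining the sorting/slicing helpers with the `with_id` reminder above: derived chunks keep their old `ChunkId`, so give the result a fresh one.

```rust
use re_chunk::{Chunk, ChunkId, Timeline};

// Sketch: a time-sorted, single-timeline view of `chunk`, re-identified so it
// no longer shares the original's `ChunkId`.
fn sorted_single_timeline_view(chunk: &Chunk, timeline: Timeline) -> Chunk {
    chunk
        .sorted_by_timeline_if_unsorted(&timeline)
        .timeline_sliced(timeline)
        .with_id(ChunkId::new())
}
```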
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
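A sketch of the transport round-trip using only the methods listed above.

```rust
use re_chunk::{Chunk, ChunkError};

// Sketch: sort (recommended above), convert to the transport representation,
// then rebuild an in-memory chunk from it.
fn round_trip(mut chunk: Chunk) -> Result<Chunk, ChunkError> {
    chunk.sort_if_unsorted(); // "It is probably a good idea to sort the chunk first."
    let transport = chunk.to_transport()?;
    Chunk::from_transport(&transport)
}
```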
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
If you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
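A sketch of the three convenience constructors above; the timeline names are placeholders and the `&str` → `TimelineName` conversion is assumed.

```rust
use re_chunk::chunk::TimeColumn; // public module path as used by this documentation

// Sketch: build time columns directly from raw values using the constructors
// listed above. Names like "frame" are placeholders.
fn example_columns() -> (TimeColumn, TimeColumn, TimeColumn) {
    let frames = TimeColumn::new_sequence("frame", [0i64, 1, 2, 3]);
    let seconds = TimeColumn::new_seconds("sim_time", [0.0f64, 0.1, 0.2]);
    let nanos = TimeColumn::new_nanos("log_time", [1_000_000i64, 2_000_000]);
    (frames, seconds, nanos)
}
```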
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
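A sketch of the single-row accessors above; `Debug` formatting and the crate-root re-export are assumed, while the method calls are exactly the ones documented here.

```rust
use re_chunk::{Timeline, UnitChunkShared};

// Sketch: inspect the unit chunk's single row. `index` already falls back to
// the static index when the chunk is static, as noted above.
fn describe_unit(unit: &UnitChunkShared, timeline: &Timeline) {
    if let Some((time, row_id)) = unit.index(timeline) {
        println!("single row {row_id:?} indexed at {time:?}");
    }
    if let Some(row_id) = unit.row_id() {
        println!("row id: {row_id:?}");
    }
}
```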
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
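A sketch of the id helpers described here; `ChunkId::new` is the constructor the surrounding docs recommend, and every other call is a documented method.

```rust
use re_chunk::ChunkId;

// Sketch: allocate a fresh id and inspect it. Deriving ids manually via
// `next()` is discouraged above because it can easily lead to conflicts.
fn id_demo() {
    let id = ChunkId::new();
    let _next = id.next();
    println!("created ~{} ns after the unix epoch", id.nanoseconds_since_epoch());
}
```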
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
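For orientation only, a sketch of how such a query is typically built; the `LatestAtQuery::new` constructor and the `Timeline`/`TimeInt` helpers below are assumptions, not taken from this page.

```rust
use re_chunk::{LatestAtQuery, TimeInt, Timeline};

// Sketch (assumed API): "what is the latest data at sequence index 42 on the
// hypothetical `frame` timeline?" — only the query's meaning comes from this
// page; `LatestAtQuery::new`, `Timeline::new_sequence`, and
// `TimeInt::new_temporal` are assumptions.
fn example_query() -> LatestAtQuery {
    let timeline = Timeline::new_sequence("frame");
    LatestAtQuery::new(timeline, TimeInt::new_temporal(42))
}
```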
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
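A sketch composing the documented constructors and option toggles above; only `Timeline::new_sequence` is an assumed helper.

```rust
use re_chunk::{RangeQuery, Timeline};

// Sketch: query everything on a (hypothetical) "frame" timeline, keeping extra
// timelines but not extra components, using the builder-style methods above.
fn example_range_query() -> RangeQuery {
    let timeline = Timeline::new_sequence("frame"); // assumed constructor
    RangeQuery::everything(timeline)
        .keep_extra_timelines(true)
        .keep_extra_components(false)
}
```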
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
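To show how the row-oriented constructors above fit together, here is a hedged sketch that builds a small static `Chunk` via `Chunk::builder`. The `Points3D` archetype comes from the separate `re_types` crate, and the exact import paths are assumptions.

```rust
// Sketch: assumes `re_types` is available and that these import paths exist.
use re_chunk::{Chunk, ChunkResult, RowId};
use re_log_types::{EntityPath, TimePoint};
use re_types::archetypes::Points3D;

fn build_static_points() -> ChunkResult<Chunk> {
    let chunk = Chunk::builder(EntityPath::from("points"))
        // An empty `TimePoint` means no time columns, i.e. a static chunk.
        .with_archetype(
            RowId::new(),
            TimePoint::default(),
            &Points3D::new([[0.0_f32, 0.0, 0.0], [1.0, 1.0, 1.0]]),
        )
        .build()?; // `build` returns a ChunkResult, hence the `?`

    debug_assert!(chunk.is_static());
    debug_assert_eq!(chunk.num_rows(), 1);
    Ok(chunk)
}
```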
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
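A hedged sketch of driving the batcher through the calls listed above (`push_chunk` and `flush_async`; `push_row` is analogous). `ChunkBatcherConfig::default()` and the module paths are assumptions.

```rust
// Sketch: `ChunkBatcherConfig::default()` and the paths below are assumed.
use re_chunk::batcher::{ChunkBatcher, ChunkBatcherConfig, ChunkBatcherResult};
use re_chunk::Chunk;

fn feed_batcher(chunk: Chunk) -> ChunkBatcherResult<()> {
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())?;

    // Pre-built chunks bypass row batching entirely.
    batcher.push_chunk(chunk);

    // Fire-and-forget flush; dropping the batcher triggers a clean shutdown.
    batcher.flush_async();
    Ok(())
}
```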
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
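As a sketch of the temporal path, the loop below adds one `Scalar` row per frame. The array-of-`(Timeline, i64)` conversion into `TimePoint`, `ChunkId::new()`, and the `re_types` `Scalar` archetype are assumptions not spelled out on this page.

```rust
// Sketch with assumptions: `ChunkId::new()`, the `[(Timeline, i64); 1]` ->
// `TimePoint` conversion, and `re_types::archetypes::Scalar` are not
// documented on this page.
use re_chunk::{Chunk, ChunkId, ChunkResult, RowId};
use re_log_types::{EntityPath, Timeline};
use re_types::archetypes::Scalar;

fn build_scalar_series() -> ChunkResult<Chunk> {
    let frame = Timeline::new_sequence("frame");
    let mut builder = Chunk::builder_with_id(ChunkId::new(), EntityPath::from("scalars"));

    for (t, value) in [(1_i64, 0.1), (2, 0.2), (3, 0.3)] {
        // One row per frame; each row carries a single Scalar component.
        builder = builder.with_archetype(RowId::new(), [(frame, t)], &Scalar::new(value));
    }

    builder.build()
}
```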
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
impl<T> BorrowMut<T> for Twhere
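Since most slicing and transformation helpers keep the old id, here is a tiny hedged sketch of re-stamping a chunk; `ChunkId::new()` as the usual TUID constructor is an assumption.

```rust
// Minimal sketch: `ChunkId::new()` is assumed to be the usual TUID constructor.
use re_chunk::{Chunk, ChunkId};

fn restamp(chunk: Chunk) -> Chunk {
    // Derived chunks (slices, sorts, etc.) keep their original id, so give the
    // copy a fresh one before treating it as new data.
    chunk.with_id(ChunkId::new())
}
```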
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
index e25f035d07ab8..0e561416399d9 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingRow.html
@@ -1,23 +1,23 @@
PendingRow in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingRow
source · pub struct PendingRow {
pub row_id: RowId,
- pub timepoint: TimePoint,
+ pub timepoint: TimePoint,
pub components: BTreeMap<ComponentName, Box<dyn ArrowArray>>,
}
Expand description
A single row’s worth of data (i.e. a single log call).
Send those to the batcher to build up a Chunk
.
Fields§
§row_id: RowId
Auto-generated TUID
, uniquely identifying this event and keeping track of the client’s
wall-clock.
-§timepoint: TimePoint
User-specified [TimePoint
] for this event.
+§timepoint: TimePoint
User-specified TimePoint
for this event.
§components: BTreeMap<ComponentName, Box<dyn ArrowArray>>
The component data.
Each array is a single component, i.e. not a list array.
Implementations§
source§impl PendingRow
source§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
+) -> Selfsource§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
That’s very wasteful, probably don’t do that outside of testing, or unless you have very
good reasons to.
See also Self::many_into_chunks
.
sourcepub fn many_into_chunks(
- entity_path: EntityPath,
+ entity_path: EntityPath,
chunk_max_rows_if_unsorted: u64,
rows: Vec<Self>
) -> impl Iterator<Item = ChunkResult<Chunk>>
This turns a batch of PendingRow
s into a Chunk
.
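A hedged sketch of hand-assembling a `PendingRow` and turning it into its own `Chunk`, as described above. The component name and the raw arrow2 array stand in for real SDK-serialized data and are purely illustrative.

```rust
// Illustrative only: the component name and raw arrow2 array are assumptions;
// real rows are normally produced by the SDK's serialization path.
use std::collections::BTreeMap;

use re_chunk::batcher::PendingRow;
use re_chunk::external::arrow2; // re-exported by re_chunk
use re_chunk::{Chunk, ChunkResult, RowId};
use re_log_types::{EntityPath, TimePoint};

fn row_to_chunk() -> ChunkResult<Chunk> {
    // A single-component row: one f64 value, not wrapped in a list array.
    let scalar = arrow2::array::Float64Array::from_vec(vec![42.0]).boxed();

    let row = PendingRow {
        row_id: RowId::new(),
        timepoint: TimePoint::default(), // empty timepoint => static
        components: BTreeMap::from([("user.value".into(), scalar)]),
    };

    // Wasteful (one chunk per row), but fine for tests, as the docs note.
    row.into_chunk(EntityPath::from("example"))
}
```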
diff --git a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
index 07d0848b0fb12..3ea02901b1427 100644
--- a/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
+++ b/docs/rust/head/re_chunk/batcher/struct.PendingTimeColumn.html
@@ -1,11 +1,11 @@
PendingTimeColumn in re_chunk::batcher - Rust Struct re_chunk::batcher::PendingTimeColumn
source · struct PendingTimeColumn {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
is_sorted: bool,
- time_range: ResolvedTimeRange,
+ time_range: ResolvedTimeRange,
}
Expand description
Helper class used to buffer time data.
See PendingRow::many_into_chunks
for usage.
-Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
source§impl PendingTimeColumn
sourcefn new(timeline: Timeline) -> Self
Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
Auto Trait Implementations§
§impl Freeze for PendingTimeColumn
§impl RefUnwindSafe for PendingTimeColumn
§impl Send for PendingTimeColumn
§impl Sync for PendingTimeColumn
§impl Unpin for PendingTimeColumn
§impl UnwindSafe for PendingTimeColumn
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
index c8b1f7d923db4..3ec8f158f1882 100644
--- a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk::builder - Rust Struct re_chunk::builder::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
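Complementing the dense builder calls, below is a hedged sketch of `with_sparse_row`, where one component is present and another is explicitly absent for the row. The component names and the raw arrow2 array are illustrative assumptions.

```rust
// Sketch: component names and the raw arrow2 array are illustrative; the
// arrow2 re-export from `re_chunk::external` keeps the `dyn Array` types in sync.
use re_chunk::external::arrow2;
use re_chunk::{Chunk, ChunkResult, RowId};
use re_log_types::{EntityPath, TimePoint};

fn sparse_row_chunk() -> ChunkResult<Chunk> {
    let positions = arrow2::array::Float32Array::from_vec(vec![0.0, 1.0, 2.0]).boxed();

    Chunk::builder(EntityPath::from("sparse"))
        .with_sparse_row(
            RowId::new(),
            TimePoint::default(),
            [
                ("user.position".into(), Some(positions)),
                ("user.color".into(), None), // no color logged for this row
            ],
        )
        .build()
}
```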
diff --git a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
index b0abcc5ca6455..80d70dc20884c 100644
--- a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
@@ -1,11 +1,11 @@
TimeColumnBuilder in re_chunk::builder - Rust Struct re_chunk::builder::TimeColumnBuilder
source · pub struct TimeColumnBuilder {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
}
Expand description
Helper to incrementally build a TimeColumn
.
Can be created using TimeColumn::builder
.
-Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
+Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
See also TimeColumn::builder
.
-sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
+sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
sourcepub fn build(self) -> TimeColumn
Builds and returns the final TimeColumn
.
Auto Trait Implementations§
§impl Freeze for TimeColumnBuilder
§impl RefUnwindSafe for TimeColumnBuilder
§impl Send for TimeColumnBuilder
§impl Sync for TimeColumnBuilder
§impl Unpin for TimeColumnBuilder
§impl UnwindSafe for TimeColumnBuilder
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
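A hedged sketch of the row-oriented path shown above: `with_row` mutates the builder in place, while `build` consumes it. `TimeInt::new_temporal` and the import paths are assumptions.

```rust
// Sketch: import paths and `TimeInt::new_temporal` are assumed.
use re_chunk::TimeColumn;
use re_log_types::{TimeInt, Timeline};

fn build_time_column() -> TimeColumn {
    let mut builder = TimeColumn::builder(Timeline::new_sequence("frame"));
    for t in 0..3 {
        // `with_row` takes `&mut self`, so it can be called in a loop.
        builder.with_row(TimeInt::new_temporal(t));
    }
    builder.build()
}
```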
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
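Tying the sorting and slicing methods above together, here is a hedged sketch that keeps a single timeline, re-sorts on it, and re-stamps the result; `ChunkId::new()` is again an assumption.

```rust
// Sketch: only methods listed above are used; `ChunkId::new()` is assumed.
use re_chunk::{Chunk, ChunkId};
use re_log_types::Timeline;

fn sorted_frame_view(mut chunk: Chunk) -> Chunk {
    // In-place sort by `RowId`, if the chunk isn't already sorted.
    chunk.sort_if_unsorted();

    let frame = Timeline::new_sequence("frame");

    // Keep only the `frame` timeline, sort by (frame, RowId), then give the
    // derived chunk its own id, since slicing/sorting keep the old one.
    chunk
        .timeline_sliced(frame)
        .sorted_by_timeline_if_unsorted(&frame)
        .with_id(ChunkId::new())
}
```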
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
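A small hedged sketch of the convenience constructors and accessors listed above; the `&str`-to-`TimelineName` conversion and the crate-root re-export of `TimeColumn` are assumptions.

```rust
// Sketch: assumes `&str` converts into `TimelineName` and that `TimeColumn`
// is re-exported at the crate root.
use re_chunk::TimeColumn;

fn frame_column() -> TimeColumn {
    // Sequence (frame-number) semantics, three rows.
    let col = TimeColumn::new_sequence("frame", [10, 20, 30]);
    debug_assert_eq!(col.times_raw(), [10_i64, 20, 30].as_slice());
    debug_assert_eq!(col.num_rows(), 3);
    col
}
```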
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
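A minimal sketch of why the owned variant is useful; the wrapper below is hypothetical:
use std::sync::Arc;
use re_chunk::{Chunk, RowId, TimeInt, Timeline};

/// The returned iterator borrows nothing from this scope, so it can be
/// returned (or sent to another thread) even though `chunk` is consumed here.
fn owned_indices(
    chunk: Arc<Chunk>,
    timeline: &Timeline,
) -> impl Iterator<Item = (TimeInt, RowId)> {
    chunk.iter_indices_owned(timeline)
}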
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
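A minimal sketch that follows the warning above; it assumes ChunkId::new takes no arguments:
use re_chunk::{Chunk, ChunkId, Timeline};

/// Returns a time-sorted copy, giving it a fresh id as the warning above asks.
fn sorted_copy(chunk: &Chunk, timeline: &Timeline) -> Chunk {
    chunk
        .sorted_by_timeline_if_unsorted(timeline)
        .with_id(ChunkId::new())
}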
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
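A minimal sketch, assuming ChunkId::new takes no arguments; short_string comes from the Deref<Target = Tuid> impl listed below:
use re_chunk::ChunkId;

/// Mints a fresh, globally unique id — the common case, as advised above.
fn fresh_chunk_id() -> ChunkId {
    let id = ChunkId::new();
    // `short_string` is available via `Deref<Target = Tuid>`.
    println!("minted chunk id {}", id.short_string());
    id
}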
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
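A minimal sketch, assuming RowId::new takes no arguments:
use re_chunk::RowId;

/// Reserves `n` consecutive ids off one fresh `RowId`;
/// `incremented_by(i)` is equivalent to calling `next` `i` times.
fn consecutive_row_ids(n: u64) -> Vec<RowId> {
    let first = RowId::new();
    (0..n).map(|i| first.incremented_by(i)).collect()
}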
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing counter back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
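The constructor is not shown in this excerpt; the sketch below assumes a LatestAtQuery::new(timeline, at) constructor plus the re_log_types helpers Timeline::new_sequence and TimeInt::new_temporal:
use re_chunk::{LatestAtQuery, TimeInt, Timeline};

/// "What is the latest data as of frame `frame`?" — constructor assumed, see above.
fn latest_at_frame(frame: i64) -> LatestAtQuery {
    LatestAtQuery::new(Timeline::new_sequence("frame"), TimeInt::new_temporal(frame))
}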
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
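A minimal sketch of composing a query; ResolvedTimeRange::new, Timeline::new_sequence and TimeInt::new_temporal are assumed to come from re_log_types:
use re_chunk::{RangeQuery, TimeInt, Timeline};
use re_log_types::ResolvedTimeRange;

/// Everything on the "frame" timeline between frames 10 and 20 (inclusive),
/// keeping any extra timeline columns that happen to be stored alongside.
fn frames_10_to_20() -> RangeQuery {
    let range = ResolvedTimeRange::new(TimeInt::new_temporal(10), TimeInt::new_temporal(20));
    RangeQuery::new(Timeline::new_sequence("frame"), range).keep_extra_timelines(true)
}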
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
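A minimal sketch; the helper name is hypothetical:
use re_chunk::{Chunk, ComponentName};

/// Treats "component column not present at all" (`None`) as zero events.
fn event_count(chunk: &Chunk, component_name: ComponentName) -> u64 {
    chunk.num_events_for_component(component_name).unwrap_or(0)
}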
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
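A minimal sketch of the recommended order of operations; it assumes ChunkResult and TransportChunk are re-exported at the crate root:
use re_chunk::{Chunk, ChunkResult, TransportChunk};

/// Sorts in place first (a no-op when already sorted), then converts.
fn prepare_for_transport(mut chunk: Chunk) -> ChunkResult<TransportChunk> {
    chunk.sort_if_unsorted();
    chunk.to_transport()
}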
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
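A minimal sketch of the intended flow; it assumes ChunkBatcherConfig implements Default and that ChunkBatcherConfig and PendingRow are re-exported at the crate root:
use re_chunk::{ChunkBatcher, ChunkBatcherConfig, EntityPath, PendingRow};

/// Pushes rows and asks for a flush without blocking; dropping the batcher
/// afterwards triggers the clean shutdown described above.
fn feed_batcher(rows: Vec<(EntityPath, PendingRow)>) {
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())
        .expect("failed to spawn the batching thread");
    for (entity_path, row) in rows {
        // The row's size is computed on the batching thread, not here.
        batcher.push_row(entity_path, row);
    }
    batcher.flush_async();
}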
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
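A minimal sketch of the row-oriented path; re_types::archetypes::Points2D is not part of this crate and stands in for anything implementing AsComponents, and an empty TimePoint is assumed to mark the row as static:
use re_chunk::{Chunk, ChunkResult, EntityPath, RowId, TimePoint};

/// Builds a single-row chunk from one archetype instance.
fn one_row_chunk(
    entity_path: EntityPath,
    points: &re_types::archetypes::Points2D, // hypothetical archetype
) -> ChunkResult<Chunk> {
    Chunk::builder(entity_path)
        // An empty TimePoint means the row is not tied to any timeline (static data).
        .with_archetype(RowId::new(), TimePoint::default(), points)
        .build()
}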
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
Struct re_chunk::batcher::PendingRow
source · pub struct PendingRow {
pub row_id: RowId,
- pub timepoint: TimePoint,
+ pub timepoint: TimePoint,
pub components: BTreeMap<ComponentName, Box<dyn ArrowArray>>,
}
Expand description
A single row’s worth of data (i.e. a single log call).
Send those to the batcher to build up a Chunk
.
Fields§
§row_id: RowId
Auto-generated TUID
, uniquely identifying this event and keeping track of the client’s
wall-clock.
§timepoint: TimePoint
User-specified TimePoint
for this event.
components: BTreeMap<ComponentName, Box<dyn ArrowArray>>
The component data.
Each array is a single component, i.e. not a list array.
Implementations§
source§impl PendingRow
sourcepub fn into_chunk(self, entity_path: EntityPath) -> ChunkResult<Chunk>
Turn a single row into a Chunk
of its own.
That’s very wasteful, probably don’t do that outside of testing, or unless you have very good reasons to.
See also Self::many_into_chunks
.
sourcepub fn many_into_chunks(
- entity_path: EntityPath,
+ entity_path: EntityPath,
chunk_max_rows_if_unsorted: u64,
rows: Vec<Self>
) -> impl Iterator<Item = ChunkResult<Chunk>>
This turns a batch of PendingRow
s into a Chunk
.
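A minimal sketch of batching rows without the ChunkBatcher; the 4096 cap is an arbitrary example value and ChunkResult is assumed to be re-exported at the crate root:
use re_chunk::{Chunk, ChunkResult, EntityPath, PendingRow};

/// Splits `rows` into as many chunks as needed; unsorted runs are capped at
/// 4096 rows per chunk (an arbitrary example threshold).
fn rows_to_chunks(entity_path: EntityPath, rows: Vec<PendingRow>) -> ChunkResult<Vec<Chunk>> {
    PendingRow::many_into_chunks(entity_path, 4096, rows).collect()
}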
Struct re_chunk::batcher::PendingTimeColumn
source · struct PendingTimeColumn {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
is_sorted: bool,
- time_range: ResolvedTimeRange,
+ time_range: ResolvedTimeRange,
}
Expand description
Helper class used to buffer time data.
See PendingRow::many_into_chunks
for usage.
Fields§
§timeline: Timeline
§times: Vec<i64>
§is_sorted: bool
§time_range: ResolvedTimeRange
Implementations§
source§impl PendingTimeColumn
fn new(timeline: Timeline) -> Self
Auto Trait Implementations§
impl Freeze for PendingTimeColumn
impl RefUnwindSafe for PendingTimeColumn
impl Send for PendingTimeColumn
impl Sync for PendingTimeColumn
impl Unpin for PendingTimeColumn
impl UnwindSafe for PendingTimeColumn
Blanket Implementations§
source§impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
index c8b1f7d923db4..3ec8f158f1882 100644
--- a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk::builder - Rust Struct re_chunk::builder::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
diff --git a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
index b0abcc5ca6455..80d70dc20884c 100644
--- a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
@@ -1,11 +1,11 @@
TimeColumnBuilder in re_chunk::builder - Rust Struct re_chunk::builder::TimeColumnBuilder
source · pub struct TimeColumnBuilder {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
}
Expand description
Helper to incrementally build a TimeColumn
.
Can be created using TimeColumn::builder
.
-Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
+Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
See also TimeColumn::builder
.
-sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
+sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
sourcepub fn build(self) -> TimeColumn
Builds and returns the final TimeColumn
.
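A minimal sketch of building a column by hand; Timeline::new_sequence and TimeInt::new_temporal come from re_log_types and are assumptions here:
use re_chunk::{TimeColumn, TimeColumnBuilder, TimeInt, Timeline};

/// Collects raw sequence numbers into a TimeColumn on a "frame" timeline.
fn frame_column(frames: &[i64]) -> TimeColumn {
    let mut builder = TimeColumnBuilder::new(Timeline::new_sequence("frame"));
    for &frame in frames {
        builder.with_row(TimeInt::new_temporal(frame));
    }
    builder.build()
}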
Auto Trait Implementations§
§impl Freeze for TimeColumnBuilder
§impl RefUnwindSafe for TimeColumnBuilder
§impl Send for TimeColumnBuilder
§impl Sync for TimeColumnBuilder
§impl Unpin for TimeColumnBuilder
§impl UnwindSafe for TimeColumnBuilder
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
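A sketch of the size accessors above on the degenerate case, `Chunk::empty`; the entity path string is illustrative:

```rust
use re_chunk::{Chunk, ChunkId, EntityPath};

fn main() {
    let chunk = Chunk::empty(ChunkId::new(), EntityPath::from("points"));
    assert!(chunk.is_empty());
    assert_eq!(chunk.num_rows(), 0);
    assert_eq!(chunk.num_timelines(), 0);
    assert_eq!(chunk.num_components(), 0);
    // An empty chunk has no time columns, so it reports as static.
    assert!(chunk.is_static());
}
```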
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
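A sketch combining `is_static`, `entity_path` and `timepoint_max` into a one-line summary, assuming the usual `Display`/`Debug` impls on `EntityPath` and `TimePoint`:

```rust
use re_chunk::Chunk;

// Logs a chunk's temporal extent; static chunks have no per-timeline maximum.
fn log_extent(chunk: &Chunk) {
    if chunk.is_static() {
        println!("{}: static, {} rows", chunk.entity_path(), chunk.num_rows());
    } else {
        println!(
            "{}: {} rows, up to {:?}",
            chunk.entity_path(),
            chunk.num_rows(),
            chunk.timepoint_max(),
        );
    }
}
```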
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
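A sketch of the component-filtered index iterator described above; `timeline` and `component_name` are caller-provided:

```rust
use re_chunk::{Chunk, ComponentName, Timeline};

// How many rows actually carry data for `component_name` on `timeline`?
fn component_event_rows(
    chunk: &Chunk,
    timeline: &Timeline,
    component_name: &ComponentName,
) -> usize {
    chunk.iter_component_indices(timeline, component_name).count()
}
```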
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
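A sketch of why the owned variant matters: because the iterator holds an `Arc<Chunk>`, it can be returned from a function without borrowing issues:

```rust
use std::sync::Arc;

use re_chunk::{Chunk, RowId, TimeInt, Timeline};

// The returned iterator keeps the chunk alive on its own.
fn owned_indices(chunk: Arc<Chunk>, timeline: &Timeline) -> impl Iterator<Item = (TimeInt, RowId)> {
    chunk.iter_indices_owned(timeline)
}
```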
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
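A sketch around the cached sortedness check and the sorting helper above: only pay for the copy when the timeline is actually unsorted:

```rust
use re_chunk::{Chunk, Timeline};

// Returns `None` when `chunk` is already sorted on `timeline` (O(1), cached),
// otherwise a sorted copy. `sorted_by_timeline_if_unsorted` performs the same
// check internally; this wrapper just makes the "no work needed" case explicit.
fn maybe_resort(chunk: &Chunk, timeline: &Timeline) -> Option<Chunk> {
    if chunk.is_timeline_sorted(timeline) {
        None
    } else {
        Some(chunk.sorted_by_timeline_if_unsorted(timeline))
    }
}
```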
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
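A sketch combining the slicing helpers above with `with_id`, since every slice keeps the old `ChunkId` (per the warnings):

```rust
use re_chunk::{Chunk, ChunkId, Timeline};

// Keep a single timeline column and give the sliced chunk a fresh identity.
fn sliced_with_new_id(chunk: &Chunk, timeline: Timeline) -> Chunk {
    chunk.timeline_sliced(timeline).with_id(ChunkId::new())
}
```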
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
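A sketch of the latest-at style deduplication above, sorting on the index first so the remaining index values are unique:

```rust
use re_chunk::{Chunk, Timeline};

// One row per index value, keeping the latest row of each run of duplicates.
fn latest_per_index_value(chunk: &Chunk, index: &Timeline) -> Chunk {
    chunk
        .sorted_by_timeline_if_unsorted(index)
        .deduped_latest_on_index(index)
}
```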
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
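A sketch of a transport round-trip using the two conversions above, assuming `ChunkError` is re-exported at the crate root as the module index suggests:

```rust
use re_chunk::Chunk;

// Sort first (recommended above), then convert to the transport representation and back.
fn transport_roundtrip(chunk: &mut Chunk) -> Result<Chunk, re_chunk::ChunkError> {
    chunk.sort_if_unsorted();
    let transport = chunk.to_transport()?;
    Chunk::from_transport(&transport)
}
```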
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
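A sketch of the three constructors above, assuming `TimeColumn` is re-exported at the crate root and that timeline names convert from string slices; the names and values are illustrative:

```rust
use re_chunk::TimeColumn;

fn main() {
    let frames = TimeColumn::new_sequence("frame", [0i64, 1, 2, 3]);
    let wall_clock = TimeColumn::new_seconds("log_time", [0.0f64, 0.1, 0.2, 0.3]);
    let capture = TimeColumn::new_nanos("capture", [1_000i64, 2_000, 3_000, 4_000]);

    assert_eq!(frames.num_rows(), 4);
    assert!(!wall_clock.is_empty());
    assert_eq!(capture.num_rows(), frames.num_rows());
}
```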
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
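A sketch of the basic accessors above on an existing column; `name` returns the timeline name and `times_raw` the dense `i64` backing values:

```rust
use re_chunk::TimeColumn;

// Print a short summary of a time column.
fn describe(column: &TimeColumn) {
    println!(
        "timeline {}: {} rows, ascending: {}",
        column.name(),
        column.num_rows(),
        column.times_raw().windows(2).all(|w| w[0] <= w[1]),
    );
}
```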
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
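A sketch using the two unit-chunk accessors above; `timeline` is caller-provided:

```rust
use re_chunk::{RowId, TimeInt, Timeline, UnitChunkShared};

// The unit chunk's single index on `timeline`, plus its single `RowId`.
fn unit_summary(
    unit: &UnitChunkShared,
    timeline: &Timeline,
) -> (Option<(TimeInt, RowId)>, Option<RowId>) {
    (unit.index(timeline), unit.row_id())
}
```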
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
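A sketch of minting and inspecting a `ChunkId`, using `ChunkId::new` as recommended above:

```rust
use re_chunk::ChunkId;

fn main() {
    let id = ChunkId::new();
    // The upper bits encode creation time (approximate wall clock).
    println!(
        "chunk id {} created ~{} ns after the unix epoch",
        id.as_u128(),
        id.nanoseconds_since_epoch(),
    );
    // `next` yields the next logical id; prefer `ChunkId::new` unless consecutive
    // ids are really what you want, per the warning above.
    let _next = id.next();
}
```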
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
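A sketch of `RowId::incremented_by` for pre-allocating a consecutive run of row ids for one batch, with `RowId::new` as the recommended entry point:

```rust
use re_chunk::RowId;

// `incremented_by(0)` is the id itself; each subsequent id is the next logical one.
fn row_ids_for_batch(num_rows: u64) -> Vec<RowId> {
    let first = RowId::new();
    (0..num_rows).map(|i| first.incremented_by(i)).collect()
}
```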
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
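A sketch of building a query with the constructors above, assuming `RangeQuery` is re-exported at the crate root as the module index suggests, and that `Timeline::new_sequence` exists in `re_log_types` (an assumption, not shown in this diff); the timeline name is illustrative:

```rust
use re_chunk::{RangeQuery, Timeline};

// An unbounded range query over a "frame" sequence timeline that also keeps
// extra timelines and components around.
fn everything_on_frame() -> RangeQuery {
    let timeline = Timeline::new_sequence("frame");
    RangeQuery::everything(timeline)
        .keep_extra_timelines(true)
        .keep_extra_components(true)
}
```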
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
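A sketch of using `sanity_check` defensively, e.g. in tests:

```rust
use re_chunk::Chunk;

// Panics with the underlying error if any invariant is violated;
// the costly checks only run in debug builds, as noted above.
fn assert_well_formed(chunk: &Chunk) {
    chunk.sanity_check().expect("malformed chunk");
}
```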
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
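Editor's note: as orientation for readers of this docs diff, here is a minimal sketch (not part of the generated pages) of the iteration API documented above. It sticks to the signatures shown on this page and assumes only the crate-root re-exports listed on the `re_chunk` index page further down in this diff.

```rust
use re_chunk::Chunk;

/// Walk every `(TimeInt, RowId)` index of a chunk, per timeline, then print
/// the per-timeline maxima. Uses only methods documented above.
fn print_indices(chunk: &Chunk) {
    // `timelines()` yields every time column present in the chunk.
    for timeline in chunk.timelines().keys() {
        // Per the docs above, `timeline` is ignored if the chunk is static.
        for (time, row_id) in chunk.iter_indices(timeline) {
            println!("{timeline:?} @ {time:?} -> {row_id:?}");
        }
    }

    // Maximum value for each and every timeline across the entire chunk.
    let max = chunk.timepoint_max();
    println!("per-timeline maxima: {max:?}");
}
```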
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
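Editor's note: a hedged sketch of driving the `ChunkBatcher` shown above (not taken from the docs themselves). It assumes `ChunkBatcherConfig` implements `Default`; in real code you would tune the batching thresholds instead.

```rust
use re_chunk::{Chunk, ChunkBatcher, ChunkBatcherConfig};

/// Push one pre-built chunk through the batching pipeline.
fn batch_chunk(chunk: Chunk) {
    // Assumption: `ChunkBatcherConfig::default()` is available; otherwise
    // construct the config with explicit thresholds.
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())
        .expect("failed to spawn the batching thread");

    // Pushing is handed off to the batching thread (see ordering semantics above).
    batcher.push_chunk(chunk);

    // Initiates a flush of the pipeline and returns immediately.
    batcher.flush_async();

    // Dropping `batcher` triggers a clean shutdown, as documented above.
}
```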
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
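Editor's note: a minimal sketch of the row-oriented builder documented above. The component batch is taken as a parameter so the example stays within this crate's API; the `ComponentBatch` trait is assumed to come from `re_types_core`, and the chunk is unwrapped with `expect` rather than naming the result alias.

```rust
use re_chunk::{Chunk, EntityPath, RowId, TimePoint};
use re_types_core::ComponentBatch;

/// Build a chunk holding a single row's worth of one component batch.
fn single_row_chunk(
    entity_path: EntityPath,
    timepoint: TimePoint,
    batch: &dyn ComponentBatch,
) -> Chunk {
    Chunk::builder(entity_path)
        // One row: a fresh `RowId`, the given timepoint, one serialized batch.
        .with_component_batch(RowId::new(), timepoint, batch)
        .build()
        .expect("chunk invariants upheld")
}
```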
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
impl<T> Borrow<T> for Twhere
diff --git a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
index c8b1f7d923db4..3ec8f158f1882 100644
--- a/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk::builder - Rust Struct re_chunk::builder::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
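Editor's note: a hedged sketch of a sparse row using `with_sparse_component_batches` as documented above. The component names and the batch are parameters (purely illustrative); `None` marks the column that is absent for this row, and `ComponentBatch` is assumed to live in `re_types_core`.

```rust
use re_chunk::{Chunk, ComponentName, EntityPath, RowId, TimePoint};
use re_types_core::ComponentBatch;

/// One row where `points` is logged but the color column is left empty.
fn sparse_row_chunk(
    entity_path: EntityPath,
    timepoint: TimePoint,
    points_name: ComponentName,
    colors_name: ComponentName,
    points: &dyn ComponentBatch,
) -> Chunk {
    Chunk::builder(entity_path)
        .with_sparse_component_batches(
            RowId::new(),
            timepoint,
            // `Some(..)` is present data, `None` an explicitly absent cell.
            [(points_name, Some(points)), (colors_name, None)],
        )
        .build()
        .expect("chunk invariants upheld")
}
```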
diff --git a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
index b0abcc5ca6455..80d70dc20884c 100644
--- a/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
+++ b/docs/rust/head/re_chunk/builder/struct.TimeColumnBuilder.html
@@ -1,11 +1,11 @@
TimeColumnBuilder in re_chunk::builder - Rust Struct re_chunk::builder::TimeColumnBuilder
source · pub struct TimeColumnBuilder {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
}
Expand description
Helper to incrementally build a TimeColumn
.
Can be created using TimeColumn::builder
.
-Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
+Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
See also TimeColumn::builder
.
-sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
+sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
sourcepub fn build(self) -> TimeColumn
Builds and returns the final TimeColumn
.
Auto Trait Implementations§
§impl Freeze for TimeColumnBuilder
§impl RefUnwindSafe for TimeColumnBuilder
§impl Send for TimeColumnBuilder
§impl Sync for TimeColumnBuilder
§impl Unpin for TimeColumnBuilder
§impl UnwindSafe for TimeColumnBuilder
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
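Editor's note: a small sketch of the builder flow shown above. It assumes `TimeColumn` is reachable at the crate root alongside `TimeColumnBuilder`; the `TimeInt` values are taken as input so the example sticks to the documented signatures.

```rust
use re_chunk::{TimeColumn, TimeInt, Timeline};

/// Row-oriented construction of a time column.
fn build_time_column(
    timeline: Timeline,
    times: impl IntoIterator<Item = TimeInt>,
) -> TimeColumn {
    let mut builder = TimeColumn::builder(timeline);
    for t in times {
        // `with_row` takes `&mut self`, so the builder is driven in a loop.
        builder.with_row(t);
    }
    // `build` consumes the builder and returns the final column.
    builder.build()
}
```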
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
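Editor's note: a sketch combining the sorting and dedup helpers documented above, including the recommended `with_id` call that the WARNING notes call for (the returned chunks keep the old `crate::ChunkId`).

```rust
use re_chunk::{Chunk, ChunkId, Timeline};

/// Per-timeline sorted, deduplicated view of a chunk with a fresh id.
fn latest_per_time(chunk: &Chunk, timeline: &Timeline) -> Chunk {
    chunk
        // No-op if the chunk is already sorted on this timeline (happy path).
        .sorted_by_timeline_if_unsorted(timeline)
        // Keep only the latest row of each run of identical index values.
        .deduped_latest_on_index(timeline)
        // Both helpers keep the old ChunkId, so stamp a new one.
        .with_id(ChunkId::new())
}
```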
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
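Editor's note: a sketch of the column-oriented constructors above, assuming `TimeColumn` is re-exported at the crate root and that `&str` converts into `TimelineName` (which is why `new_sequence` takes `impl Into<TimelineName>`).

```rust
use re_chunk::TimeColumn;

/// Build a sequence-typed time column from raw i64 values.
fn frame_column() -> TimeColumn {
    // Unsorted input is fine: sortedness is computed in O(n) when unspecified.
    let col = TimeColumn::new_sequence("frame", [2i64, 0, 1]);

    assert_eq!(col.num_rows(), 3);
    // The constructor stores the timestamps as given, without reordering.
    assert_eq!(col.times_raw(), &[2, 0, 1]);

    col
}
```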
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
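Editor's note: a minimal sketch of the unit-chunk helpers documented above: collapse a single-row chunk into a `UnitChunkShared` and read its index back on one timeline.

```rust
use re_chunk::{Chunk, RowId, TimeInt, Timeline, UnitChunkShared};

/// Returns the `(TimeInt, RowId)` index of a single-row chunk, if it is one.
fn single_row_index(chunk: Chunk, timeline: &Timeline) -> Option<(TimeInt, RowId)> {
    // `into_unit` only succeeds if the chunk holds exactly one row's worth of data.
    let unit: UnitChunkShared = chunk.into_unit()?;

    // For static chunks this returns the single static index instead.
    unit.index(timeline)
}
```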
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
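Aside: the ordering semantics above are easier to see in a tiny sketch. It assumes RowId::new() (named above as the preferred constructor) and the crate-root re-export re_chunk::RowId; treat it as an illustration, not part of the generated docs.

```rust
use re_chunk::RowId;

fn row_id_ordering() {
    // Preferred: a freshly minted, time-based unique ID (assumed constructor).
    let a = RowId::new();

    // Derived IDs; the docs above warn that careless use can cause conflicts.
    let b = a.next();             // a + 1
    let c = a.incremented_by(10); // a + 10

    assert!(a.as_u128() < b.as_u128());
    assert!(b.as_u128() < c.as_u128());

    // The upper bits encode the (approximate) creation time.
    let _nanos: u64 = a.nanoseconds_since_epoch();
}
```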
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
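For orientation, a minimal sketch of building such a query. Both LatestAtQuery::new and the re_log_types constructors used here are assumptions, since neither appears in this diff; TimeInt and Timeline are the crate-root re-exports listed earlier.

```rust
use re_chunk::{LatestAtQuery, TimeInt, Timeline};

fn latest_at_frame_42() -> LatestAtQuery {
    // Assumed constructors: `Timeline::new_sequence` and `TimeInt::new_temporal`
    // from re_log_types, plus `LatestAtQuery::new(timeline, at)`.
    let timeline = Timeline::new_sequence("frame");

    // "Get the latest version of the data available at this time."
    LatestAtQuery::new(timeline, TimeInt::new_temporal(42))
}
```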
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
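A short sketch of composing the builder-style options shown above. Only Timeline::new_sequence (from re_log_types) is assumed; everything and the keep_*/include_* toggles appear in the signatures above.

```rust
use re_chunk::{RangeQuery, Timeline};

fn whole_timeline_query() -> RangeQuery {
    // Assumed: `Timeline::new_sequence` from re_log_types.
    let timeline = Timeline::new_sequence("frame");

    // Everything on the timeline, keeping extra timelines but not extra components.
    RangeQuery::everything(timeline)
        .keep_extra_timelines(true)
        .keep_extra_components(false)
        .include_extended_bounds(false)
}
```

The accessors timeline(), range() and options() shown above can then be used to inspect the resulting query.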
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
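Tying the accessors and slicing helpers above together, a hedged sketch; ChunkId::new() and the crate-root ChunkResult alias are assumptions not shown in this diff.

```rust
use re_chunk::{Chunk, ChunkId, ChunkResult, Timeline};

fn slice_and_ship(mut chunk: Chunk, timeline: Timeline) -> ChunkResult<()> {
    println!(
        "{} rows, {} timelines, entity {:?}",
        chunk.num_rows(),
        chunk.num_timelines(),
        chunk.entity_path(),
    );

    // Sorts the underlying data if needed (no-op when already sorted).
    chunk.sort_if_unsorted();

    // Keep a single timeline column; per the warning above, the slice keeps
    // the old ChunkId, so give it a fresh one (`ChunkId::new()` is assumed).
    let sliced = chunk.timeline_sliced(timeline).with_id(ChunkId::new());

    // Sorting first is recommended above before preparing for transport.
    let _transport = sliced.to_transport()?;
    Ok(())
}
```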
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
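A hedged sketch of driving the batcher with the calls shown above. ChunkBatcherConfig::default(), the ChunkBatcherResult re-export and EntityPath::from("points") are assumptions; PendingRow construction is out of scope here and simply passed in.

```rust
use re_chunk::{Chunk, ChunkBatcher, ChunkBatcherConfig, ChunkBatcherResult, EntityPath, PendingRow};

fn feed_batcher(chunk: Chunk, row: PendingRow) -> ChunkBatcherResult<()> {
    // Assumed: a Default impl on the config struct.
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())?;

    // Pre-built chunks and single rows both go down the pipeline.
    batcher.push_chunk(chunk);
    batcher.push_row(EntityPath::from("points"), row);

    // Fire-and-forget flush; dropping the batcher triggers a clean shutdown.
    batcher.flush_async();
    Ok(())
}
```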
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
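A minimal sketch of the row-oriented path described above. The &dyn AsComponents input is a placeholder, and RowId::new() plus a Default impl on TimePoint are assumptions; builder, with_archetype and build appear in the signatures above.

```rust
use re_chunk::{Chunk, ChunkResult, EntityPath, RowId, TimePoint};
use re_types_core::AsComponents;

fn chunk_from_archetype(data: &dyn AsComponents) -> ChunkResult<Chunk> {
    Chunk::builder(EntityPath::from("points"))
        // One log call == one row; an empty TimePoint stands in for static data here.
        .with_archetype(RowId::new(), TimePoint::default(), data)
        .build()
}
```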
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
Struct re_chunk::builder::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
pub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
pub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
pub fn with_sparse_row( self, row_id: RowId, - timepoint: impl Into<TimePoint>, + timepoint: impl Into<TimePoint>, components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)> ) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
pub fn with_row( self, row_id: RowId, - timepoint: impl Into<TimePoint>, + timepoint: impl Into<TimePoint>, components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)> ) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
pub fn with_archetype( self, row_id: RowId, - timepoint: impl Into<TimePoint>, + timepoint: impl Into<TimePoint>, as_components: &dyn AsComponents ) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
pub fn with_component_batch( self, row_id: RowId, - timepoint: impl Into<TimePoint>, + timepoint: impl Into<TimePoint>, component_batch: &dyn ComponentBatch ) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
pub fn with_component_batches<'a>( self, row_id: RowId, - timepoint: impl Into<TimePoint>, + timepoint: impl Into<TimePoint>, component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch> ) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
pub fn with_sparse_component_batches<'a>( self, row_id: RowId, - timepoint: impl Into<TimePoint>, + timepoint: impl Into<TimePoint>, component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)> ) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
pub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
Struct re_chunk::builder::TimeColumnBuilder
source · pub struct TimeColumnBuilder {
- timeline: Timeline,
+ timeline: Timeline,
times: Vec<i64>,
}
Expand description
Helper to incrementally build a TimeColumn
.
Can be created using TimeColumn::builder
.
Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
pub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
Fields§
§timeline: Timeline
§times: Vec<i64>
Implementations§
source§impl TimeColumnBuilder
impl TimeColumnBuilder
sourcepub fn new(timeline: Timeline) -> Self
pub fn new(timeline: Timeline) -> Self
Initializes a new TimeColumnBuilder
.
See also TimeColumn::builder
.
sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
pub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
+sourcepub fn with_row(&mut self, time: TimeInt) -> &mut Self
pub fn with_row(&mut self, time: TimeInt) -> &mut Self
Add a row’s worth of time data using the given timestamp.
sourcepub fn build(self) -> TimeColumn
pub fn build(self) -> TimeColumn
Builds and returns the final TimeColumn
.
Auto Trait Implementations§
impl Freeze for TimeColumnBuilder
impl RefUnwindSafe for TimeColumnBuilder
impl Send for TimeColumnBuilder
impl Sync for TimeColumnBuilder
impl Unpin for TimeColumnBuilder
impl UnwindSafe for TimeColumnBuilder
Blanket Implementations§
source§impl<T> Az for T
impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
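Note that with_row takes &mut self while build takes self, so the builder is driven statement-by-statement rather than as a consuming chain. A sketch, assuming Timeline::new_sequence and TimeInt::new_temporal from re_log_types and the crate-root re-export of TimeColumn:

```rust
use re_chunk::{TimeColumn, TimeInt, Timeline};

fn frame_column() -> TimeColumn {
    // Assumed constructors from re_log_types.
    let mut builder = TimeColumn::builder(Timeline::new_sequence("frame"));
    for frame in 0..3_i64 {
        builder.with_row(TimeInt::new_temporal(frame));
    }
    builder.build()
}
```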
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
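A minimal sketch of the `TimeColumn` constructors and accessors documented above, assuming `TimeColumn` is reachable at the crate root and that `&str` converts into `TimelineName`; the timeline name and values are purely illustrative:

```rust
use re_chunk::TimeColumn; // assumed re-export of re_chunk::chunk::TimeColumn

fn main() {
    // `new_sequence` takes anything convertible into a timeline name plus an
    // iterator of i64-convertible values; sortedness is computed when unknown.
    let frames = TimeColumn::new_sequence("frame", [0_i64, 1, 2, 5, 5, 9]);

    println!("rows  = {}", frames.num_rows());
    println!("range = {:?}", frames.time_range()); // min/max only, not necessarily contiguous

    // `times` yields `TimeInt`s, front to back.
    for t in frames.times() {
        println!("t = {t:?}");
    }
}
```

For raw arrow data, the column-oriented `TimeColumn::new(is_sorted, timeline, times)` or the row-oriented `TimeColumn::builder(timeline)` shown above would be used instead.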
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
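A short sketch of the `UnitChunkShared` helpers described above, under the assumption that `ChunkShared` is the `Arc<Chunk>` alias listed in the crate index and that `Timeline::new_sequence` exists on the re-exported `Timeline`:

```rust
use std::sync::Arc;

use re_chunk::{Chunk, Timeline};

fn inspect_unit(chunk: Chunk) {
    let shared: Arc<Chunk> = Arc::new(chunk); // the `ChunkShared` alias

    // `to_unit` only succeeds if the chunk holds exactly one row's worth of data.
    if let Some(unit) = shared.to_unit() {
        let frame = Timeline::new_sequence("frame"); // assumed constructor
        // For a static chunk the timeline is ignored and the single static
        // index / RowId is returned, per the docs above.
        println!("index  = {:?}", unit.index(&frame));
        println!("row id = {:?}", unit.row_id());
    }
}
```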
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
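A tiny sketch of the `ChunkId` behaviour described above; `ChunkId::new` is only referenced in the docs ("Prefer ChunkId::new when unsure"), so its exact signature is an assumption here:

```rust
use re_chunk::ChunkId;

fn main() {
    let a = ChunkId::new(); // assumed time-based constructor (a Tuid underneath)
    let b = a.next();       // next logical id -- beware of conflicts, see above

    // Both accessors are documented above.
    println!("a = {:032x}, b = {:032x}", a.as_u128(), b.as_u128());
    println!("created ~{} ns after the unix epoch", a.nanoseconds_since_epoch());
}
```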
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
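A sketch of building the range and latest-at queries documented above. `Timeline` and `ResolvedTimeRange` construction go through the `re_log_types` re-export, and `LatestAtQuery::new` is an assumed constructor; everything else uses only the methods shown above:

```rust
use re_chunk::{LatestAtQuery, RangeQuery, Timeline};
use re_chunk::external::re_log_types::ResolvedTimeRange; // assumed path via the re-export

fn main() {
    let frame = Timeline::new_sequence("frame"); // assumed constructor

    // Everything on the timeline, keeping extra timelines and components around:
    let everything = RangeQuery::everything(frame)
        .keep_extra_timelines(true)
        .keep_extra_components(true);
    println!("{:?} over {:?}", everything.timeline(), everything.range());

    // A bounded range; guaranteed never to include `TimeInt::STATIC`:
    let bounded = RangeQuery::new(frame, ResolvedTimeRange::new(10_i64, 20_i64));

    // Latest-at counterpart (constructor assumed):
    let latest = LatestAtQuery::new(frame, 15_i64);
    println!("{bounded:?} / {latest:?}");
}
```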
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
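A sketch of chaining the sorting and slicing helpers documented above; note the repeated warning that derived chunks keep the old `ChunkId` unless `with_id` is called (`ChunkId::new` is assumed, as above):

```rust
use re_chunk::{Chunk, ChunkId, Timeline};

fn tidy(chunk: &Chunk) -> Chunk {
    let frame = Timeline::new_sequence("frame"); // assumed constructor

    chunk
        .sorted_by_timeline_if_unsorted(&frame) // no-op on the happy path
        .deduped_latest_on_index(&frame)        // keep the last row of each run of identical index values
        .timeline_sliced(frame)                 // keep only the "frame" timeline column
        .with_id(ChunkId::new())                // don't reuse the old ChunkId (see WARNING above)
}
```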
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
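A sketch of driving the `ChunkBatcher` documented above; the config is left at a `Default` value (assumed to exist), and the chunk itself would come from the `ChunkBuilder` shown on the next page:

```rust
use re_chunk::{Chunk, ChunkBatcher, ChunkBatcherConfig};

fn run(chunk: Chunk) {
    // Spawns the batching thread; dropping `batcher` later triggers a clean shutdown.
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())
        .expect("failed to spawn the batching thread");

    // Pre-built chunks are pushed straight down the pipeline.
    batcher.push_chunk(chunk);

    // Ask for a flush without blocking the caller.
    batcher.flush_async();
}
```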
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
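A sketch of the row-oriented builder documented above. `Points2D` comes from the separate `re_types` crate, the static (empty) `TimePoint` keeps the example minimal, and `RowId::new` is referenced by the docs but its signature is assumed here:

```rust
use re_chunk::{Chunk, EntityPath, RowId, TimePoint};
use re_types::archetypes::Points2D; // assumed external dependency

fn build() -> Chunk {
    Chunk::builder(EntityPath::from("points/simple"))
        .with_archetype(
            RowId::new(),         // assumed constructor
            TimePoint::default(), // empty timepoint => static chunk
            &Points2D::new([(0.0, 0.0), (1.0, 1.0)]),
        )
        .build() // runs the same sanity checks as Chunk::new
        .expect("chunk failed its sanity checks")
}
```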
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.Chunk.html b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
index ca7b64f91fec0..a131d32c2778a 100644
--- a/docs/rust/head/re_chunk/chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk::chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
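A small sketch of how the `Option<u64>` return value is typically consumed, treating an absent component column as zero events:

```rust
use re_chunk::{Chunk, ComponentName};

// Sketch: number of events for one component, or 0 if the chunk has no
// column for it (the `None` case documented above).
fn num_events_or_zero(chunk: &Chunk, component_name: ComponentName) -> u64 {
    chunk.num_events_for_component(component_name).unwrap_or(0)
}
```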
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)) of a Chunk, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)) of a Chunk, for a given timeline.
If the chunk is static, timeline will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)) of a Chunk, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)) of a Chunk, for a given
timeline and component.
If the chunk is static, timeline will be ignored.
This is different than Self::iter_indices in that it will only yield indices for rows at which there is data for the specified component_name.
See also Self::iter_indices.
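A sketch of the difference in practice: collecting only the indices of rows that actually carry data for one component, using the signature documented above.

```rust
use re_chunk::{Chunk, ComponentName, RowId, TimeInt, Timeline};

// Sketch: the (time, row-id) index of every row that has data for
// `component_name` on `timeline`.
fn component_indices(
    chunk: &Chunk,
    timeline: &Timeline,
    component_name: &ComponentName,
) -> Vec<(TimeInt, RowId)> {
    chunk.iter_component_indices(timeline, component_name).collect()
}
```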
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint]s of a Chunk.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoints of a Chunk.
See also:
Self::iter_component_timepoints.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint]s of a Chunk, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoints of a Chunk, for a given component.
This is different than Self::iter_timepoints in that it will only yield timepoints for rows at which there is data for the specified component_name.
See also Self::iter_timepoints.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)) of a Chunk, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)) of a Chunk, for a given timeline.
If the chunk is static, timeline will be ignored.
The returned iterator outlives self, thus it can be passed around freely.
The tradeoff is that self must be an Arc.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk that is sorted by (<timeline>, RowId).
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk that is sorted by (<timeline>, RowId).
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId! Change it with Self::with_id.
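A sketch of the pattern the warning suggests: sort into a new chunk, then immediately give it a fresh id. `ChunkId::new()` is an assumption (only referenced, not shown, in this excerpt).

```rust
use re_chunk::{Chunk, ChunkId, Timeline};

// Sketch: a time-sorted copy of `chunk` under a fresh ChunkId, per the
// WARNING above. `ChunkId::new()` is assumed.
fn time_sorted_copy(chunk: &Chunk, timeline: &Timeline) -> Chunk {
    chunk
        .sorted_by_timeline_if_unsorted(timeline)
        .with_id(ChunkId::new())
}
```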
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
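A sketch of a full transport round-trip with the conversions above, assuming `ChunkResult` is the usual `Result<_, ChunkError>` alias and that `ChunkError` and `TransportChunk` are available at the crate root as the index page suggests.

```rust
use re_chunk::{Chunk, ChunkError, TransportChunk};

// Sketch: sort, convert to the transport representation, then back again.
fn transport_round_trip(mut chunk: Chunk) -> Result<Chunk, ChunkError> {
    chunk.sort_if_unsorted(); // "probably a good idea" per the note above
    let transport: TransportChunk = chunk.to_transport()?;
    Chunk::from_transport(&transport)
}
```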
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times.
+§time_range: ResolvedTimeRange
The time range covered by Self::times.
Not necessarily contiguous! Just the min and max value found in Self::times.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
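A sketch of the row-oriented constructors above: a sequence-typed column for frames 0 through 4. That `&str` converts into `TimelineName` is an assumption here.

```rust
use re_chunk::chunk::TimeColumn;

// Sketch: a sequence-typed time column covering frames 0..5.
// `&str: Into<TimelineName>` is assumed.
fn frame_column() -> TimeColumn {
    TimeColumn::new_sequence("frame", 0..5i64)
}
```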
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId) pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId) pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId of the single row within, on the given timeline.
Returns the single static RowId if the chunk is static.
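A sketch combining the two accessors above (Debug formatting of `TimeInt` and `RowId` is assumed):

```rust
use re_chunk::{Timeline, UnitChunkShared};

// Sketch: report the single row's index on `timeline`, falling back to just
// its RowId when no index is available for that timeline.
fn describe_unit(unit: &UnitChunkShared, timeline: &Timeline) {
    match unit.index(timeline) {
        Some((time, row_id)) => println!("row {row_id:?} at {time:?}"),
        None => println!("no index on this timeline; row id: {:?}", unit.row_id()),
    }
}
```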
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
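A sketch of the `u128` conversions listed just below; `ChunkId`s round-trip losslessly, which is handy for map keys or logging.

```rust
use re_chunk::ChunkId;

// Sketch: ChunkId <-> u128 round-trip via `as_u128` / `from_u128`.
fn chunk_id_roundtrip(id: ChunkId) -> ChunkId {
    ChunkId::from_u128(id.as_u128())
}
```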
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
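A sketch of `incremented_by`: reserving a small contiguous block of `RowId`s starting from a given id (the usual caveat about conflicts applies).

```rust
use re_chunk::RowId;

// Sketch: `n` consecutive row ids starting at `first`, built with
// `incremented_by` as documented above.
fn row_id_block(first: RowId, n: u64) -> Vec<RowId> {
    (0..n).map(|i| first.incremented_by(i)).collect()
}
```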
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces PendingRows into Chunks based upon the thresholds defined in the associated ChunkBatcherConfig.
- Defines the different thresholds of the associated ChunkBatcher.
- Helper to incrementally build a Chunk.
- The actual iterator implementation for Chunk::iter_component.
- The underlying item type for ChunkComponentIter.
- A unique ID for a crate::Chunk.
- A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a TimeColumn.
- A Chunk that is ready for transport. Obtained by calling Chunk::to_transport.
- A ChunkShared that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a ChunkBatcher.
- Errors that can occur when creating/manipulating Chunks, directly or indirectly through the use of a crate::ChunkBatcher.
Type Aliases§
- A simple type alias for an Arc<Chunk>.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}
Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
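A sketch built from the constructors documented below: a query over an entire timeline that also keeps extra timelines around. `Timeline::new_sequence` comes from `re_log_types` and is an assumption in this excerpt.

```rust
use re_chunk::{RangeQuery, Timeline};

// Sketch: query everything on a sequence timeline named "frame", keeping
// extra timelines in the results. `Timeline::new_sequence` is assumed.
fn everything_on_frame() -> RangeQuery {
    RangeQuery::everything(Timeline::new_sequence("frame")).keep_extra_timelines(true)
}
```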
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
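These index iterators are the cheap way to walk a chunk's control and time columns without touching the component payloads. A minimal usage sketch, assuming a chunk logged on a sequence timeline named "frame" and carrying a position component — the timeline name, the component name, and the `Timeline::new_sequence` / `ComponentName::from` constructors are illustrative assumptions, not taken from this page:
```rust
use re_chunk::{Chunk, ComponentName, Timeline};

/// Sketch only: print every index of `chunk`, then only the ones carrying a given component.
fn dump_indices(chunk: &Chunk) {
    // Assumption: the chunk was logged on a sequence timeline called "frame".
    let timeline = Timeline::new_sequence("frame");

    // All (time, row-id) pairs, regardless of which components are present.
    for (time, row_id) in chunk.iter_indices(&timeline) {
        println!("{time:?} -> {row_id:?}");
    }

    // Only the rows that actually hold data for this (assumed) component.
    let component_name = ComponentName::from("rerun.components.Position3D");
    for (time, row_id) in chunk.iter_component_indices(&timeline, &component_name) {
        println!("{time:?} -> {row_id:?} (has data)");
    }
}
```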
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
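Because the sorted copy keeps the old `ChunkId`, a typical pattern is to immediately assign a fresh id. A hedged sketch — the "frame" timeline and `Timeline::new_sequence` are assumptions for illustration:
```rust
use re_chunk::{Chunk, ChunkId, Timeline};

/// Sketch only: return a copy of `chunk` sorted by the "frame" timeline,
/// with a new id so it cannot be confused with the original.
fn resorted_by_frame(chunk: &Chunk) -> Chunk {
    let frame = Timeline::new_sequence("frame"); // assumed timeline
    chunk
        .sorted_by_timeline_if_unsorted(&frame) // no-op if already sorted
        .with_id(ChunkId::new())                // heed the WARNING above
}
```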
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
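Putting to_arrow_msg and from_arrow_msg together gives a simple round trip through the transport layer. A sketch, under the assumption that `ChunkResult` is the crate's usual result alias re-exported alongside `Chunk`:
```rust
use re_chunk::{Chunk, ChunkResult};

/// Sketch only: serialize a chunk to an `ArrowMsg` and deserialize it back.
fn roundtrip(mut chunk: Chunk) -> ChunkResult<Chunk> {
    // "It is probably a good idea to sort the chunk first."
    chunk.sort_if_unsorted();

    let msg = chunk.to_arrow_msg()?; // Chunk -> ArrowMsg
    Chunk::from_arrow_msg(&msg)      // ArrowMsg -> Chunk
}
```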
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
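In practice the batcher is created once, fed chunks or rows, and flushed when a checkpoint is needed. A minimal sketch, assuming `ChunkBatcherConfig` implements `Default` (the config is only described here as a set of thresholds):
```rust
use re_chunk::{Chunk, ChunkBatcher, ChunkBatcherConfig};

/// Sketch only: push pre-built chunks through a batcher and request an async flush.
fn batch_chunks(chunks: Vec<Chunk>) {
    // Assumption: `ChunkBatcherConfig::default()` provides reasonable thresholds.
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())
        .expect("failed to spawn the batching thread");

    for chunk in chunks {
        batcher.push_chunk(chunk);
    }

    // Returns immediately; dropping `batcher` later performs a clean shutdown.
    batcher.flush_async();
}
```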
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
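The builder methods above compose naturally: pick a row id, a timepoint, and the component data for that row, then repeat. A hedged sketch that builds a static chunk from pre-serialized arrow arrays; the assumptions are that `ChunkResult` is re-exported at the crate root and that an empty `TimePoint` (via `TimePoint::default()`) marks the data as static:
```rust
use re_chunk::{ArrowArray, Chunk, ChunkResult, ComponentName, EntityPath, RowId, TimePoint};

/// Sketch only: build a static chunk, one (component, arrow array) cell per row.
fn build_static_chunk(
    entity_path: EntityPath,
    rows: Vec<(ComponentName, Box<dyn ArrowArray>)>,
) -> ChunkResult<Chunk> {
    let mut builder = Chunk::builder(entity_path);

    for (component_name, array) in rows {
        builder = builder.with_row(
            RowId::new(),         // every row gets its own id
            TimePoint::default(), // assumed: empty timepoint == static data
            [(component_name, array)],
        );
    }

    builder.build()
}
```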
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@ Its time columns might or might not be ascendingly sorted, depending on how the data was logged.This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the hot path (e.g. during garbage collection).
is_sorted: bool
Is the chunk as a whole sorted by RowId
?
row_ids: StructArray
The respective RowId
s for each row of data.
timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
Implementations§
source§impl Chunk
impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
pub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
pub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
Implementations§
source§impl Chunk
impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
pub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
pub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
pub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@sourcepub fn zeroed(self) -> Self
pub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
pub fn time_range_per_component( &self ) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
pub fn num_events_cumulative_per_unique_time( &self, timeline: &Timeline ) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline +) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
fn num_events_cumulative_per_unique_time_sorted( &self, time_column: &TimeColumn ) -> Vec<(TimeInt, u64)>
fn num_events_cumulative_per_unique_time_unsorted( &self, time_column: &TimeColumn ) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
pub fn num_events_for_component( +) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
pub fn num_events_for_component( &self, component_name: ComponentName ) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
pub fn new( id: ChunkId, entity_path: EntityPath, is_sorted: Option<bool>, row_ids: ArrowStructArray, timelines: BTreeMap<Timeline, TimeColumn>, components: BTreeMap<ComponentName, ArrowListArray<i32>> ) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
pub fn from_native_row_ids( id: ChunkId, entity_path: EntityPath, is_sorted: Option<bool>, row_ids: &[RowId], timelines: BTreeMap<Timeline, TimeColumn>, components: BTreeMap<ComponentName, ArrowListArray<i32>> ) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
pub fn from_auto_row_ids( id: ChunkId, entity_path: EntityPath, timelines: BTreeMap<Timeline, TimeColumn>, components: BTreeMap<ComponentName, ArrowListArray<i32>> ) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
pub fn new_static( id: ChunkId, entity_path: EntityPath, is_sorted: Option<bool>, row_ids: ArrowStructArray, components: BTreeMap<ComponentName, ArrowListArray<i32>> ) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
pub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
pub fn add_component( +
pub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
pub fn add_component( &mut self, component_name: ComponentName, list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
pub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
source§impl Chunk
impl Chunk
pub fn id(&self) -> ChunkId
pub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
pub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
impl Chunk
pub fn id(&self) -> ChunkId
pub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
pub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
pub fn num_controls(&self) -> usize
pub fn num_timelines(&self) -> usize
pub fn num_components(&self) -> usize
pub fn num_rows(&self) -> usize
pub fn is_empty(&self) -> bool
pub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
pub fn row_ids_raw( &self ) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
pub fn row_id_range(&self) -> Option<(RowId, RowId)>
pub fn is_static(&self) -> bool
pub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
pub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
pub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
pub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
pub fn is_static(&self) -> bool
pub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
pub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
pub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
pub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
pub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
pub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
pub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
pub fn iter_indices( &self, timeline: &Timeline ) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
-
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
pub fn iter_component_indices( &self, timeline: &Timeline, component_name: &ComponentName ) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
pub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
pub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
pub fn iter_component_timepoints( &self, component_name: &ComponentName ) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
source§impl Chunk
impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
pub fn iter_indices_owned( self: Arc<Self>, timeline: &Timeline ) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
pub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
pub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
pub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
pub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
pub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
pub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
pub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
pub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
pub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
pub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
pub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
pub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the Chunk has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId! Change it with Self::with_id.
source§impl Chunk
impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
pub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-pub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
impl Chunk
pub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
pub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
fn eq(&self, other: &Self) -> bool
self
and other
values to be equal, and is used
+pub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
impl Chunk
pub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
pub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
impl PartialEq for Chunk
source§impl SizeBytes for Chunk
impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
fn heap_size_bytes(&self) -> u64
self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
fn total_size_bytes(&self) -> u64
self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
fn stack_size_bytes(&self) -> u64
self
on the stack, in bytes. Read moreAuto Trait Implementations§
impl !Freeze for Chunk
impl !RefUnwindSafe for Chunk
impl Send for Chunk
impl Sync for Chunk
impl Unpin for Chunk
impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
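The three constructors above differ only in how the raw values are interpreted. A small sketch — the timeline names are placeholders, and the assumption is that plain string literals convert into `TimelineName`:
```rust
use re_chunk::chunk::TimeColumn;

/// Sketch only: build time columns for a frame counter and a wall-clock timeline.
fn example_time_columns() -> (TimeColumn, TimeColumn, TimeColumn) {
    // Sequence timeline: a monotonically increasing frame number.
    let frames = TimeColumn::new_sequence("frame", [0i64, 1, 2, 3]);

    // A timeline given in seconds…
    let seconds = TimeColumn::new_seconds("clock", [0.0f64, 0.1, 0.2, 0.3]);

    // …or directly in nanoseconds.
    let nanos = TimeColumn::new_nanos("clock", [0i64, 100_000_000, 200_000_000]);

    (frames, seconds, nanos)
}
```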
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
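When many rows are created in one go, a common pattern is to mint one RowId and derive the rest from it so they stay adjacent in RowId order. A hedged sketch using only the constructors described above:
```rust
use re_chunk::RowId;

/// Sketch only: produce `n` consecutive row ids derived from a single fresh one.
fn consecutive_row_ids(n: u64) -> Vec<RowId> {
    let first = RowId::new(); // the docs recommend `RowId::new` when unsure
    (0..n).map(|i| first.incremented_by(i)).collect()
}
```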
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
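A minimal sketch of composing the builder-style options above; it only uses the constructors and setters listed on this page, and assumes `ResolvedTimeRange` is reachable through the `re_log_types` re-export:

```rust
use re_chunk::{RangeQuery, Timeline};
use re_log_types::ResolvedTimeRange;

// Build a range query that keeps every extra timeline and component around,
// as described by the keep_extra_* / include_extended_bounds setters above.
fn full_fidelity_range(timeline: Timeline, range: ResolvedTimeRange) -> RangeQuery {
    RangeQuery::new(timeline, range)
        .keep_extra_timelines(true)
        .keep_extra_components(true)
        .include_extended_bounds(true)
}
```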
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
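A small sketch of driving the index iterators above; nothing here is invented beyond the function name, and the signatures are the ones documented on this page:

```rust
use re_chunk::{Chunk, ComponentName, Timeline};

// Compare the full index of a chunk against the per-component index described above:
// the second count can only be smaller or equal, since component columns are sparse.
fn dense_vs_sparse(chunk: &Chunk, timeline: &Timeline, component: &ComponentName) -> (usize, usize) {
    let all_rows = chunk.iter_indices(timeline).count();
    let component_rows = chunk.iter_component_indices(timeline, component).count();
    (all_rows, component_rows)
}
```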
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
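As a usage note for the transport conversions listed on the Chunk page above, a minimal round-trip sketch, assuming `TransportChunk` and `ChunkResult` are exported at the crate root like the other types in the module index:

```rust
use re_chunk::{Chunk, ChunkResult, TransportChunk};

// Sort first (as the docs suggest), convert to the transport representation,
// then parse it back into an in-memory chunk.
fn roundtrip(mut chunk: Chunk) -> ChunkResult<Chunk> {
    chunk.sort_if_unsorted();
    let transport: TransportChunk = chunk.to_transport()?;
    Chunk::from_transport(&transport)
}
```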
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
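A hedged sketch of the lifecycle described above: construct, push, flush. It assumes `ChunkBatcherConfig` implements `Default` and that `PendingRow`/`ChunkBatcherResult` are reachable from the crate root; the row itself is taken as a parameter rather than invented here.

```rust
use re_chunk::{ChunkBatcher, ChunkBatcherConfig, EntityPath, PendingRow};

// Push one pre-built row through the batching pipeline and request an async flush.
// Dropping the batcher at the end of the scope triggers the clean shutdown
// described in ChunkBatcher::new.
fn push_one_row(entity_path: EntityPath, row: PendingRow) -> re_chunk::ChunkBatcherResult<()> {
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())?;
    batcher.push_row(entity_path, row);
    batcher.flush_async();
    Ok(())
}
```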
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
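A sketch of the row-oriented path the builder methods above describe, assuming `ComponentBatch` comes from `re_types_core` (whose `ComponentName` re-export is shown in the module index) and that an empty `TimePoint` marks the row as static:

```rust
use re_chunk::{Chunk, ChunkResult, EntityPath, RowId, TimePoint};
use re_types_core::ComponentBatch;

// Build a one-row chunk from a single serialized component batch.
// An empty TimePoint (no timelines) is used here to log the row as static data.
fn single_row_chunk(entity_path: EntityPath, batch: &dyn ComponentBatch) -> ChunkResult<Chunk> {
    Chunk::builder(entity_path)
        .with_component_batch(RowId::new(), TimePoint::default(), batch)
        .build()
}
```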
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
index 096354a96c81a..649b5df4a168b 100644
--- a/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
+++ b/docs/rust/head/re_chunk/chunk/struct.TimeColumn.html
@@ -1,9 +1,9 @@
TimeColumn in re_chunk::chunk - Rust Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
§is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
-§time_range: ResolvedTimeRange
The time range covered by Self::times
.
+§time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
-Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
+Implementations§
source§impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of seconds type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
-source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
source§impl TimeColumn
sourcepub fn timeline(&self) -> &Timeline
sourcepub fn name(&self) -> &str
sourcepub fn time_range(&self) -> ResolvedTimeRange
sourcepub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
sourcepub fn times_raw(&self) -> &[i64]
sourcepub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
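A small sketch of the constructors above, assuming `&str` converts into `TimelineName` (the same `Into<TimelineName>` bound shown in the signatures) and using the `re_chunk::chunk::TimeColumn` path documented on this page:

```rust
use re_chunk::chunk::TimeColumn;

// Two time columns for the same rows: a frame counter and a log time in seconds.
// Timeline names are given as plain strings here (assumed to convert into TimelineName).
fn example_columns() -> (TimeColumn, TimeColumn) {
    let frames = TimeColumn::new_sequence("frame", [0i64, 1, 2, 3]);
    let times = TimeColumn::new_seconds("log_time", [0.0f64, 0.1, 0.2, 0.3]);
    (frames, times)
}
```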
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-re_chunk::external - Rust
\ No newline at end of file
+re_chunk::external - Rust Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
index 07b4b878504f8..7b3a44fe382cc 100644
--- a/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
+++ b/docs/rust/head/re_chunk/helpers/struct.UnitChunkShared.html
@@ -1,5 +1,5 @@
UnitChunkShared in re_chunk::helpers - Rust Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
-Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
+Tuple Fields§
§0: ChunkShared
Implementations§
sourcepub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
@@ -38,7 +38,7 @@
Each row after that will be monotonically increasing.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -264,8 +264,8 @@
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -334,10 +334,10 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -354,7 +354,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read more
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
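A brief sketch of reading a unit chunk with the accessors above; only methods listed on this page are used, and `ComponentName` is the crate-root re-export shown in the module index:

```rust
use re_chunk::{ComponentName, RowId, UnitChunkShared};

// For a single-row chunk: which RowId does the row live at, and how many
// events does it carry for the given component (None if the component is absent)?
fn describe_unit(unit: &UnitChunkShared, component: ComponentName) -> (Option<RowId>, Option<u64>) {
    (unit.row_id(), unit.num_events_for_component(component))
}
```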
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording are considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing value back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
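The LatestAtQuery hunk above documents a point-in-time query: the latest data available at `at` on a given timeline. Below is a minimal sketch of constructing one, assuming the crate-root re-exports listed elsewhere in this diff and assuming `LatestAtQuery::new` and `TimeInt::new_temporal` constructors (neither constructor is shown in the hunk itself).

```rust
use re_chunk::{LatestAtQuery, TimeInt, Timeline};

fn main() {
    // A sequence timeline named "frame"; `Timeline::new_sequence` comes from the
    // re-exported `re_log_types` API and is assumed here.
    let timeline = Timeline::new_sequence("frame");

    // `LatestAtQuery::new` and `TimeInt::new_temporal` are assumed constructors;
    // the hunk above only shows the private `timeline` / `at` fields.
    let query = LatestAtQuery::new(timeline, TimeInt::new_temporal(42));

    // `Debug`, `Clone`, `Hash` and `PartialEq` are implemented per the trait list above.
    println!("{query:?}");
}
```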
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
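The RangeQuery hunk above documents `new`, `everything`, and the `keep_extra_*` / `include_extended_bounds` toggles. A minimal sketch of building such a query; `ResolvedTimeRange::new` is an assumption (only the field's type appears in the hunk), reached through the `external` re-export module shown later in this diff.

```rust
use re_chunk::{external::re_log_types::ResolvedTimeRange, RangeQuery, Timeline};

fn main() {
    let timeline = Timeline::new_sequence("frame");

    // `ResolvedTimeRange::new` is assumed; only the `range` field's type is shown above.
    let range = ResolvedTimeRange::new(10, 20);

    // Per `RangeQuery::new`'s docs, the query never includes `TimeInt::STATIC`.
    let query = RangeQuery::new(timeline, range)
        .keep_extra_timelines(true)     // see `RangeQueryOptions::keep_extra_timelines`
        .keep_extra_components(false)   // see `RangeQueryOptions::keep_extra_components`
        .include_extended_bounds(false);

    println!("{:?} over {:?}", query.timeline(), query.range());
}
```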
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
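The Chunk hunks above cover the sorting and slicing helpers (`timeline_sliced`, `sorted_by_timeline_if_unsorted`, `deduped_latest_on_index`, `with_id`). A minimal sketch combining them; it assumes the input chunk was built elsewhere (e.g. via `Chunk::builder`) and uses the `ChunkId::new` constructor that the ChunkId docs below recommend.

```rust
use re_chunk::{Chunk, ChunkId, Timeline};

/// Keep only one timeline, sort by it, dedupe identical indices, and re-id the result.
/// A sketch only: `chunk` is assumed to have been built via `Chunk::builder`.
fn tidy_for_timeline(chunk: &Chunk, timeline: Timeline) -> Chunk {
    chunk
        .timeline_sliced(timeline)                  // keep (at most) this timeline column
        .sorted_by_timeline_if_unsorted(&timeline)  // no-op on the happy path
        .deduped_latest_on_index(&timeline)         // keep the latest row per identical index
        // The helpers above keep the old `ChunkId`, so give the result a fresh one.
        .with_id(ChunkId::new())
}
```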
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
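The ChunkBatcher hunk above documents the push and flush entry points. A minimal sketch of driving the batcher, assuming `ChunkBatcherConfig` implements `Default` and that `ChunkBatcherResult` is exported at the crate root (neither detail is shown in this hunk).

```rust
use re_chunk::{Chunk, ChunkBatcher, ChunkBatcherConfig, ChunkBatcherResult};

fn send_chunk(chunk: Chunk) -> ChunkBatcherResult<()> {
    // Spawns the batching pipeline; dropping the batcher triggers a clean shutdown,
    // per `ChunkBatcher::new`'s docs.
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())?;

    batcher.push_chunk(chunk); // forwarded down the batching pipeline
    batcher.flush_async();     // initiate a flush and return immediately

    Ok(())
}
```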
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
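The ChunkBuilder hunk above walks through the row-oriented construction flow. A minimal sketch of a single static row, where the caller supplies already-serialized component data; the entity path and component name are illustrative only, `ComponentName: From<&str>` and a crate-root `ChunkResult` export are assumptions.

```rust
use re_chunk::{
    ArrowArray, Chunk, ChunkResult, ComponentName, EntityPath, RowId, TimePoint,
};

// A sketch only: `data` is a pre-built arrow array holding one row's worth of a component.
fn single_row_chunk(data: Box<dyn ArrowArray>) -> ChunkResult<Chunk> {
    // Hypothetical component name, for illustration only.
    let component = ComponentName::from("rerun.components.Position3D");

    Chunk::builder(EntityPath::from("points"))
        .with_row(
            RowId::new(),
            TimePoint::default(), // an empty timepoint marks the row as static
            [(component, data)],
        )
        .build()
}
```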
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
Struct re_chunk::chunk::TimeColumn
source · pub struct TimeColumn {
- pub(crate) timeline: Timeline,
+ pub(crate) timeline: Timeline,
pub(crate) times: PrimitiveArray<i64>,
pub(crate) is_sorted: bool,
- pub(crate) time_range: ResolvedTimeRange,
-}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
+ pub(crate) time_range: ResolvedTimeRange,
+}
Fields§
§timeline: Timeline
§times: PrimitiveArray<i64>
Every single timestamp for this timeline.
- This might or might not be sorted, depending on how the data was logged.
- This is guaranteed to always be dense, because chunks are split anytime a timeline is
@@ -13,33 +13,33 @@
is_sorted: bool
Is Self::times
sorted?
This is completely independent of Chunk::is_sorted
: a timeline doesn’t necessarily
follow the global RowId
-based order, although it does in most cases (happy path).
time_range: ResolvedTimeRange
The time range covered by Self::times
.
time_range: ResolvedTimeRange
The time range covered by Self::times
.
Not necessarily contiguous! Just the min and max value found in Self::times
.
Implementations§
source§impl TimeColumn
impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
pub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
Implementations§
source§impl TimeColumn
impl TimeColumn
sourcepub fn builder(timeline: Timeline) -> TimeColumnBuilder
pub fn builder(timeline: Timeline) -> TimeColumnBuilder
Initializes a new TimeColumnBuilder
.
source§impl TimeColumn
impl TimeColumn
sourcepub fn new(
is_sorted: Option<bool>,
- timeline: Timeline,
+ timeline: Timeline,
times: ArrowPrimitiveArray<i64>
) -> Self
Creates a new TimeColumn
.
Iff you know for sure whether the data is already appropriately sorted or not, specify is_sorted
.
When left unspecified (None
), it will be computed in O(n) time.
For a row-oriented constructor, see Self::builder
.
sourcepub fn new_sequence(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_seconds(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<f64>>
) -> Self
Creates a new TimeColumn
of sequence type.
sourcepub fn new_nanos(
- name: impl Into<TimelineName>,
+ name: impl Into<TimelineName>,
times: impl IntoIterator<Item = impl Into<i64>>
) -> Self
Creates a new TimeColumn
of nanoseconds type.
source§impl TimeColumn
impl TimeColumn
pub fn timeline(&self) -> &Timeline
pub fn name(&self) -> &str
pub fn time_range(&self) -> ResolvedTimeRange
pub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
pub fn times_raw(&self) -> &[i64]
pub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
pub fn num_rows(&self) -> usize
pub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
+
pub fn time_range_per_component( +
source§impl TimeColumn
impl TimeColumn
pub fn timeline(&self) -> &Timeline
pub fn name(&self) -> &str
pub fn time_range(&self) -> ResolvedTimeRange
pub fn times_array(&self) -> &ArrowPrimitiveArray<i64>
pub fn times_raw(&self) -> &[i64]
pub fn times(&self) -> impl DoubleEndedIterator<Item = TimeInt> + '_
pub fn num_rows(&self) -> usize
pub fn is_empty(&self) -> bool
sourcepub fn time_range_per_component(
&self,
components: &BTreeMap<ComponentName, ArrowListArray<i32>>
-) -> BTreeMap<ComponentName, ResolvedTimeRange>
Computes the time range covered by each individual component column.
+) -> BTreeMap<ComponentName, ResolvedTimeRange>Computes the time range covered by each individual component column.
This is different from the time range covered by the TimeColumn
as a whole
because component columns are potentially sparse.
This is crucial for indexing and queries to work properly.
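The TimeColumn hunks above list the per-timeline column constructors and accessors. A minimal sketch using the sequence constructor, assuming `TimeColumn` is re-exported at the crate root.

```rust
use re_chunk::TimeColumn;

fn frame_column() -> TimeColumn {
    // One timestamp per row, on a sequence timeline named "frame".
    let column = TimeColumn::new_sequence("frame", [0i64, 1, 2, 3]);

    debug_assert_eq!(column.num_rows(), 4);
    debug_assert!(!column.is_empty());

    // Raw i64 timestamps, one per row, in logging order (not necessarily sorted).
    let _times: &[i64] = column.times_raw();

    column
}
```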
diff --git a/docs/rust/head/re_chunk/external/index.html b/docs/rust/head/re_chunk/external/index.html
index 1e6939579aa95..26a1e02aa213f 100644
--- a/docs/rust/head/re_chunk/external/index.html
+++ b/docs/rust/head/re_chunk/external/index.html
@@ -1 +1 @@
-Re-exports§
pub use arrow2;
pub use re_log_types;
pub use crossbeam;
Struct re_chunk::helpers::UnitChunkShared
source · pub struct UnitChunkShared(ChunkShared);
Expand description
A ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Tuple Fields§
§0: ChunkShared
Implementations§
pub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
pub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Tuple Fields§
§0: ChunkShared
Implementations§
pub fn into_chunk(self) -> ChunkShared
sourcepub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
pub fn index(&self, timeline: &Timeline) -> Option<(TimeInt, RowId)>
Returns the index ((TimeInt, RowId)
pair) of the single row within, on the given timeline.
Returns the single static index if the chunk is static.
sourcepub fn row_id(&self) -> Option<RowId>
pub fn row_id(&self) -> Option<RowId>
Returns the RowId
of the single row within, on the given timeline.
Returns the single static RowId
if the chunk is static.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -46,8 +46,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline +) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -55,10 +55,10 @@
fn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
fn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
fn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -84,8 +84,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
pub fn num_events_for_component( +) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -69,7 +69,7 @@
This is different from the RowId
range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
-pub fn id(&self) -> ChunkId
pub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
pub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+pub fn id(&self) -> ChunkId
pub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
pub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
pub fn num_controls(&self) -> usize
pub fn num_timelines(&self) -> usize
pub fn num_components(&self) -> usize
pub fn num_rows(&self) -> usize
pub fn is_empty(&self) -> bool
pub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
pub fn row_id_range(&self) -> Option<(RowId, RowId)>
pub fn is_static(&self) -> bool
pub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
pub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
pub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
pub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
pub fn is_static(&self) -> bool
pub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
pub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
pub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
pub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
sourcepub fn sanity_check(&self) -> ChunkResult<()>
pub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
sourcepub fn component_batch_raw(
@@ -129,8 +129,8 @@
sourcepub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
pub fn to_unit(self: &ChunkShared) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
-
@@ -139,15 +139,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
pub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
pub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -155,7 +155,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
pub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
pub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
pub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
pub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
pub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
pub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
pub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -366,7 +366,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
pub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
pub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -388,7 +388,7 @@
sourcepub fn emptied(&self) -> Self
pub fn emptied(&self) -> Self
Empties the Chunk
vertically.
The result is a new Chunk
with the same columns but zero rows.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
pub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
pub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -412,7 +412,7 @@
WARNING: the returned chunk has the same old
crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
pub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-pub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
fn clone(&self) -> UnitChunkShared
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§fn heap_size_bytes(&self) -> u64
fn heap_size_bytes(&self) -> u64
self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
fn total_size_bytes(&self) -> u64
self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
fn stack_size_bytes(&self) -> u64
self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
pub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§fn clone(&self) -> UnitChunkShared
fn clone(&self) -> UnitChunkShared
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§fn heap_size_bytes(&self) -> u64
fn heap_size_bytes(&self) -> u64
self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
fn total_size_bytes(&self) -> u64
self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
fn stack_size_bytes(&self) -> u64
self
on the stack, in bytes. Read moreAuto Trait Implementations§
Blanket Implementations§
source§impl<T> BorrowMut<T> for Twhere
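The UnitChunkShared hunks above describe the single-row helpers (`index`, `row_id`, `into_chunk`). A minimal sketch, assuming the crate-root re-exports of `UnitChunkShared`, `TimeInt`, and `Timeline` listed elsewhere in this diff.

```rust
use re_chunk::{Chunk, RowId, TimeInt, Timeline, UnitChunkShared};

// A sketch only: returns the single row's `(TimeInt, RowId)` index on `timeline`,
// provided `chunk` holds exactly one row.
fn single_row_index(chunk: Chunk, timeline: &Timeline) -> Option<(TimeInt, RowId)> {
    let unit: UnitChunkShared = chunk.into_unit()?; // `None` unless the chunk is single-row
    let _row_id = unit.row_id()?;                   // the single row's `RowId`
    unit.index(timeline)                            // static chunks return their static index
}
```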
diff --git a/docs/rust/head/re_chunk/id/struct.ChunkId.html b/docs/rust/head/re_chunk/id/struct.ChunkId.html
index 0083e9316d555..d1ce4d7584a3f 100644
--- a/docs/rust/head/re_chunk/id/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/id/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk::id - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
@@ -25,24 +25,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for ChunkId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
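A quick sketch of how these id helpers compose in practice; `RowId::new()` is an assumption (the no-argument constructor the docs above say to prefer), everything else comes from the methods listed on this page:

```rust
use re_chunk::RowId;

/// Hand out `n` consecutive `RowId`s for rows assembled by hand,
/// e.g. when feeding `Chunk::from_native_row_ids`.
fn consecutive_row_ids(n: u64) -> Vec<RowId> {
    // Assumption: `RowId::new()` yields a fresh, time-based id.
    let first = RowId::new();

    // `incremented_by(i)` is documented as equivalent to calling `next()` i times,
    // so the ids stay monotonically increasing (wrapping to zero only on overflow).
    (0..n).map(|i| first.incremented_by(i)).collect()
}
```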
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
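Since the index above lists the crate's re-exports, basic inspection code can depend on `re_chunk` alone; a minimal sketch using only items shown on these pages:

```rust
// `EntityPath`, `TimeInt`, `TimePoint`, `Timeline`, ... are reachable directly through
// `re_chunk` thanks to the re-exports listed above; no direct `re_log_types` import needed.
use re_chunk::{Chunk, ChunkId, EntityPath};

fn describe(chunk: &Chunk) -> (ChunkId, &EntityPath, usize) {
    (chunk.id(), chunk.entity_path(), chunk.num_rows())
}
```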
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
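The method list under `Implementations` is elided above, so the following is only a sketch: it assumes a `LatestAtQuery::new(timeline, at)` constructor mirroring the two fields, plus `Timeline::new_sequence` and `TimeInt::new_temporal` from the re-exported `re_log_types` types.

```rust
use re_chunk::{LatestAtQuery, TimeInt, Timeline};

/// "What is the latest data at this frame?": a latest-at query pairs a timeline
/// with a single point in time, per the description above.
fn query_at_frame(frame: i64) -> LatestAtQuery {
    // Assumed constructors, see the note above.
    LatestAtQuery::new(Timeline::new_sequence("frame"), TimeInt::new_temporal(frame))
}
```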
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
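Everything used below comes from the `Implementations` block above except `Timeline::new_sequence`, `TimeInt::new_temporal` and `ResolvedTimeRange::new`, which are assumed to be provided by `re_log_types`; a minimal sketch:

```rust
use re_chunk::{RangeQuery, TimeInt, Timeline};
use re_log_types::ResolvedTimeRange;

/// Range query over frames 42..=100, tweaked via the builder-style toggles listed above.
fn frame_range_query() -> RangeQuery {
    let timeline = Timeline::new_sequence("frame"); // assumed constructor
    let range = ResolvedTimeRange::new(TimeInt::new_temporal(42), TimeInt::new_temporal(100));

    RangeQuery::new(timeline, range)
        .keep_extra_timelines(true)   // keep other timeline columns in the result
        .keep_extra_components(false) // but drop components we did not ask for
        .include_extended_bounds(false)
}
```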
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
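A small sketch tying together a few of the accessors and the sort/iteration helpers documented above; only `Timeline::new_sequence` is assumed (from the re-exported `re_log_types`):

```rust
use re_chunk::{Chunk, Timeline};

/// Walk a chunk's `(TimeInt, RowId)` indices on the "frame" timeline, sorting first
/// so downstream consumers see monotonically increasing times.
fn print_frame_indices(chunk: &Chunk) {
    let frame = Timeline::new_sequence("frame"); // assumed constructor

    // No-op copy on the happy path; otherwise the arrow data is copied and shuffled.
    // NOTE: per the docs above, the returned chunk keeps the *same* `ChunkId`.
    let sorted = chunk.sorted_by_timeline_if_unsorted(&frame);

    for (time, row_id) in sorted.iter_indices(&frame) {
        println!("{} @ {time:?}: {}", sorted.entity_path(), row_id.short_string());
    }
}
```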
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
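A hedged sketch of the batching entry points listed above; `ChunkBatcherConfig::default()` is an assumption (this page only describes the config as a set of thresholds):

```rust
use re_chunk::{Chunk, ChunkBatcher, ChunkBatcherConfig, ChunkBatcherError};

/// Feed one pre-built chunk through the batcher and request an asynchronous flush.
fn feed_batcher(chunk: Chunk) -> Result<(), ChunkBatcherError> {
    // Assumption: `ChunkBatcherConfig` implements `Default`; real code would tune
    // the size/time thresholds instead.
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())?;

    // Pre-built chunks bypass the row-coalescing stage (contrast with `push_row`,
    // which hands a `PendingRow` to the batching thread).
    batcher.push_chunk(chunk);

    // Fire-and-forget; dropping `batcher` also triggers a clean shutdown.
    batcher.flush_async();
    Ok(())
}
```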
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
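A minimal sketch of the row-oriented flow above; `RowId::new()` and the trait path `re_types_core::AsComponents` are assumptions, and the actual archetype value would come from a downstream crate such as `re_types`:

```rust
use re_chunk::{Chunk, ChunkError, EntityPath, RowId, TimePoint};
use re_types_core::AsComponents;

/// Build a single-row chunk from any archetype-like value.
fn single_row_chunk(
    entity_path: EntityPath,
    timepoint: TimePoint, // an empty timepoint makes the row static
    data: &dyn AsComponents,
) -> Result<Chunk, ChunkError> {
    Chunk::builder(entity_path)
        .with_archetype(RowId::new(), timepoint, data) // `RowId::new()` is assumed
        .build()
}
```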
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
impl<T> BorrowMut<T> for Twhere
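Since a `ChunkId` carries a wall-clock component that drives the GC ordering described above, it can be handy to mint the id up front; a small sketch assuming a `ChunkId::new()` constructor:

```rust
use re_chunk::{Chunk, ChunkBuilder, ChunkId, EntityPath};

/// Pre-allocate an explicit `ChunkId` before building, so the caller can track the
/// chunk it is about to produce.
fn builder_with_known_id(entity_path: EntityPath) -> (ChunkId, ChunkBuilder) {
    let id = ChunkId::new(); // assumed constructor

    // `builder_with_id` (documented on the `Chunk` page above) guarantees the final
    // chunk carries exactly this id, and therefore this position in the GC order.
    (id, Chunk::builder_with_id(id, entity_path))
}
```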
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
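A minimal sketch of how the `Chunk` accessors documented above compose: sorting, index iteration (borrowed and `Arc`-owned), and transport. `Timeline::new_sequence` is assumed from `re_log_types` (re-exported by `re_chunk`); everything else follows the signatures shown in this page.

```rust
use std::sync::Arc;

use re_chunk::{Chunk, Timeline};

fn inspect(mut chunk: Chunk) {
    // Copies and shuffles the underlying arrow data only if the chunk is unsorted.
    chunk.sort_if_unsorted();

    let frame = Timeline::new_sequence("frame");

    // `(TimeInt, RowId)` pairs for one timeline; the timeline is ignored for static chunks.
    for (time, row_id) in chunk.iter_indices(&frame) {
        println!("{time:?} -> {row_id:?}");
    }

    // The owned variant outlives `self`; the tradeoff is that `self` must be an `Arc`.
    let chunk = Arc::new(chunk);
    let _indices: Vec<_> = Arc::clone(&chunk).iter_indices_owned(&frame).collect();

    // Prepare for transport; sorting first is recommended by the docs above.
    if let Ok(_transport) = chunk.to_transport() {
        // send it over the wire, write it to disk, …
    }
}
```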
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
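A small sketch of the batching flow described above, assuming `ChunkBatcherConfig` implements `Default` and that the batcher's error type converts into `Box<dyn Error>`; the calls themselves (`new`, `push_chunk`, `push_row`, `flush_async`) are the ones documented here.

```rust
use re_chunk::{Chunk, ChunkBatcher, ChunkBatcherConfig};

fn feed_batcher(chunks: Vec<Chunk>) -> Result<(), Box<dyn std::error::Error>> {
    // Keep the batcher in scope: dropping it triggers a clean shutdown of the pipeline.
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())?;

    for chunk in chunks {
        // Hand fully-formed chunks straight to the pipeline…
        batcher.push_chunk(chunk);
    }
    // …or push individual rows with `push_row(entity_path, pending_row)`.

    // Initiate a flush without blocking the caller.
    batcher.flush_async();
    Ok(())
}
```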
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
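A sketch of the row-oriented construction path: the builder methods are the ones documented above, while `re_types::archetypes::Points3D` is borrowed from the wider Rerun ecosystem purely as an example `AsComponents` value, and `ChunkError` is assumed to convert into `Box<dyn Error>`.

```rust
use re_chunk::{Chunk, EntityPath, RowId, TimePoint};
use re_types::archetypes::Points3D;

fn build_static_chunk() -> Result<Chunk, Box<dyn std::error::Error>> {
    let points = Points3D::new([(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]);

    let chunk = Chunk::builder(EntityPath::from("my/points"))
        // An empty `TimePoint` makes this a static row.
        .with_archetype(RowId::new(), TimePoint::default(), &points)
        // Fails if the resulting chunk would be malformed (cf. `Chunk::sanity_check`).
        .build()?;

    Ok(chunk)
}
```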
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
+
Tuple Fields§
§0: Tuid
Implementations§
source§impl ChunkId
impl ChunkId
pub const ZERO: Self = _
pub const MAX: Self = _
sourcepub fn next(&self) -> Self
pub fn next(&self) -> Self
Returns the next logical ChunkId
.
Beware: wrong usage can easily lead to conflicts.
Prefer ChunkId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
pub fn nanoseconds_since_epoch(&self) -> u64
When the ChunkId
was created, in nanoseconds since unix epoch.
pub fn from_u128(id: u128) -> Self
pub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
pub fn next(&self) -> Tuid
pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Tuid
pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
pub fn incremented_by(&self, n: u64) -> Tuid
pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
sourcepub fn short_string(&self) -> String
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
+pub fn short_string(&self) -> String
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
Trait Implementations§
source§impl<'de> Deserialize<'de> for ChunkId
impl<'de> Deserialize<'de> for ChunkId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
source§impl Loggable for ChunkId
impl Loggable for ChunkId
type Name = ComponentName
source§fn name() -> Self::Name
fn name() -> Self::Name
rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
fn arrow_datatype() -> DataType
arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
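A tiny sketch of the ID semantics spelled out above for `RowId`/`ChunkId` and the underlying `Tuid`: prefer `new` for fresh, collision-free IDs and reach for `next`/`incremented_by` only deliberately. Crate-root re-exports of both types are assumed.

```rust
use re_chunk::{ChunkId, RowId};

fn id_semantics() {
    let row = RowId::new();
    let follow_up = row.next(); // beware: wrong usage can easily lead to conflicts
    assert_eq!(row.incremented_by(1), follow_up); // `incremented_by(n)` == calling `next` n times

    let chunk_id = ChunkId::new();
    // Creation time, i.e. the upper 64 bits of the underlying `Tuid`.
    let _ns = chunk_id.nanoseconds_since_epoch();
}
```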
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
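A sketch of a latest-at query. Only the `timeline`/`at` fields are shown above, so the `LatestAtQuery::new(timeline, at)` constructor is an assumption, as is `Timeline::new_sequence` from `re_log_types`.

```rust
use re_chunk::{LatestAtQuery, Timeline};

fn latest_at_frame_42() -> LatestAtQuery {
    let frame = Timeline::new_sequence("frame");
    // "Get the latest version of the data available at this time."
    LatestAtQuery::new(frame, 42)
}
```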
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
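And the range-query counterpart, using the builder-style toggles documented above; `ResolvedTimeRange::new` and `Timeline::new_sequence` are assumed from `re_log_types`.

```rust
use re_chunk::{RangeQuery, Timeline};
use re_log_types::ResolvedTimeRange;

fn frames_10_to_20() -> RangeQuery {
    let frame = Timeline::new_sequence("frame");
    // Everything within [10, 20], plus the latest value before the start of the interval.
    RangeQuery::new(frame, ResolvedTimeRange::new(10, 20))
        .keep_extra_timelines(true)
        .include_extended_bounds(false)
}
```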
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will computea the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
fn to_arrow_opt<'a>(
diff --git a/docs/rust/head/re_chunk/id/struct.RowId.html b/docs/rust/head/re_chunk/id/struct.RowId.html
index d46e2e01e6cc5..85e649b29242c 100644
--- a/docs/rust/head/re_chunk/id/struct.RowId.html
+++ b/docs/rust/head/re_chunk/id/struct.RowId.html
@@ -1,4 +1,4 @@
-RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
+RowId in re_chunk::id - Rust pub struct RowId(pub(crate) Tuid);
Expand description
A unique ID for a row’s worth of data within a chunk.
There is no relationship whatsoever between a ChunkId
and the RowId
s within that chunk.
§Uniqueness
Duplicated RowId
s within a single recording is considered undefined behavior.
@@ -24,8 +24,8 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
Tuple Fields§
§0: Tuid
Implementations§
source§impl RowId
sourcepub const ZERO: Self = _
sourcepub const MAX: Self = _
sourcepub fn from_tuid(tuid: Tuid) -> Self
sourcepub fn next(&self) -> Self
Returns the next logical RowId
.
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn incremented_by(&self, n: u64) -> Self
Returns the n
-next logical RowId
.
@@ -34,24 +34,24 @@ §<
Beware: wrong usage can easily lead to conflicts.
Prefer RowId::new
when unsure.
sourcepub fn nanoseconds_since_epoch(&self) -> u64
When the RowId
was created, in nanoseconds since unix epoch.
-sourcepub fn from_u128(id: u128) -> Self
sourcepub fn as_u128(&self) -> u128
Methods from Deref<Target = Tuid>§
Methods from Deref<Target = Tuid>§
pub fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.
NOTE: This crate cannot depend on re_types_core
, therefore the actual implementation of
SizeBytes for Tuid
lives in re_types_core
and calls this method.
-sourcepub const ZERO: Tuid = _
sourcepub const MAX: Tuid = _
sourcepub fn as_u128(&self) -> u128
sourcepub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
-The upper 64 bits of the Tuid
.
-pub const ZERO: Tuid = _
pub const MAX: Tuid = _
pub fn as_u128(&self) -> u128
pub fn nanoseconds_since_epoch(&self) -> u64
Approximate nanoseconds since unix epoch.
+The upper 64 bits of the [Tuid
].
+pub fn next(&self) -> Tuid
Returns the next logical [Tuid
].
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical Tuid
.
-This is equivalent to calling Tuid::next
n
times.
+Prefer [Tuid::new
] when unsure.
+pub fn incremented_by(&self, n: u64) -> Tuid
Returns the n
-next logical [Tuid
].
+This is equivalent to calling [Tuid::next
] n
times.
Wraps the monotonically increasing back to zero on overflow.
Beware: wrong usage can easily lead to conflicts.
-Prefer Tuid::new
when unsure.
-sourcepub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
-Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
+Prefer [Tuid::new
] when unsure.
+
pub fn short_string(&self) -> String
A shortened string representation of the Tuid
.
+Trait Implementations§
source§impl<'de> Deserialize<'de> for RowId
source§fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read moresource§impl Loggable for RowId
§type Name = ComponentName
source§fn name() -> Self::Name
The fully-qualified name of this loggable, e.g. rerun.datatypes.Vec2D
.source§fn arrow_datatype() -> DataType
The underlying [arrow2::datatypes::DataType
], excluding datatype extensions.source§fn to_arrow_opt<'a>(
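The `RowId` API documented in the diff above composes as in the following minimal sketch. It is not part of the generated docs; it assumes `RowId` is re-exported at the `re_chunk` crate root, that the usual `Debug`/`PartialEq` derives exist, and that `RowId::new()` (referenced above but not listed in this excerpt) is the fresh-ID constructor.

```rust
use re_chunk::RowId;

fn demo_row_ids() {
    // Freshly minted, globally unique ID -- the recommended default (assumed constructor).
    let first = RowId::new();

    // `next` yields the next logical ID; only safe if you fully control ID allocation.
    let second = first.next();

    // `incremented_by(n)` is equivalent to calling `next` n times.
    assert_eq!(first.incremented_by(2), second.next());

    // The creation time is encoded in the upper bits of the ID.
    let _nanos: u64 = first.nanoseconds_since_epoch();

    // Round-trip through the raw 128-bit representation.
    assert_eq!(RowId::from_u128(first.as_u128()), first);
}
```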
diff --git a/docs/rust/head/re_chunk/index.html b/docs/rust/head/re_chunk/index.html
index cf18216cce11d..9b0b0ab612c00 100644
--- a/docs/rust/head/re_chunk/index.html
+++ b/docs/rust/head/re_chunk/index.html
@@ -4,6 +4,6 @@ §Feature
serde
— Enable (de)serialization using serde.
arrow
— Enable conversion to and from arrow-rs types
-
Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
+Re-exports§
pub use arrow2::array::Array as ArrowArray;
pub use re_log_types::EntityPath;
pub use re_log_types::TimeInt;
pub use re_log_types::TimePoint;
pub use re_log_types::Timeline;
pub use re_log_types::TimelineName;
pub use re_types_core::ComponentName;
Modules§
Structs§
- Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
- Implements an asynchronous batcher that coalesces
PendingRow
s into Chunk
s based upon
the thresholds defined in the associated ChunkBatcherConfig
. - Defines the different thresholds of the associated
ChunkBatcher
. - Helper to incrementally build a
Chunk
. - The actual iterator implementation for
Chunk::iter_component
. - The underlying item type for
ChunkComponentIter
. - A unique ID for a
crate::Chunk
. - A query at a given time, for a given timeline.
- A single row’s worth of data (i.e. a single log call).
- A query over a time range, for a given timeline.
- A unique ID for a row’s worth of data within a chunk.
- Helper to incrementally build a
TimeColumn
. - A
Chunk
that is ready for transport. Obtained by calling Chunk::to_transport
. - A
ChunkShared
that is guaranteed to always contain a single row’s worth of data.
Enums§
- Errors that can occur when creating/manipulating a
ChunkBatcher
. - Errors that can occur when creating/manipulating a
Chunk
s, directly or indirectly through
the use of a crate::ChunkBatcher
.
Type Aliases§
- A simple type alias for an
Arc<Chunk>
.
\ No newline at end of file
diff --git a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
index ac19137381894..55bcea92ca343 100644
--- a/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
+++ b/docs/rust/head/re_chunk/iter/struct.ChunkIndicesIter.html
@@ -2,7 +2,7 @@
chunk: Arc<Chunk>,
time_column: Option<TimeColumn>,
index: usize,
-}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
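The `RangeQuery` constructor and builder-style setters shown above chain together as in this sketch. It assumes `ResolvedTimeRange` lives in `re_log_types`, which the excerpt does not state; `Timeline` is re-exported by `re_chunk` per the module index.

```rust
use re_chunk::{range::RangeQuery, Timeline};
use re_log_types::ResolvedTimeRange; // assumed import path

/// Hypothetical helper: a range query that also keeps the extra timelines and
/// components around, mirroring the setters listed above.
fn verbose_range_query(timeline: Timeline, range: ResolvedTimeRange) -> RangeQuery {
    RangeQuery::new(timeline, range)
        .keep_extra_timelines(true)
        .keep_extra_components(true)
        .include_extended_bounds(false)
}
```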
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
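As a hedged illustration of the event-counting helpers above (assuming, as seems likely but is not stated in this excerpt, that `None` means the component column is absent):

```rust
use re_chunk::{Chunk, ComponentName};

/// Hypothetical helper: how many events ("cells") does `chunk` hold for `component`?
/// Treats an absent component column as 0 events -- an assumption, see above.
fn events_for(chunk: &Chunk, component: ComponentName) -> u64 {
    chunk.num_events_for_component(component).unwrap_or(0)
}
```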
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
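A minimal sketch tying the accessor methods above together; it assumes `EntityPath` implements `Display`, which this excerpt does not show.

```rust
use re_chunk::Chunk;

/// Hypothetical one-line summary of a chunk's shape, using the accessors listed above.
fn describe(chunk: &Chunk) -> String {
    format!(
        "entity={} rows={} timelines={} components={} static={}",
        chunk.entity_path(),
        chunk.num_rows(),
        chunk.num_timelines(),
        chunk.num_components(),
        chunk.is_static(),
    )
}
```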
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
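The two indexing iterators above differ only in ownership. A short sketch (assuming `RowId` is re-exported at the crate root):

```rust
use std::sync::Arc;

use re_chunk::{Chunk, RowId, TimeInt, Timeline};

/// Borrowing form: the iterator is tied to `chunk`'s lifetime.
fn first_index(chunk: &Chunk, timeline: &Timeline) -> Option<(TimeInt, RowId)> {
    chunk.iter_indices(timeline).next()
}

/// Owning form: takes an `Arc<Chunk>`, so the iterator can be returned and
/// passed around freely, as described above.
fn all_indices(chunk: Arc<Chunk>, timeline: &Timeline) -> Vec<(TimeInt, RowId)> {
    chunk.iter_indices_owned(timeline).collect()
}
```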
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
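A hedged sketch of the sorting helpers above, heeding the `ChunkId` warning; `ChunkId::new()` is an assumption, as it is not listed in this excerpt.

```rust
use re_chunk::{Chunk, ChunkId, Timeline};

/// Hypothetical helper: a copy of `chunk` sorted by (`timeline`, `RowId`), given a
/// fresh id since, as warned above, the sorted copy keeps the old `ChunkId`.
fn sorted_copy(chunk: &Chunk, timeline: &Timeline) -> Chunk {
    chunk
        .sorted_by_timeline_if_unsorted(timeline)
        .with_id(ChunkId::new()) // assumed constructor, not shown above
}
```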
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
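The transport conversion pair above round-trips as follows (a sketch assuming `ChunkResult` is exported at the crate root):

```rust
use re_chunk::{Chunk, ChunkResult};

/// Hypothetical round-trip through the transport representation described above.
fn transport_roundtrip(chunk: &Chunk) -> ChunkResult<Chunk> {
    // The docs above suggest sorting the chunk before transport.
    let transport = chunk.to_transport()?;
    Chunk::from_transport(&transport)
}
```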
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
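Putting the `ChunkBatcher` lifecycle described above together; this sketch assumes `ChunkBatcherConfig: Default` and crate-root re-exports, neither of which is shown in this excerpt.

```rust
use re_chunk::{Chunk, ChunkBatcher, ChunkBatcherConfig};

/// Hypothetical setup: spawn a batcher, feed it a pre-built chunk, and request a flush.
fn run_batcher(chunk: Chunk) {
    let batcher = match ChunkBatcher::new(ChunkBatcherConfig::default()) {
        Ok(batcher) => batcher,
        Err(err) => {
            eprintln!("failed to spawn batcher: {err}");
            return;
        }
    };

    batcher.push_chunk(chunk);
    batcher.flush_async();
    // Dropping `batcher` here triggers a clean shutdown of the batching thread,
    // per the docs above.
}
```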
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
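The `ChunkBuilder` methods above combine with `Chunk::builder` as in this sketch; the crate-root paths for `RowId` and `ChunkResult`, the `re_types_core::AsComponents` import, and `RowId::new()` are assumptions not confirmed by the excerpt.

```rust
use re_chunk::{Chunk, EntityPath, RowId, TimePoint};
use re_types_core::AsComponents; // assumed import path

/// Hypothetical helper: a single-row chunk for `entity_path` built from any archetype,
/// using the builder methods listed above.
fn single_row_chunk(
    entity_path: EntityPath,
    timepoint: TimePoint,
    archetype: &dyn AsComponents,
) -> re_chunk::ChunkResult<Chunk> {
    Chunk::builder(entity_path)
        .with_archetype(RowId::new(), timepoint, archetype)
        .build()
}
```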
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
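The iteration and sorting methods documented in the page above compose naturally. A minimal sketch, using only the signatures shown in this diff; the function name, the printing, and the assumption that `Timeline`/`ComponentName` are re-exported at the crate root are illustrative, not taken from the diff:

```rust
use re_chunk::{Chunk, ComponentName, Timeline};

// Walk the (TimeInt, RowId) index of a single component on a single timeline,
// sorting a copy of the chunk first so the traversal is in ascending time order.
fn walk_component_index(chunk: &Chunk, timeline: &Timeline, component_name: &ComponentName) {
    // No-op on the happy path where the timeline is already sorted.
    let sorted = chunk.sorted_by_timeline_if_unsorted(timeline);
    for (time, row_id) in sorted.iter_component_indices(timeline, component_name) {
        println!("{time:?} -> {row_id:?}");
    }
}
```

Per the warning above, the sorted copy keeps the original `crate::ChunkId`; call `with_id` on it if both copies are going to be stored side by side.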
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
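A sketch of how this batching API might be driven end to end, assuming `ChunkBatcherConfig` implements `Default` and that these types are re-exported at the crate root (neither is shown in this diff):

```rust
use re_chunk::{
    Chunk, ChunkBatcher, ChunkBatcherConfig, ChunkBatcherResult, EntityPath, PendingRow,
};

// Push pre-assembled rows and chunks into a batcher, then kick off an async flush.
fn feed_batcher(
    rows: Vec<(EntityPath, PendingRow)>,
    chunks: Vec<Chunk>,
) -> ChunkBatcherResult<()> {
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())?;

    for (entity_path, row) in rows {
        // The row's size is computed on the batching thread, not here.
        batcher.push_row(entity_path, row);
    }
    for chunk in chunks {
        batcher.push_chunk(chunk);
    }

    // Returns immediately; keep `batcher` alive as long as needed, since
    // dropping it triggers a clean shutdown of the pipeline.
    batcher.flush_async();
    Ok(())
}
```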
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
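The builder methods above chain into a one-liner for the common case. A sketch only: `RowId::new()`, `TimePoint::default()` (an empty timepoint, i.e. static data), and the `re_types_core::AsComponents` import path are assumptions that do not appear in this diff:

```rust
use re_chunk::{Chunk, ChunkResult, EntityPath, RowId, TimePoint};
use re_types_core::AsComponents;

// Build a single-row chunk from any archetype-like value.
fn single_row_chunk(
    entity_path: EntityPath,
    as_components: &dyn AsComponents,
) -> ChunkResult<Chunk> {
    Chunk::builder(entity_path)
        .with_archetype(RowId::new(), TimePoint::default(), as_components)
        .build()
}
```

`with_component_batches` and `with_sparse_component_batches` follow the same shape when loose component batches are available instead of a full archetype.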
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
+}Fields§
§chunk: Arc<Chunk>
§time_column: Option<TimeColumn>
§index: usize
Trait Implementations§
source§impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
Advances the iterator and returns the next value. Read moresource§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
🔬This is a nightly-only experimental API. (iter_next_chunk
)Advances the iterator and returns an array containing the next N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
Returns the bounds on the remaining length of the iterator. Read more1.0.0 · source§fn count(self) -> usizewhere
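The `ChunkIndicesIter` shown above is the concrete type behind `Chunk::iter_indices_owned`, which trades a required `Arc` for an iterator that is not tied to a borrow of the chunk. A minimal sketch; the crate-root re-exports of `TimeInt`/`Timeline`/`RowId` are assumptions:

```rust
use std::sync::Arc;

use re_chunk::{Chunk, RowId, TimeInt, Timeline};

// The owned iterator can be returned from the function that created it because
// it holds its own `Arc<Chunk>`; the borrowing `iter_indices` could not be.
fn owned_indices(
    chunk: Arc<Chunk>,
    timeline: &Timeline,
) -> impl Iterator<Item = (TimeInt, RowId)> {
    chunk.iter_indices_owned(timeline)
}
```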
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
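A sketch of constructing such a query. Assumptions not shown in this diff: `LatestAtQuery::new`, `Timeline::new_sequence`, `TimeInt::MAX`, and the crate-root re-exports:

```rust
use re_chunk::{LatestAtQuery, TimeInt, Timeline};

// Build a latest-at query for the most recent value on a "frame" timeline.
fn latest_frame_query() -> LatestAtQuery {
    let timeline = Timeline::new_sequence("frame");
    LatestAtQuery::new(timeline, TimeInt::MAX)
}
```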
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
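The builder-style setters listed above chain onto the constructor. A sketch; `Timeline::new_sequence`, `ResolvedTimeRange::new`, and the exact re-export paths are assumptions not shown in this diff:

```rust
use re_chunk::{RangeQuery, Timeline};
use re_log_types::ResolvedTimeRange;

// Build a range query over a window of a "frame" timeline, tweaking its options.
fn frame_range_query(start: i64, end: i64) -> RangeQuery {
    let timeline = Timeline::new_sequence("frame");
    RangeQuery::new(timeline, ResolvedTimeRange::new(start, end))
        .keep_extra_timelines(true)
        .keep_extra_components(false)
        .include_extended_bounds(true)
}
```

`RangeQuery::everything(timeline)` is the shorthand shown above for querying the whole timeline.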
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
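Putting the inspection accessors from this page together; a sketch that assumes `ComponentName` is `Copy + Display` as elsewhere in Rerun, with an illustrative output format:

```rust
use re_chunk::Chunk;

// Print a quick structural summary of a chunk using only accessors listed above.
fn describe(chunk: &Chunk) {
    println!(
        "rows={} columns={} (timelines={}, components={}) static={} time-sorted={}",
        chunk.num_rows(),
        chunk.num_columns(),
        chunk.num_timelines(),
        chunk.num_components(),
        chunk.is_static(),
        chunk.is_time_sorted(),
    );

    for component_name in chunk.component_names() {
        // `None` if the component column is missing from this chunk.
        if let Some(num_events) = chunk.num_events_for_component(component_name) {
            println!("  {component_name}: {num_events} events");
        }
    }

    // Invariant check; the costly parts only run in debug builds.
    if let Err(err) = chunk.sanity_check() {
        eprintln!("malformed chunk: {err}");
    }
}
```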
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will computea the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
source§impl Iterator for ChunkIndicesIter
impl Iterator for ChunkIndicesIter
source§fn next(&mut self) -> Option<Self::Item>
fn next(&mut self) -> Option<Self::Item>
source§fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
fn next_chunk<const N: usize>(
&mut self
) -> Result<[Self::Item; N], IntoIter<Self::Item, N>>where
Self: Sized,
iter_next_chunk
)N
values. Read more1.0.0 · source§fn size_hint(&self) -> (usize, Option<usize>)
fn size_hint(&self) -> (usize, Option<usize>)
1.0.0 · source§fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
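A minimal usage sketch of the batcher lifecycle; the import paths and ChunkBatcherConfig::default() are assumptions, since the configuration type is not documented in this excerpt:

    use re_chunk::{Chunk, ChunkBatcher, ChunkBatcherConfig, ChunkBatcherResult};

    fn batch_one(chunk: Chunk) -> ChunkBatcherResult<()> {
        // Keep the batcher in scope: dropping it triggers a clean shutdown.
        let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())?; // `default()` assumed
        batcher.push_chunk(chunk);
        batcher.flush_async(); // returns immediately; the flush runs on the batching thread
        Ok(())
    } // batcher dropped here -> clean shutdown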
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
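For instance, a single-row chunk could be assembled like this. A sketch only: the import paths, RowId::new(), and the empty/default TimePoint (which makes the row static) are assumptions not documented on this page:

    use re_chunk::{Chunk, ChunkResult, EntityPath, RowId};
    use re_log_types::TimePoint;
    use re_types_core::ComponentBatch;

    fn single_row_chunk(
        entity_path: EntityPath,
        batch: &dyn ComponentBatch,
    ) -> ChunkResult<Chunk> {
        Chunk::builder(entity_path)
            .with_component_batches(
                RowId::new(),         // assumed constructor
                TimePoint::default(), // empty timepoint => static data (assumed)
                [batch],
            )
            .build()
    }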
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
fn count(self) -> usizewhere
diff --git a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
index 768ae11c280af..51e92cb84095e 100644
--- a/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
+++ b/docs/rust/head/re_chunk/latest_at/struct.LatestAtQuery.html
@@ -1,10 +1,10 @@
LatestAtQuery in re_chunk::latest_at - Rust Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
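For context, such a query is typically built through a constructor rather than the private fields; a hedged sketch, since the Implementations section is elided in this excerpt and the exact signature is not shown here:

    use re_chunk::LatestAtQuery;
    use re_log_types::{TimeInt, Timeline};

    fn latest_on(timeline: Timeline) -> LatestAtQuery {
        // Assumed constructor: query for the latest data on this timeline.
        LatestAtQuery::new(timeline, TimeInt::MAX)
    }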
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
RangeQuery in re_chunk::range - Rust Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the
interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
-sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
+Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
+sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
+sourcepub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
-sourcepub fn timeline(&self) -> Timeline
sourcepub fn range(&self) -> ResolvedTimeRange
sourcepub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
Returns a copy of the value. Read more1.0.0 · source§fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source
. Read moresource§impl Debug for RangeQuery
source§impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
This method tests for self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
This method tests for !=
. The default implementation is almost always
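Putting the builder-style options listed above together (a sketch; the import paths, Timeline::new_sequence, and TimeInt::new_temporal / ResolvedTimeRange::new are assumed constructors from re_log_types):

    use re_chunk::RangeQuery;
    use re_log_types::{ResolvedTimeRange, TimeInt, Timeline};

    fn frames_10_to_20() -> RangeQuery {
        let timeline = Timeline::new_sequence("frame"); // assumed constructor
        let range = ResolvedTimeRange::new(             // assumed constructor
            TimeInt::new_temporal(10),
            TimeInt::new_temporal(20),
        );
        RangeQuery::new(timeline, range)
            .keep_extra_timelines(true)
            .keep_extra_components(false)
            .include_extended_bounds(false)
    }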
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
Chunk in re_chunk - Rust pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@
Its time columns might or might not be ascendingly sorted, depending on how the data was logged.
This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the
hot path (e.g. during garbage collection).
§is_sorted: bool
Is the chunk as a whole sorted by RowId
?
§row_ids: StructArray
The respective RowId
s for each row of data.
-§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+§timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
§components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
-Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
-sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+Implementations§
source§impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
+sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@
sourcepub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@
I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline
+) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
sourcefn num_events_cumulative_per_unique_time_sorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
+) -> Vec<(TimeInt, u64)>
sourcefn num_events_cumulative_per_unique_time_unsorted(
&self,
time_column: &TimeColumn
-) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
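A small counting sketch; the component name string and the ComponentName::from conversion are only examples, not taken from this page:

    use re_chunk::Chunk;
    use re_types_core::ComponentName;

    fn count_positions(chunk: &Chunk) -> u64 {
        // `None` means the chunk has no column for that component at all.
        chunk
            .num_events_for_component(ComponentName::from("rerun.components.Position3D"))
            .unwrap_or(0)
    }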
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
source§impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
-sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
sourcepub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
-source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
sourcepub fn id(&self) -> ChunkId
sourcepub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
sourcepub fn num_controls(&self) -> usize
sourcepub fn num_timelines(&self) -> usize
sourcepub fn num_components(&self) -> usize
sourcepub fn num_rows(&self) -> usize
sourcepub fn is_empty(&self) -> bool
sourcepub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
@@ -147,8 +147,8 @@
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
+sourcepub fn is_static(&self) -> bool
sourcepub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
sourcepub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
sourcepub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
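A quick inspection helper built from the accessors above and the sanity check documented just below (sketch only; import paths assumed):

    use re_chunk::{Chunk, ChunkResult};

    fn describe(chunk: &Chunk) -> ChunkResult<()> {
        chunk.sanity_check()?; // costly checks only run in debug builds
        println!(
            "{} rows x {} columns ({} timelines, {} components), static: {}",
            chunk.num_rows(),
            chunk.num_columns(),
            chunk.num_timelines(),
            chunk.num_components(),
            chunk.is_static(),
        );
        println!("max timepoint: {:?}", chunk.timepoint_max());
        Ok(())
    }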
source§impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
-sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
+sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
+) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
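A sketch of driving these iterators; the timeline and component name are placeholder examples and the constructors are assumed:

    use re_chunk::Chunk;
    use re_log_types::Timeline;
    use re_types_core::ComponentName;

    fn dump_indices(chunk: &Chunk) {
        let frame = Timeline::new_sequence("frame"); // assumed constructor
        // All rows, in the chunk's current (possibly unsorted) order:
        for (time, row_id) in chunk.iter_indices(&frame) {
            println!("{time:?} @ {row_id:?}");
        }
        // Only rows that actually contain data for one component:
        let name = ComponentName::from("rerun.components.Color"); // example name
        for (time, _row_id) in chunk.iter_component_indices(&frame, &name) {
            println!("color logged at {time:?}");
        }
    }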
@@ -328,8 +328,8 @@
source§impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
+ timeline: &Timeline
+) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
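Because the owned variant takes self: Arc<Self>, the iterator can be kept around after the call site; a sketch (Timeline::new_sequence assumed):

    use std::sync::Arc;
    use re_chunk::Chunk;
    use re_log_types::Timeline;

    fn dump_owned(chunk: Arc<Chunk>) {
        let frame = Timeline::new_sequence("frame"); // assumed constructor
        // The iterator holds its own Arc to the chunk, so it does not borrow `chunk`.
        let indices: Vec<_> = chunk.iter_indices_owned(&frame).collect();
        println!("{} indexed rows", indices.len());
    }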
@@ -398,12 +398,12 @@
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
+sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
@@ -426,7 +426,7 @@
run out of bounds.
This can result in an empty Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
+sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name
is not found within the Chunk
, the end result will be the same as the
current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
+sourcepub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk
horizontally by keeping only the selected timelines.
The result is a new Chunk
with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk
, the end result will be the same as the
@@ -464,7 +464,7 @@
The result is a new Chunk
with the same number of rows and the same index columns, but
no components.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
-sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
+sourcepub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the
Chunk
is sorted on that index, the remaining values in the index column will be unique.
- If the
Chunk
has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
source§impl Chunk
sourcepub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk
for transport.
It is probably a good idea to sort the chunk first.
-sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
Trait Implementations§
source§impl PartialEq for Chunk
source§fn eq(&self, other: &Self) -> bool
This method tests for self
and other
values to be equal, and is used
+sourcepub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
source§impl Chunk
sourcepub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
sourcepub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
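A round-trip sketch through the transport layer, following the advice above to sort first (import paths assumed):

    use re_chunk::{Chunk, ChunkResult};

    fn roundtrip(chunk: &mut Chunk) -> ChunkResult<Chunk> {
        chunk.sort_if_unsorted(); // "probably a good idea" before transport, per the docs above
        let transport = chunk.to_transport()?;
        Chunk::from_transport(&transport)
    }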
Trait Implementations§
source§impl PartialEq for Chunk
source§impl SizeBytes for Chunk
source§fn heap_size_bytes(&self) -> u64
Returns the total size of self
on the heap, in bytes.source§fn total_size_bytes(&self) -> u64
Returns the total size of self
in bytes, accounting for both stack and heap space.source§fn stack_size_bytes(&self) -> u64
Returns the total size of self
on the stack, in bytes. Read moreAuto Trait Implementations§
§impl !Freeze for Chunk
§impl !RefUnwindSafe for Chunk
§impl Send for Chunk
§impl Sync for Chunk
§impl Unpin for Chunk
§impl !UnwindSafe for Chunk
Blanket Implementations§
source§impl<T> Az for T
source§fn az<Dst>(self) -> Dstwhere
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
source§impl ChunkBatcher
sourcepub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher
using the passed in config
.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the
batcher.
-sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
+sourcepub fn push_chunk(&self, chunk: Chunk)
sourcepub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow
down the batching pipeline.
This will computea the size of the row from the batching thread!
See ChunkBatcher
docs for ordering semantics and multithreading guarantees.
sourcepub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
ChunkBuilder in re_chunk - Rust Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk
.
Can be created using Chunk::builder
.
-Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
source§impl ChunkBuilder
sourcepub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder
.
See also Chunk::builder
.
sourcepub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
sourcepub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
sourcepub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
sourcepub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch
.
sourcepub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatch
es.
sourcepub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatch
es.
sourcepub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk
.
diff --git a/docs/rust/head/re_chunk/struct.ChunkId.html b/docs/rust/head/re_chunk/struct.ChunkId.html
index ec8d274b07f74..e84c48265705d 100644
--- a/docs/rust/head/re_chunk/struct.ChunkId.html
+++ b/docs/rust/head/re_chunk/struct.ChunkId.html
@@ -1,4 +1,4 @@
-ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
+ChunkId in re_chunk - Rust pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk
.
Chunk
s are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk
is made up of rows, which are themselves uniquely identified by
their RowId
.
@@ -15,7 +15,7 @@ §<
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future:
think carefully about your RowId
s in these cases.
-
Tuple Fields§
§0: Tuid
Implementations§
Struct re_chunk::latest_at::LatestAtQuery
source · pub struct LatestAtQuery {
- timeline: Timeline,
- at: TimeInt,
+ timeline: Timeline,
+ at: TimeInt,
}
Expand description
A query at a given time, for a given timeline.
Get the latest version of the data available at this time.
-Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
fn clone(&self) -> LatestAtQuery
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for LatestAtQuery
impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
impl Hash for LatestAtQuery
1.3.0 · source§fn hash_slice<H>(data: &[Self], state: &mut H)where
+
fn hash_slice<H>(data: &[Self], state: &mut H)where
+
Fields§
§timeline: Timeline
§at: TimeInt
Implementations§
Trait Implementations§
source§impl Clone for LatestAtQuery
impl Clone for LatestAtQuery
source§fn clone(&self) -> LatestAtQuery
fn clone(&self) -> LatestAtQuery
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for LatestAtQuery
impl Debug for LatestAtQuery
source§impl Hash for LatestAtQuery
impl Hash for LatestAtQuery
source§impl PartialEq for LatestAtQuery
impl PartialEq for LatestAtQuery
source§fn eq(&self, other: &LatestAtQuery) -> bool
fn eq(&self, other: &LatestAtQuery) -> bool
self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
fn ne(&self, other: &Rhs) -> bool
!=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/range/struct.RangeQuery.html b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
index 8c0b3c6537ac6..8603fc90c0aa3 100644
--- a/docs/rust/head/re_chunk/range/struct.RangeQuery.html
+++ b/docs/rust/head/re_chunk/range/struct.RangeQuery.html
@@ -1,18 +1,18 @@
Struct re_chunk::range::RangeQuery
source · pub struct RangeQuery {
- pub timeline: Timeline,
- pub range: ResolvedTimeRange,
+ pub timeline: Timeline,
+ pub range: ResolvedTimeRange,
pub options: RangeQueryOptions,
}
Expand description
A query over a time range, for a given timeline.
Get all the data within this time interval, plus the latest one before the start of the interval.
Motivation: all data is considered alive until the next logging to the same component path.
-Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
pub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
pub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include [TimeInt::STATIC
].
Fields§
§timeline: Timeline
§range: ResolvedTimeRange
§options: RangeQueryOptions
Implementations§
source§impl RangeQuery
impl RangeQuery
sourcepub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
pub const fn new(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
sourcepub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
pub const fn with_extras(timeline: Timeline, range: ResolvedTimeRange) -> Self
The returned query is guaranteed to never include TimeInt::STATIC
.
Keeps all extra timelines and components around.
-pub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
pub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
pub const fn everything(timeline: Timeline) -> Self
sourcepub fn keep_extra_timelines(self, toggle: bool) -> Self
pub fn keep_extra_timelines(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_timelines
for more information.
sourcepub fn keep_extra_components(self, toggle: bool) -> Self
pub fn keep_extra_components(self, toggle: bool) -> Self
See RangeQueryOptions::keep_extra_components
for more information.
sourcepub fn include_extended_bounds(self, toggle: bool) -> Self
pub fn include_extended_bounds(self, toggle: bool) -> Self
See RangeQueryOptions::include_extended_bounds
for more information.
pub fn timeline(&self) -> Timeline
pub fn range(&self) -> ResolvedTimeRange
pub fn options(&self) -> RangeQueryOptions
Trait Implementations§
source§impl Clone for RangeQuery
impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
fn clone(&self) -> RangeQuery
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for RangeQuery
impl Debug for RangeQuery
Trait Implementations§
source§impl Clone for RangeQuery
impl Clone for RangeQuery
source§fn clone(&self) -> RangeQuery
fn clone(&self) -> RangeQuery
1.0.0 · source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source
. Read moresource§impl Debug for RangeQuery
impl Debug for RangeQuery
source§impl Hash for RangeQuery
impl Hash for RangeQuery
source§impl PartialEq for RangeQuery
impl PartialEq for RangeQuery
source§fn eq(&self, other: &RangeQuery) -> bool
fn eq(&self, other: &RangeQuery) -> bool
self
and other
values to be equal, and is used
by ==
.1.0.0 · source§fn ne(&self, other: &Rhs) -> bool
fn ne(&self, other: &Rhs) -> bool
!=
. The default implementation is almost always
diff --git a/docs/rust/head/re_chunk/struct.Chunk.html b/docs/rust/head/re_chunk/struct.Chunk.html
index 589b03035b440..b224f3254ed24 100644
--- a/docs/rust/head/re_chunk/struct.Chunk.html
+++ b/docs/rust/head/re_chunk/struct.Chunk.html
@@ -1,10 +1,10 @@
pub struct Chunk {
pub(crate) id: ChunkId,
- pub(crate) entity_path: EntityPath,
+ pub(crate) entity_path: EntityPath,
pub(crate) heap_size_bytes: AtomicU64,
pub(crate) is_sorted: bool,
pub(crate) row_ids: StructArray,
- pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
+ pub(crate) timelines: BTreeMap<Timeline, TimeColumn>,
pub(crate) components: BTreeMap<ComponentName, ListArray<i32>>,
}
Expand description
Dense arrow-based storage of N rows of multi-component multi-temporal data for a specific entity.
This is our core datastructure for logging, storing, querying and transporting data around.
@@ -12,19 +12,19 @@ Its time columns might or might not be ascendingly sorted, depending on how the data was logged.This is the in-memory representation of a chunk, optimized for efficient manipulation of the
data within. For transport, see crate::TransportChunk
instead.
Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
+Fields§
§id: ChunkId
§entity_path: EntityPath
§heap_size_bytes: AtomicU64
The heap size of this chunk in bytes.
Must be cached as it is very costly to compute, and needs to be computed repeatedly on the hot path (e.g. during garbage collection).
is_sorted: bool
Is the chunk as a whole sorted by RowId
?
row_ids: StructArray
The respective RowId
s for each row of data.
timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
+timelines: BTreeMap<Timeline, TimeColumn>
The time columns.
Each column must be the same length as row_ids
.
Empty if this is a static chunk.
components: BTreeMap<ComponentName, ListArray<i32>>
A sparse ListArray
for each component.
Each ListArray
must be the same length as row_ids
.
Sparse so that we can e.g. log a Position
at one timestamp but not a Color
.
Implementations§
source§impl Chunk
impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
pub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
pub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
Implementations§
source§impl Chunk
impl Chunk
sourcepub fn builder(entity_path: EntityPath) -> ChunkBuilder
pub fn builder(entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
sourcepub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
pub fn builder_with_id(id: ChunkId, entity_path: EntityPath) -> ChunkBuilder
Initializes a new ChunkBuilder
.
The final Chunk
will have the specified id
.
source§impl Chunk
impl Chunk
sourcepub fn with_id(self, id: ChunkId) -> Self
pub fn with_id(self, id: ChunkId) -> Self
Returns a version of us with a new ChunkId
.
Reminder:
@@ -47,7 +47,7 @@sourcepub fn zeroed(self) -> Self
pub fn zeroed(self) -> Self
Clones the chunk into a new chunk where all RowId
s are RowId::ZERO
.
sourcepub fn time_range_per_component(
&self
-) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
pub fn time_range_per_component( &self -) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>
Computes the time range covered by each individual component column on each timeline.
+) -> BTreeMap<Timeline, BTreeMap<ComponentName, ResolvedTimeRange>>Computes the time range covered by each individual component column on each timeline.
This is different from the time range covered by the Chunk
as a whole because component
columns are potentially sparse.
This is crucial for indexing and queries to work properly.
@@ -55,8 +55,8 @@I.e. how many component batches (“cells”) were logged in total?
sourcepub fn num_events_cumulative_per_unique_time(
&self,
- timeline: &Timeline
-) -> Vec<(TimeInt, u64)>
pub fn num_events_cumulative_per_unique_time( &self, - timeline: &Timeline -) -> Vec<(TimeInt, u64)>
The cumulative number of events in this chunk for each unique timestamp.
+ timeline: &Timeline +) -> Vec<(TimeInt, u64)>The cumulative number of events in this chunk for each unique timestamp.
I.e. how many component batches (“cells”) were logged in total at each timestamp?
Keep in mind that a timestamp can appear multiple times in a Chunk
.
This method will do a sum accumulation to account for these cases (i.e. every timestamp in
@@ -64,10 +64,10 @@
fn num_events_cumulative_per_unique_time_sorted( &self, time_column: &TimeColumn -) -> Vec<(TimeInt, u64)>
fn num_events_cumulative_per_unique_time_unsorted( +) -> Vec<(TimeInt, u64)>
fn num_events_cumulative_per_unique_time_unsorted( &self, time_column: &TimeColumn -) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
+) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@
This is crucial for indexing and queries to work properly.
pub fn num_events_for_component( +) -> Vec<(TimeInt, u64)>
sourcepub fn num_events_for_component(
&self,
component_name: ComponentName
) -> Option<u64>
pub fn num_events_for_component( &self, component_name: ComponentName ) -> Option<u64>
The number of events in this chunk for the specified component.
@@ -80,10 +80,10 @@This is crucial for indexing and queries to work properly.
source§impl Chunk
impl Chunk
sourcepub fn new(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
pub fn new( id: ChunkId, - entity_path: EntityPath, + entity_path: EntityPath, is_sorted: Option<bool>, row_ids: ArrowStructArray, - timelines: BTreeMap<Timeline, TimeColumn>, + timelines: BTreeMap<Timeline, TimeColumn>, components: BTreeMap<ComponentName, ArrowListArray<i32>> ) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -93,10 +93,10 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_native_row_ids(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: &[RowId],
- timelines: BTreeMap<Timeline, TimeColumn>,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
pub fn from_native_row_ids( id: ChunkId, - entity_path: EntityPath, + entity_path: EntityPath, is_sorted: Option<bool>, row_ids: &[RowId], - timelines: BTreeMap<Timeline, TimeColumn>, + timelines: BTreeMap<Timeline, TimeColumn>, components: BTreeMap<ComponentName, ArrowListArray<i32>> ) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -106,8 +106,8 @@
For a row-oriented constructor, see Self::builder
.
sourcepub fn from_auto_row_ids(
id: ChunkId,
- entity_path: EntityPath,
- timelines: BTreeMap<Timeline, TimeColumn>,
+ entity_path: EntityPath,
+ timelines: BTreeMap<Timeline, TimeColumn>,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
pub fn from_auto_row_ids( id: ChunkId, - entity_path: EntityPath, - timelines: BTreeMap<Timeline, TimeColumn>, + entity_path: EntityPath, + timelines: BTreeMap<Timeline, TimeColumn>, components: BTreeMap<ComponentName, ArrowListArray<i32>> ) -> ChunkResult<Self>
Creates a new Chunk
.
This will fail if the passed in data is malformed in any way – see Self::sanity_check
@@ -116,13 +116,13 @@
row in the chunk.
sourcepub fn new_static(
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
is_sorted: Option<bool>,
row_ids: ArrowStructArray,
components: BTreeMap<ComponentName, ArrowListArray<i32>>
) -> ChunkResult<Self>
pub fn new_static( id: ChunkId, - entity_path: EntityPath, + entity_path: EntityPath, is_sorted: Option<bool>, row_ids: ArrowStructArray, components: BTreeMap<ComponentName, ArrowListArray<i32>> ) -> ChunkResult<Self>
Simple helper for Self::new
for static data.
For a row-oriented constructor, see Self::builder
.
pub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
+
pub fn add_component( +
pub fn empty(id: ChunkId, entity_path: EntityPath) -> Self
sourcepub fn add_component(
&mut self,
component_name: ComponentName,
list_array: ArrowListArray<i32>
@@ -132,7 +132,7 @@
pub fn add_component( &mut self, component_name: ComponentName, list_array: ArrowListArray<i32> @@ -132,7 +132,7 @@
sourcepub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
pub fn add_timeline(&mut self, chunk_timeline: TimeColumn) -> ChunkResult<()>
Unconditionally inserts a TimeColumn
.
Removes and replaces the column if it already exists.
This will fail if the end result is malformed in any way – see Self::sanity_check
.
source§impl Chunk
impl Chunk
pub fn id(&self) -> ChunkId
pub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
pub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
+source§impl Chunk
impl Chunk
pub fn id(&self) -> ChunkId
pub fn entity_path(&self) -> &EntityPath
sourcepub fn num_columns(&self) -> usize
pub fn num_columns(&self) -> usize
How many columns in total? Includes control, time, and component columns.
pub fn num_controls(&self) -> usize
pub fn num_timelines(&self) -> usize
pub fn num_components(&self) -> usize
pub fn num_rows(&self) -> usize
pub fn is_empty(&self) -> bool
pub fn row_ids_array(&self) -> &ArrowStructArray
sourcepub fn row_ids_raw(
&self
) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
pub fn row_ids_raw( &self ) -> (&ArrowPrimitiveArray<u64>, &ArrowPrimitiveArray<u64>)
Returns the RowId
s in their raw-est form: a tuple of (times, counters) arrays.
sourcepub fn row_id_range(&self) -> Option<(RowId, RowId)>
pub fn row_id_range(&self) -> Option<(RowId, RowId)>
pub fn is_static(&self) -> bool
pub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
pub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
pub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
pub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
-and returns the corresponding [TimePoint
].
pub fn is_static(&self) -> bool
pub fn timelines(&self) -> &BTreeMap<Timeline, TimeColumn>
pub fn component_names(&self) -> impl Iterator<Item = ComponentName> + '_
pub fn components(&self) -> &BTreeMap<ComponentName, ArrowListArray<i32>>
sourcepub fn timepoint_max(&self) -> TimePoint
pub fn timepoint_max(&self) -> TimePoint
Computes the maximum value for each and every timeline present across this entire chunk,
+and returns the corresponding TimePoint
.
source§impl Chunk
impl Chunk
sourcepub fn sanity_check(&self) -> ChunkResult<()>
pub fn sanity_check(&self) -> ChunkResult<()>
Returns an error if the Chunk’s invariants are not upheld.
Costly checks are only run in debug builds.
source§impl Chunk
impl Chunk
sourcepub fn component_batch_raw(
@@ -193,8 +193,8 @@
pub fn component_batch_raw( @@ -193,8 +193,8 @@
sourcepub fn into_unit(self) -> Option<UnitChunkShared>
pub fn into_unit(self) -> Option<UnitChunkShared>
Turns the chunk into a UnitChunkShared
, if possible.
source§impl Chunk
impl Chunk
sourcepub fn iter_indices(
&self,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
pub fn iter_indices( &self, - timeline: &Timeline -) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
See also:
-
@@ -203,15 +203,15 @@
sourcepub fn iter_component_indices(
&self,
- timeline: &Timeline,
+ timeline: &Timeline,
component_name: &ComponentName
-) -> impl Iterator<Item = (TimeInt, RowId)> + '_
pub fn iter_component_indices( &self, - timeline: &Timeline, + timeline: &Timeline, component_name: &ComponentName -) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
+) -> impl Iterator<Item = (TimeInt, RowId)> + '_
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given
timeline and component.
If the chunk is static, timeline
will be ignored.
This is different than Self::iter_indices
in that it will only yield indices for rows
at which there is data for the specified component_name
.
See also Self::iter_indices
.
sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
pub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
.
sourcepub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
pub fn iter_timepoints(&self) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the TimePoint
s of a Chunk
.
See also:
Self::iter_component_timepoints
.
@@ -219,7 +219,7 @@
sourcepub fn iter_component_timepoints(
&self,
component_name: &ComponentName
-) -> impl Iterator<Item = TimePoint> + '_
pub fn iter_component_timepoints( &self, component_name: &ComponentName -) -> impl Iterator<Item = TimePoint> + '_
Returns an iterator over the [TimePoint
]s of a Chunk
, for a given component.
Returns an iterator over the TimePoint
s of a Chunk
, for a given component.
This is different than Self::iter_timepoints
in that it will only yield timepoints for rows
at which there is data for the specified component_name
.
See also Self::iter_timepoints
.
source§impl Chunk
impl Chunk
sourcepub fn iter_indices_owned(
self: Arc<Self>,
- timeline: &Timeline
-) -> impl Iterator<Item = (TimeInt, RowId)>
pub fn iter_indices_owned( self: Arc<Self>, - timeline: &Timeline -) -> impl Iterator<Item = (TimeInt, RowId)>
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
Returns an iterator over the indices ((TimeInt, RowId)
) of a Chunk
, for a given timeline.
If the chunk is static, timeline
will be ignored.
The returned iterator outlives self
, thus it can be passed around freely.
The tradeoff is that self
must be an Arc
.
See also Self::is_sorted_uncached
.
sourcepub fn is_time_sorted(&self) -> bool
pub fn is_time_sorted(&self) -> bool
Is the chunk ascendingly sorted by time, for all of its timelines?
This is O(1) (cached).
-sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
pub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
+sourcepub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
pub fn is_timeline_sorted(&self, timeline: &Timeline) -> bool
Is the chunk ascendingly sorted by time, for a specific timeline?
This is O(1) (cached).
See also Self::is_timeline_sorted_uncached
.
sourcepub fn sort_if_unsorted(&mut self)
pub fn sort_if_unsorted(&mut self)
Sort the chunk, if needed.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
-sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
pub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
sourcepub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
pub fn sorted_by_timeline_if_unsorted(&self, timeline: &Timeline) -> Self
Returns a new Chunk
that is sorted by (<timeline>, RowId)
.
The underlying arrow data will be copied and shuffled in memory in order to make it contiguous.
This is a no-op if the underlying timeline is already sorted appropriately (happy path).
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
Chunk
being returned if the slice is completely OOB.
WARNING: the returned chunk has the same old crate::ChunkId
! Change it with Self::with_id
.
sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
pub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
sourcepub fn timeline_sliced(&self, timeline: Timeline) -> Self
pub fn timeline_sliced(&self, timeline: Timeline) -> Self
Slices the Chunk
horizontally by keeping only the selected timeline
.
The result is a new Chunk
with the same rows and (at-most) one timeline column.
All non-timeline columns will be kept as-is.
If timeline
is not found within the Chunk
, the end result will be the same as the
@@ -438,7 +438,7 @@
If component_name is not found within the Chunk, the end result will be the same as the current chunk but without any component column.
WARNING: the returned chunk has the same old crate::ChunkId! Change it with Self::with_id.
pub fn timelines_sliced(&self, timelines_to_keep: &IntSet<Timeline>) -> Self
Slices the Chunk horizontally by keeping only the selected timelines.
The result is a new Chunk with the same rows and (at-most) the selected timeline columns.
All non-timeline columns will be kept as-is.
If none of the selected timelines exist in the Chunk, the end result will be the same as the current chunk but without any timeline column.
@@ -464,7 +464,7 @@
The result is a new Chunk with the same number of rows and the same index columns, but no components.
WARNING: the returned chunk has the same old crate::ChunkId! Change it with Self::with_id.
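For illustration, a sketch of the timeline-slicing helpers; the "frame" timeline name is hypothetical, and nohash_hasher::IntSet is assumed to be the IntSet from the signature above.

```rust
use nohash_hasher::IntSet;
use re_chunk::{Chunk, Timeline};

// Keep only the "frame" timeline column; all component columns stay as-is.
fn keep_frame_timeline_only(chunk: &Chunk) -> Chunk {
    let frame = Timeline::new_sequence("frame");
    let keep: IntSet<Timeline> = std::iter::once(frame).collect();
    chunk.timelines_sliced(&keep)
}
```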
pub fn deduped_latest_on_index(&self, index: &Timeline) -> Self
Removes duplicate rows from sections of consecutive identical indices.
- If the Chunk is sorted on that index, the remaining values in the index column will be unique.
- If the Chunk has been densified on a specific column, the resulting chunk will
@@ -488,7 +488,7 @@
WARNING: the returned chunk has the same old crate::ChunkId! Change it with Self::with_id.
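A hedged sketch of latest-per-index deduplication; the "frame" timeline is hypothetical, and the chunk is sorted first because deduplication only collapses consecutive identical indices.

```rust
use re_chunk::{Chunk, Timeline};

// Keep only the latest row for each distinct "frame" value.
fn latest_per_frame(chunk: &Chunk) -> Chunk {
    let frame = Timeline::new_sequence("frame");
    chunk
        .sorted_by_timeline_if_unsorted(&frame)
        .deduped_latest_on_index(&frame)
}
```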
impl Chunk
pub fn to_transport(&self) -> ChunkResult<TransportChunk>
Prepare the Chunk for transport.
It is probably a good idea to sort the chunk first.
pub fn from_transport(transport: &TransportChunk) -> ChunkResult<Self>
impl Chunk
pub fn from_arrow_msg(msg: &ArrowMsg) -> ChunkResult<Self>
pub fn to_arrow_msg(&self) -> ChunkResult<ArrowMsg>
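A minimal sketch (assumed context) of a transport round-trip, sorting first as the docs recommend:

```rust
use re_chunk::{Chunk, ChunkResult};

// Chunk -> TransportChunk -> Chunk.
fn roundtrip(mut chunk: Chunk) -> ChunkResult<Chunk> {
    chunk.sort_if_unsorted();
    let transport = chunk.to_transport()?;
    Chunk::from_transport(&transport)
}
```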
Trait Implementations§
impl PartialEq for Chunk
fn eq(&self, other: &Self) -> bool
This method tests for self and other values to be equal, and is used by ==.
impl SizeBytes for Chunk
fn heap_size_bytes(&self) -> u64
Returns the total size of self on the heap, in bytes.
fn total_size_bytes(&self) -> u64
Returns the total size of self in bytes, accounting for both stack and heap space.
fn stack_size_bytes(&self) -> u64
Returns the total size of self on the stack, in bytes. Read more
Auto Trait Implementations§
impl !Freeze for Chunk
impl !RefUnwindSafe for Chunk
impl Send for Chunk
impl Sync for Chunk
impl Unpin for Chunk
impl !UnwindSafe for Chunk
Blanket Implementations§
impl<T> Az for T
fn az<Dst>(self) -> Dst where
diff --git a/docs/rust/head/re_chunk/struct.ChunkBatcher.html b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
index a10ca8e81980e..24b3b9ea32c5a 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBatcher.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBatcher.html
@@ -32,7 +32,7 @@ §Shutdown
§Shutdown
Fields§
§inner: Arc<ChunkBatcherInner>
Implementations§
impl ChunkBatcher
pub fn new(config: ChunkBatcherConfig) -> ChunkBatcherResult<Self>
Creates a new ChunkBatcher using the passed in config.
The returned object must be kept in scope: dropping it will trigger a clean shutdown of the batcher.
pub fn push_chunk(&self, chunk: Chunk)
pub fn push_row(&self, entity_path: EntityPath, row: PendingRow)
Pushes a PendingRow down the batching pipeline.
This will compute the size of the row from the batching thread!
See ChunkBatcher docs for ordering semantics and multithreading guarantees.
pub fn flush_async(&self)
Initiates a flush of the pipeline and returns immediately.
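For illustration, a hedged sketch of driving the batcher end to end; ChunkBatcherConfig::default() and the re_chunk root re-exports used here are assumptions.

```rust
use re_chunk::{ChunkBatcher, ChunkBatcherConfig, EntityPath, PendingRow};

fn run_batcher(entity_path: EntityPath, row: PendingRow) {
    let batcher = ChunkBatcher::new(ChunkBatcherConfig::default())
        .expect("failed to spawn the batching thread");
    batcher.push_row(entity_path, row); // the row's size is computed on the batching thread
    batcher.flush_async();              // returns immediately; the flush happens in the background
    // Dropping `batcher` triggers a clean shutdown of the pipeline.
}
```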
diff --git a/docs/rust/head/re_chunk/struct.ChunkBuilder.html b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
index 46520c9999ae8..d466ec0cc3330 100644
--- a/docs/rust/head/re_chunk/struct.ChunkBuilder.html
+++ b/docs/rust/head/re_chunk/struct.ChunkBuilder.html
@@ -1,47 +1,47 @@
Struct re_chunk::ChunkBuilder
source · pub struct ChunkBuilder {
id: ChunkId,
- entity_path: EntityPath,
+ entity_path: EntityPath,
row_ids: Vec<RowId>,
- timelines: BTreeMap<Timeline, TimeColumnBuilder>,
+ timelines: BTreeMap<Timeline, TimeColumnBuilder>,
components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>,
}
Expand description
Helper to incrementally build a Chunk.
Can be created using Chunk::builder.
Fields§
§id: ChunkId
§entity_path: EntityPath
§row_ids: Vec<RowId>
§timelines: BTreeMap<Timeline, TimeColumnBuilder>
§components: BTreeMap<ComponentName, Vec<Option<Box<dyn ArrowArray>>>>
Implementations§
impl ChunkBuilder
pub fn new(id: ChunkId, entity_path: EntityPath) -> Self
Initializes a new ChunkBuilder.
See also Chunk::builder.
pub fn with_sparse_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Option<Box<dyn ArrowArray>>)>
) -> Self
Add a row’s worth of data using the given sparse component data.
pub fn with_row(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
components: impl IntoIterator<Item = (ComponentName, Box<dyn ArrowArray>)>
) -> Self
Add a row’s worth of data using the given component data.
pub fn with_archetype(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
as_components: &dyn AsComponents
) -> Self
Add a row’s worth of data by destructuring an archetype into component columns.
pub fn with_component_batch(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batch: &dyn ComponentBatch
) -> Self
Add a row’s worth of data by serializing a single ComponentBatch.
pub fn with_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = &'a dyn ComponentBatch>
) -> Self
Add a row’s worth of data by serializing many ComponentBatches.
pub fn with_sparse_component_batches<'a>(
self,
row_id: RowId,
- timepoint: impl Into<TimePoint>,
+ timepoint: impl Into<TimePoint>,
component_batches: impl IntoIterator<Item = (ComponentName, Option<&'a dyn ComponentBatch>)>
) -> Self
Add a row’s worth of data by serializing many sparse ComponentBatches.
pub fn build(self) -> ChunkResult<Chunk>
Builds and returns the final Chunk.
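A hedged sketch of building a chunk through this builder; the entity path is hypothetical, the empty TimePoint marks the row as static, and both the re_types archetype and the re_chunk root re-exports used here are assumptions.

```rust
use re_chunk::{Chunk, ChunkBuilder, ChunkId, ChunkResult, EntityPath, RowId, TimePoint};

fn example_chunk() -> ChunkResult<Chunk> {
    ChunkBuilder::new(ChunkId::new(), EntityPath::from("points/cloud"))
        .with_archetype(
            RowId::new(),
            TimePoint::default(), // empty timepoint, i.e. static data, keeps the sketch simple
            // `re_types` is assumed to be available for a concrete archetype:
            &re_types::archetypes::Points3D::new([(0.0, 0.0, 0.0)]),
        )
        .build()
}
```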
pub struct ChunkId(pub(crate) Tuid);
Expand description
A unique ID for a crate::Chunk.
Chunks are the atomic unit of ingestion, transport, storage, events and GC in Rerun.
Internally, a crate::Chunk is made up of rows, which are themselves uniquely identified by their RowId.
Garbage collection therefore happens (roughly) in the logger’s wall-clock order.
This has very important implications when inserting data far into the past or into the future: think carefully about your RowIds in these cases.
Tuple Fields§
§0: Tuid
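For illustration of the ordering relied on above, a small sketch; that freshly minted RowIds compare in creation order is the intent of the Tuid scheme, but treat it as an assumption rather than a hard API guarantee.

```rust
use re_chunk::RowId;

fn monotonic_row_ids() {
    let a = RowId::new();
    let b = RowId::new();
    // Tuid-based ids created later in the same process sort after earlier ones,
    // which is what the (rough) wall-clock GC ordering builds on.
    assert!(a < b);
}
```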