diff --git a/docs/content/getting-started/data-out.md b/docs/content/getting-started/data-out.md
index 9a3de7657ce3..3e98942e87eb 100644
--- a/docs/content/getting-started/data-out.md
+++ b/docs/content/getting-started/data-out.md
@@ -3,14 +3,14 @@ title: Get data out of Rerun
 order: 450
 ---

-At its core, Rerun is a database. The viewer includes the [dataframe view](../reference/types/views/dataframe_view) to explore data in tabular form, and the SDK includes an API to export the data as dataframes from the recording. These features can be used, for example, to perform analysis on the data and log back the results to the original recording.
+At its core, Rerun is a database. The viewer includes the [dataframe view](../reference/types/views/dataframe_view) to explore data in tabular form, and the SDK includes an API to export the data as dataframes from the recording. These features can be used, for example, to perform analysis on the data and send back the results to the original recording.

 In this three-part guide, we explore such a workflow by implementing an "open jaw detector" on top of our [face tracking example](https://rerun.io/examples/video-image/face_tracking). This process is split into three steps:

 1. [Explore a recording with the dataframe view](data-out/explore-as-dataframe)
 2. [Export the dataframe](data-out/export-dataframe)
-3. [Analyze the data and log the results](data-out/analyze-and-log)
+3. [Analyze the data and send back the results](data-out/analyze-and-send)

 Note: this guide uses the popular [Pandas](https://pandas.pydata.org) dataframe package. The same concept however applies in the same way for alternative dataframe packages such as [Polars](https://pola.rs).

-If you just want to see the final result, jump to the [complete script](data-out/analyze-and-log.md#complete-script) at the end of the third section.
+If you just want to see the final result, jump to the [complete script](data-out/analyze-and-send.md#complete-script) at the end of the third section.
diff --git a/docs/content/getting-started/data-out/analyze-and-log.md b/docs/content/getting-started/data-out/analyze-and-send.md
similarity index 87%
rename from docs/content/getting-started/data-out/analyze-and-log.md
rename to docs/content/getting-started/data-out/analyze-and-send.md
index 1e5c178d31dd..86208077ecc8 100644
--- a/docs/content/getting-started/data-out/analyze-and-log.md
+++ b/docs/content/getting-started/data-out/analyze-and-send.md
@@ -1,11 +1,11 @@
 ---
-title: Analyze the data and log the results
+title: Analyze the data and send back the results
 order: 3
 ---

-In the previous sections, we explored our data and exported it to a Pandas dataframe. In this section, we will analyze the data to extract a "jaw open state" signal and log it back to the viewer.
+In the previous sections, we explored our data and exported it to a Pandas dataframe. In this section, we will analyze the data to extract a "jaw open state" signal and send it back to the viewer.
@@ -20,7 +20,7 @@ df["jawOpenState"] = df["jawOpen"] > 0.15
 ```

-## Log the result back to the viewer
+## Send the result back to the viewer

 The first step is to initialize the logging SDK targeting the same recording we just analyzed. This requires matching both the application ID and recording ID precisely.
@@ -37,11 +37,11 @@ rr.connect()
 _Note_: When automating data analysis, it is typically preferable to log the results to an distinct RRD file next to the source RRD (using `rr.save()`).
 In such a situation, it is also valid to use the same app ID and recording ID. This allows opening both the source and result RRDs in the viewer, which will display data from both files under the same recording.

-We will log our jaw open state data in two forms:
+We will send our jaw open state data in two forms:
 1. As a standalone `Scalar` component, to hold the raw data.
 2. As a `Text` component on the existing bounding box entity, such that we obtain a textual representation of the state in the visualization.

-Here is how to log the data as a scalar:
+Here is how to send the data as a scalar:

 ```python
 rr.send_columns(
@@ -53,9 +53,9 @@
 )
 ```

-With use the [`rr.send_column()`](../../howto/send_columns.md) API to efficiently send the entire column of data in a single batch.
+We use the [`rr.send_column()`](../../howto/send_columns.md) API to efficiently send the entire column of data in a single batch.

-Next, let's log the same data as `Text` component:
+Next, let's send the same data as `Text` component:

 ```python
 target_entity = "/video/detector/faces/0/bbox"
@@ -84,6 +84,6 @@ The OPEN/CLOSE label is displayed along the bounding box on the 2D view, and the
 ### Complete script

-Here is the complete script used by this guide to load data, analyze it, and log the result back:
+Here is the complete script used by this guide to load data, analyze it, and send the result back:

 snippet: tutorials/data_out
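Taken together, the hunks above describe this workflow: threshold the exported `jawOpen` column into a boolean state, target the source recording by matching its application ID and recording ID, and send the resulting column back in a single batch. The sketch below is a rough illustration of that flow, not the tutorial's actual script: the application ID, recording ID, entity path, timeline and column names, and the tiny inline dataframe are placeholders, and the column helpers (`rr.TimeSequenceColumn`, `rr.components.ScalarBatch`) are assumed to match the SDK version contemporary with this change; later releases renamed these APIs.

```python
# Hypothetical sketch of the "analyze and send back" flow described above.
# Placeholder values are marked; the column helper names assume the SDK
# version contemporary with this change (they were renamed in later releases).
import numpy as np
import pandas as pd
import rerun as rr

# Stand-in for the dataframe exported in the previous section: one row per
# frame, with the flat "jawOpen" blendshape signal.
df = pd.DataFrame({"frame_nr": np.arange(5), "jawOpen": [0.03, 0.08, 0.20, 0.31, 0.12]})

# Threshold the signal into a boolean "jaw open state" column.
df["jawOpenState"] = df["jawOpen"] > 0.15

# Target the same recording as the source RRD: both the application ID and the
# recording ID must match (both values here are placeholders).
rr.init("rerun_example_face_tracking", recording_id="<RECORDING_ID_OF_SOURCE_RRD>")

# Per the note in the diff: for automated analysis, write the results to a
# distinct RRD file next to the source instead of connecting to a live viewer.
rr.save("jaw_open_state.rrd")

# Send the whole column in one batch instead of logging row by row.
rr.send_columns(
    "/jaw_open_state",
    times=[rr.TimeSequenceColumn("frame_nr", df["frame_nr"])],
    components=[rr.components.ScalarBatch(df["jawOpenState"].astype(float))],
)
```

The `Text` label on the existing bounding box entity would follow the same pattern, batching one string per frame with the corresponding component batch type onto `/video/detector/faces/0/bbox`.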