From 737947754dfed23e649d980e2fc71b33ecb6e479 Mon Sep 17 00:00:00 2001
From: 317brian <53799971+317brian@users.noreply.github.com>
Date: Fri, 27 Oct 2023 10:29:34 -0700
Subject: [PATCH] docs: add concurrent compaction docs (#15218)

Co-authored-by: Kashif Faraz
---
 docs/data-management/automatic-compaction.md | 102 ++++++++++-
 docs/data-management/compaction.md | 146 +---------------
 docs/data-management/manual-compaction.md | 167 +++++++++++++++++++
 docs/ingestion/ingestion-spec.md | 2 +-
 website/sidebars.json | 14 +-
 5 files changed, 290 insertions(+), 141 deletions(-)
 create mode 100644 docs/data-management/manual-compaction.md

diff --git a/docs/data-management/automatic-compaction.md b/docs/data-management/automatic-compaction.md
index 8d696a86d4ef..4de4f1f3763b 100644
--- a/docs/data-management/automatic-compaction.md
+++ b/docs/data-management/automatic-compaction.md
@@ -162,7 +162,7 @@ To get statistics by API, send a [`GET` request](../api-reference/automatic-comp

 ## Examples

-The following examples demonstrate potential use cases in which auto-compaction may improve your Druid performance. See more details in [Compaction strategies](../data-management/compaction.md#compaction-strategies). The examples in this section do not change the underlying data.
+The following examples demonstrate potential use cases in which auto-compaction may improve your Druid performance. See more details in [Compaction guidelines](../data-management/compaction.md#compaction-guidelines). The examples in this section do not change the underlying data.

 ### Change segment granularity

@@ -203,6 +203,106 @@ The following auto-compaction configuration compacts updates the `wikipedia` seg
 }
 ```

+## Concurrent append and replace
+
+:::info
+Concurrent append and replace is an [experimental feature](../development/experimental.md) and is not currently available for SQL-based ingestion.
+:::
+
+This feature allows you to safely replace the existing data in an interval of a datasource while new data is being appended to that interval. One of the most common applications of this is appending new data (for example, from streaming ingestion) to an interval while compaction of that interval is already in progress.
+
+To set up concurrent append and replace, ensure that your ingestion jobs use the appropriate lock types:
+
+- The append task (with `appendToExisting` set to `true`) has `taskLockType` set to `APPEND` in the task context.
+- The replace task (with `appendToExisting` set to `false`) has `taskLockType` set to `REPLACE` in the task context.
+- The segment granularity of the append task is equal to or finer than the segment granularity of the replace task.
+
+:::info
+
+When using concurrent append and replace, keep the following in mind:
+
+- Concurrent append and replace fails if the task with the `APPEND` lock uses a coarser segment granularity than the task with the `REPLACE` lock. For example, if the `APPEND` task uses a segment granularity of YEAR and the `REPLACE` task uses a segment granularity of MONTH, you should not use concurrent append and replace.
+
+- Only a single task can hold a `REPLACE` lock on a given interval of a datasource.
+
+- Multiple tasks can hold `APPEND` locks on a given interval of a datasource and append data to that interval simultaneously.
+
+:::
+
+### Configure concurrent append and replace
+
+#### Update the compaction settings
+
+Prepare your datasource for concurrent append and replace by setting its task lock type to `REPLACE`.
+
+##### Update the compaction settings with the API
+
+Add the `taskContext` like you would any other automatic compaction setting through the API:
+
+```shell
+curl --location --request POST 'http://localhost:8081/druid/coordinator/v1/config/compaction' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "dataSource": "YOUR_DATASOURCE",
+    "taskContext": {
+        "taskLockType": "REPLACE"
+    }
+}'
+```
+
+##### Update the compaction settings with the UI
+
+In the **Compaction config** for a datasource, set **Allow concurrent compactions (experimental)** to **True**.
+
+#### Add a task lock type to your ingestion job
+
+Next, you need to configure the task lock type for your ingestion job:
+
+- For streaming jobs, the context parameter goes in your supervisor spec, and the lock type is always `APPEND`.
+- For legacy JSON-based batch ingestion, the context parameter goes in your ingestion spec, and the lock type can be either `APPEND` or `REPLACE`.
+
+You can provide the context parameter through the API like any other parameter for your ingestion job, or through the UI.
+
+##### Add the task lock type through the API
+
+Add the following JSON snippet to your supervisor or ingestion spec if you're using the API:
+
+```json
+"context": {
+   "taskLockType": LOCK_TYPE
+}
+```
+
+The `LOCK_TYPE` depends on what you're trying to accomplish.
+
+Set `taskLockType` to `APPEND` if either of the following is true:
+
+- Dynamic partitioning with append to existing is set to `true`.
+- The ingestion job is a streaming ingestion job.
+
+If you have multiple append ingestion jobs targeting the same datasource and want them to run simultaneously, you must also include the following context parameter:
+
+```json
+"useSharedLock": "true"
+```
+
+Keep in mind that `taskLockType` takes precedence over `useSharedLock`. Do not use `useSharedLock` with `REPLACE` task locks.
+
+Set `taskLockType` to `REPLACE` if you're replacing data. For example, if you use any of the following partitioning types, use `REPLACE`:
+
+- hash partitioning
+- range partitioning
+- dynamic partitioning with append to existing set to `false`
+
+##### Add a task lock using the Druid console
+
+As part of the **Load data** wizard for classic batch (JSON-based) ingestion and streaming ingestion, you can configure the task lock type during the **Publish** step:
+
+- If you set **Append to existing** to **True**, you can then set **Allow concurrent append tasks (experimental)** to **True**.
+- If you set **Append to existing** to **False**, you can then set **Allow concurrent replace tasks (experimental)** to **True**.
+
 ## Learn more

 See the following topics for more information:
diff --git a/docs/data-management/compaction.md b/docs/data-management/compaction.md
index c166623e887d..b1daf275d9c3 100644
--- a/docs/data-management/compaction.md
+++ b/docs/data-management/compaction.md
@@ -22,9 +22,10 @@ description: "Defines compaction and automatic compaction (auto-compaction or au
  ~ specific language governing permissions and limitations
  ~ under the License.
   -->
+
 Query performance in Apache Druid depends on optimally sized segments. Compaction is one strategy you can use to optimize segment size for your Druid database. Compaction tasks read an existing set of segments for a given time interval and combine the data into a new "compacted" set of segments.
In some cases the compacted segments are larger, but there are fewer of them. In other cases the compacted segments may be smaller. Compaction tends to increase performance because optimized segments require less per-segment processing and less memory overhead for ingestion and for querying paths. -## Compaction strategies +## Compaction guidelines There are several cases to consider compaction for segment optimization: @@ -43,18 +44,20 @@ By default, compaction does not modify the underlying data of the segments. Howe Compaction does not improve performance in all situations. For example, if you rewrite your data with each ingestion task, you don't need to use compaction. See [Segment optimization](../operations/segment-optimization.md) for additional guidance to determine if compaction will help in your environment. -## Types of compaction +## Ways to run compaction -You can configure the Druid Coordinator to perform automatic compaction, also called auto-compaction, for a datasource. Using its [segment search policy](../design/coordinator.md#segment-search-policy-in-automatic-compaction), the Coordinator periodically identifies segments for compaction starting from newest to oldest. When the Coordinator discovers segments that have not been compacted or segments that were compacted with a different or changed spec, it submits compaction tasks for the time interval covering those segments. +Automatic compaction, also called auto-compaction, works in most use cases and should be your first option. -Automatic compaction works in most use cases and should be your first option. To learn more, see [Automatic compaction](../data-management/automatic-compaction.md). +The Coordinator uses its [segment search policy](../design/coordinator.md#segment-search-policy-in-automatic-compaction) to periodically identify segments for compaction starting from newest to oldest. When the Coordinator discovers segments that have not been compacted or segments that were compacted with a different or changed spec, it submits compaction tasks for the time interval covering those segments. + +To learn more, see [Automatic compaction](../data-management/automatic-compaction.md). In cases where you require more control over compaction, you can manually submit compaction tasks. For example: - Automatic compaction is running into the limit of task slots available to it, so tasks are waiting for previous automatic compaction tasks to complete. Manual compaction can use all available task slots, therefore you can complete compaction more quickly by submitting more concurrent tasks for more intervals. - You want to force compaction for a specific time range or you want to compact data out of chronological order. -See [Setting up a manual compaction task](#setting-up-manual-compaction) for more about manual compaction tasks. +See [Setting up a manual compaction task](./manual-compaction.md#setting-up-manual-compaction) for more about manual compaction tasks. ## Data handling with compaction @@ -101,141 +104,10 @@ Druid only rolls up the output segment when `rollup` is set for all input segmen See [Roll-up](../ingestion/rollup.md) for more details. You can check that your segments are rolled up or not by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes). -## Setting up manual compaction - -To perform a manual compaction, you submit a compaction task. 
Compaction tasks merge all segments for the defined interval according to the following syntax:
-
-```json
-{
-    "type": "compact",
-    "id": <task_id>,
-    "dataSource": <task_datasource>,
-    "ioConfig": <IO config>,
-    "dimensionsSpec": <custom dimensionsSpec>,
-    "transformSpec": <custom transformSpec>,
-    "metricsSpec": <custom metricsSpec>,
-    "tuningConfig": <parallel indexing task tuningConfig>,
-    "granularitySpec": <compaction task granularitySpec>,
-    "context": <task context>
-}
-```
-
-|Field|Description|Required|
-|-----|-----------|--------|
-|`type`|Task type. Set the value to `compact`.|Yes|
-|`id`|Task ID|No|
-|`dataSource`|Data source name to compact|Yes|
-|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
-|`dimensionsSpec`|When set, the compaction task uses the specified `dimensionsSpec` rather than generating one from existing segments. See [Compaction dimensionsSpec](#compaction-dimensions-spec) for details.|No|
-|`transformSpec`|When set, the compaction task uses the specified `transformSpec` rather than using `null`. See [Compaction transformSpec](#compaction-transform-spec) for details.|No|
-|`metricsSpec`|When set, the compaction task uses the specified `metricsSpec` rather than generating one from existing segments.|No|
-|`segmentGranularity`|Deprecated. Use `granularitySpec`.|No|
-|`tuningConfig`|[Tuning configuration](../ingestion/native-batch.md#tuningconfig) for parallel indexing. `awaitSegmentAvailabilityTimeoutMillis` value is not supported for compaction tasks. Leave this parameter at the default value, 0.|No|
-|`granularitySpec`|When set, the compaction task uses the specified `granularitySpec` rather than generating one from existing segments. See [Compaction `granularitySpec`](#compaction-granularity-spec) for details.|No|
-|`context`|[Task context](../ingestion/tasks.md#context)|No|
-
-:::info
- Note: Use `granularitySpec` over `segmentGranularity` and only set one of these values. If you specify different values for these in the same compaction spec, the task fails.
-:::
-
-To control the number of result segments per time chunk, you can set [`maxRowsPerSegment`](../ingestion/native-batch.md#partitionsspec) or [`numShards`](../ingestion/../ingestion/native-batch.md#tuningconfig).
-
-:::info
- You can run multiple compaction tasks in parallel. For example, if you want to compact the data for a year, you are not limited to running a single task for the entire year. You can run 12 compaction tasks with month-long intervals.
-:::
-
-A compaction task internally generates an `index` or `index_parallel` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [`druid` input source](../ingestion/input-sources.md), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.
-
-Compaction tasks typically fetch all [relevant segments](#compaction-io-configuration) prior to launching any subtasks, _unless_ the following properties are all set to non-null values. It is strongly recommended to set them to non-null values to maximize performance and minimize disk usage of the `compact` task:
-
-- [`granularitySpec`](#compaction-granularity-spec), with non-null values for each of `segmentGranularity`, `queryGranularity`, and `rollup`
-- [`dimensionsSpec`](#compaction-dimensions-spec)
-- `metricsSpec`
-
-Compaction tasks exit without doing anything and issue a failure status code in either of the following cases:
-
-- If the interval you specify has no data segments loaded.
-- If the interval you specify is empty.
- -Note that the metadata between input segments and the resulting compacted segments may differ if the metadata among the input segments differs as well. If all input segments have the same metadata, however, the resulting output segment will have the same metadata as all input segments. - - -### Example compaction task - -The following JSON illustrates a compaction task to compact _all segments_ within the interval `2020-01-01/2021-01-01` and create new segments: - -```json -{ - "type": "compact", - "dataSource": "wikipedia", - "ioConfig": { - "type": "compact", - "inputSpec": { - "type": "interval", - "interval": "2020-01-01/2021-01-01" - } - }, - "granularitySpec": { - "segmentGranularity": "day", - "queryGranularity": "hour" - } -} -``` - -`granularitySpec` is an optional field. -If you don't specify `granularitySpec`, Druid retains the original segment and query granularities when compaction is complete. - -### Compaction I/O configuration - -The compaction `ioConfig` requires specifying `inputSpec` as follows: - -|Field|Description|Default|Required| -|-----|-----------|-------|--------| -|`type`|Task type. Set the value to `compact`.|none|Yes| -|`inputSpec`|Specification of the target [interval](#interval-inputspec) or [segments](#segments-inputspec).|none|Yes| -|`dropExisting`|If `true`, the task replaces all existing segments fully contained by either of the following:
- the `interval` in the `interval` type `inputSpec`.
- the umbrella interval of the `segments` in the `segment` type `inputSpec`.
If compaction fails, Druid does not change any of the existing segments.
**WARNING**: `dropExisting` in `ioConfig` is a beta feature. |false|No| -|`allowNonAlignedInterval`|If `true`, the task allows an explicit [`segmentGranularity`](#compaction-granularity-spec) that is not aligned with the provided [interval](#interval-inputspec) or [segments](#segments-inputspec). This parameter is only used if [`segmentGranularity`](#compaction-granularity-spec) is explicitly provided.

This parameter is provided for backwards compatibility. In most scenarios it should not be set, as it can lead to data being accidentally overshadowed. This parameter may be removed in a future release.|false|No|
-
-The compaction task has two kinds of `inputSpec`:
-
-#### Interval `inputSpec`
-
-|Field|Description|Required|
-|-----|-----------|--------|
-|`type`|Task type. Set the value to `interval`.|Yes|
-|`interval`|Interval to compact.|Yes|
-
-#### Segments `inputSpec`
-
-|Field|Description|Required|
-|-----|-----------|--------|
-|`type`|Task type. Set the value to `segments`.|Yes|
-|`segments`|A list of segment IDs.|Yes|
-
-### Compaction dimensions spec
-
-|Field|Description|Required|
-|-----|-----------|--------|
-|`dimensions`| A list of dimension names or objects. Cannot have the same column in both `dimensions` and `dimensionExclusions`. Defaults to `null`, which preserves the original dimensions.|No|
-|`dimensionExclusions`| The names of dimensions to exclude from compaction. Only names are supported here, not objects. This list is only used if the dimensions list is null or empty; otherwise it is ignored. Defaults to `[]`.|No|
-
-### Compaction transform spec
-
-|Field|Description|Required|
-|-----|-----------|--------|
-|`filter`| The `filter` conditionally filters input rows during compaction. Only rows that pass the filter will be included in the compacted segments. Any of Druid's standard [query filters](../querying/filters.md) can be used. Defaults to 'null', which will not filter any row. |No|
-
-### Compaction granularity spec
-
-|Field|Description|Required|
-|-----|-----------|--------|
-|`segmentGranularity`|Time chunking period for the segment granularity. Defaults to 'null', which preserves the original segment granularity. Accepts all [Query granularity](../querying/granularities.md) values.|No|
-|`queryGranularity`|The resolution of timestamp storage within each segment. Defaults to 'null', which preserves the original query granularity. Accepts all [Query granularity](../querying/granularities.md) values.|No|
-|`rollup`|Enables compaction-time rollup. To preserve the original setting, keep the default value. To enable compaction-time rollup, set the value to `true`. Once the data is rolled up, you can no longer recover individual records.|No|
-
 ## Learn more

 See the following topics for more information:

 - [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your case.
+- [Manual compaction](./manual-compaction.md) for how to run a one-time compaction task.
 - [Automatic compaction](automatic-compaction.md) for how to enable and configure automatic compaction.
diff --git a/docs/data-management/manual-compaction.md b/docs/data-management/manual-compaction.md
new file mode 100644
index 000000000000..a2cd61b36b23
--- /dev/null
+++ b/docs/data-management/manual-compaction.md
@@ -0,0 +1,167 @@
+---
+id: manual-compaction
+title: "Manual compaction"
+---
+
+
+
+In Apache Druid, compaction is a special type of ingestion task that reads data from a Druid datasource and writes it back into the same datasource. A common use case for this is to [optimally size segments](../operations/segment-optimization.md) after ingestion to improve query performance.
+
+With manual compaction, you submit a one-time compaction task for a specific interval. Generally, you don't need to do this if you use [automatic compaction](./automatic-compaction.md), which is recommended for most workloads.
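+
+To run a manual compaction, submit the compaction task spec to the Overlord task endpoint. The following sketch assumes a quickstart deployment with the Router at `localhost:8888` and a task spec saved in a hypothetical local file named `compaction-task.json`, like the example later on this page:
+
+```shell
+# Submit the compaction task spec to the Overlord through the Router.
+# Assumes the spec is saved locally as compaction-task.json.
+curl --location --request POST 'http://localhost:8888/druid/indexer/v1/task' \
+--header 'Content-Type: application/json' \
+--data @compaction-task.json
+```
+
+A successful submission returns the task ID, which you can use to monitor the task in the web console or through the task APIs.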
+
+## Setting up manual compaction
+
+Compaction tasks merge all segments for the defined interval according to the following syntax:
+
+```json
+{
+    "type": "compact",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "ioConfig": <IO config>,
+    "dimensionsSpec": <custom dimensionsSpec>,
+    "transformSpec": <custom transformSpec>,
+    "metricsSpec": <custom metricsSpec>,
+    "tuningConfig": <parallel indexing task tuningConfig>,
+    "granularitySpec": <compaction task granularitySpec>,
+    "context": <task context>
+}
+```
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Set the value to `compact`.|Yes|
+|`id`|Task ID|No|
+|`dataSource`|Data source name to compact|Yes|
+|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
+|`dimensionsSpec`|When set, the compaction task uses the specified `dimensionsSpec` rather than generating one from existing segments. See [Compaction dimensionsSpec](#compaction-dimensions-spec) for details.|No|
+|`transformSpec`|When set, the compaction task uses the specified `transformSpec` rather than using `null`. See [Compaction transformSpec](#compaction-transform-spec) for details.|No|
+|`metricsSpec`|When set, the compaction task uses the specified `metricsSpec` rather than generating one from existing segments.|No|
+|`segmentGranularity`|Deprecated. Use `granularitySpec`.|No|
+|`tuningConfig`|[Tuning configuration](../ingestion/native-batch.md#tuningconfig) for parallel indexing. `awaitSegmentAvailabilityTimeoutMillis` value is not supported for compaction tasks. Leave this parameter at the default value, 0.|No|
+|`granularitySpec`|When set, the compaction task uses the specified `granularitySpec` rather than generating one from existing segments. See [Compaction `granularitySpec`](#compaction-granularity-spec) for details.|No|
+|`context`|[Task context](../ingestion/tasks.md#context)|No|
+
+:::info
+ Note: Use `granularitySpec` over `segmentGranularity` and only set one of these values. If you specify different values for these in the same compaction spec, the task fails.
+:::
+
+To control the number of result segments per time chunk, you can set [`maxRowsPerSegment`](../ingestion/native-batch.md#partitionsspec) or [`numShards`](../ingestion/native-batch.md#tuningconfig).
+
+:::info
+ You can run multiple compaction tasks in parallel. For example, if you want to compact the data for a year, you are not limited to running a single task for the entire year. You can run 12 compaction tasks with month-long intervals.
+:::
+
+A compaction task internally generates an `index` or `index_parallel` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [`druid` input source](../ingestion/input-sources.md), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.
+
+Compaction tasks typically fetch all [relevant segments](#compaction-io-configuration) prior to launching any subtasks, _unless_ the following properties are all set to non-null values. It is strongly recommended to set them to non-null values to maximize performance and minimize disk usage of the `compact` task. An example that sets all three appears after the lists below:
+
+- [`granularitySpec`](#compaction-granularity-spec), with non-null values for each of `segmentGranularity`, `queryGranularity`, and `rollup`
+- [`dimensionsSpec`](#compaction-dimensions-spec)
+- `metricsSpec`
+
+Compaction tasks exit without doing anything and issue a failure status code in either of the following cases:
+
+- If the interval you specify has no data segments loaded.
+- If the interval you specify is empty.
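+
+As a concrete sketch, the following task sets all three of the recommended properties. The `channel` and `user` dimensions and the `count` metric are illustrative assumptions; substitute the dimensions and metrics that exist in your datasource:
+
+```json
+{
+  "type": "compact",
+  "dataSource": "wikipedia",
+  "ioConfig": {
+    "type": "compact",
+    "inputSpec": {
+      "type": "interval",
+      "interval": "2020-01-01/2020-02-01"
+    }
+  },
+  "dimensionsSpec": {
+    "dimensions": ["channel", "user"]
+  },
+  "metricsSpec": [
+    { "type": "count", "name": "count" }
+  ],
+  "granularitySpec": {
+    "segmentGranularity": "day",
+    "queryGranularity": "hour",
+    "rollup": true
+  }
+}
+```
+
+Because `granularitySpec`, `dimensionsSpec`, and `metricsSpec` are all non-null, the task can avoid fetching all relevant segments before launching subtasks.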
+ +Note that the metadata between input segments and the resulting compacted segments may differ if the metadata among the input segments differs as well. If all input segments have the same metadata, however, the resulting output segment will have the same metadata as all input segments. + + +## Manual compaction task example + +The following JSON illustrates a compaction task to compact _all segments_ within the interval `2020-01-01/2021-01-01` and create new segments: + +```json +{ + "type": "compact", + "dataSource": "wikipedia", + "ioConfig": { + "type": "compact", + "inputSpec": { + "type": "interval", + "interval": "2020-01-01/2021-01-01" + } + }, + "granularitySpec": { + "segmentGranularity": "day", + "queryGranularity": "hour" + } +} +``` + +`granularitySpec` is an optional field. +If you don't specify `granularitySpec`, Druid retains the original segment and query granularities when compaction is complete. + +## Compaction I/O configuration + +The compaction `ioConfig` requires specifying `inputSpec` as follows: + +|Field|Description|Default|Required| +|-----|-----------|-------|--------| +|`type`|Task type. Set the value to `compact`.|none|Yes| +|`inputSpec`|Specification of the target [interval](#interval-inputspec) or [segments](#segments-inputspec).|none|Yes| +|`dropExisting`|If `true`, the task replaces all existing segments fully contained by either of the following:
- the `interval` in the `interval` type `inputSpec`.
- the umbrella interval of the `segments` in the `segments` type `inputSpec`.
If compaction fails, Druid does not change any of the existing segments.
**WARNING**: `dropExisting` in `ioConfig` is a beta feature. |false|No| +|`allowNonAlignedInterval`|If `true`, the task allows an explicit [`segmentGranularity`](#compaction-granularity-spec) that is not aligned with the provided [interval](#interval-inputspec) or [segments](#segments-inputspec). This parameter is only used if [`segmentGranularity`](#compaction-granularity-spec) is explicitly provided.

This parameter is provided for backwards compatibility. In most scenarios it should not be set, as it can lead to data being accidentally overshadowed. This parameter may be removed in a future release.|false|No|
+
+The compaction task has two kinds of `inputSpec`:
+
+### Interval `inputSpec`
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Set the value to `interval`.|Yes|
+|`interval`|Interval to compact.|Yes|
+
+### Segments `inputSpec`
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`type`|Task type. Set the value to `segments`.|Yes|
+|`segments`|A list of segment IDs.|Yes|
+
+## Compaction dimensions spec
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`dimensions`| A list of dimension names or objects. Cannot have the same column in both `dimensions` and `dimensionExclusions`. Defaults to `null`, which preserves the original dimensions.|No|
+|`dimensionExclusions`| The names of dimensions to exclude from compaction. Only names are supported here, not objects. This list is only used if the dimensions list is null or empty; otherwise it is ignored. Defaults to `[]`.|No|
+
+## Compaction transform spec
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`filter`| The `filter` conditionally filters input rows during compaction. Only rows that pass the filter will be included in the compacted segments. Any of Druid's standard [query filters](../querying/filters.md) can be used. Defaults to `null`, which does not filter any row.|No|
+
+## Compaction granularity spec
+
+|Field|Description|Required|
+|-----|-----------|--------|
+|`segmentGranularity`|Time chunking period for the segment granularity. Defaults to `null`, which preserves the original segment granularity. Accepts all [Query granularity](../querying/granularities.md) values.|No|
+|`queryGranularity`|The resolution of timestamp storage within each segment. Defaults to `null`, which preserves the original query granularity. Accepts all [Query granularity](../querying/granularities.md) values.|No|
+|`rollup`|Enables compaction-time rollup. To preserve the original setting, keep the default value. To enable compaction-time rollup, set the value to `true`. Once the data is rolled up, you can no longer recover individual records.|No|
+
+## Learn more
+
+See the following topics for more information:
+* [Compaction](compaction.md) for an overview of compaction in Druid.
+* [Segment optimization](../operations/segment-optimization.md) for guidance on evaluating and optimizing Druid segment size.
+* [Coordinator process](../design/coordinator.md#automatic-compaction) for details on how the Coordinator plans compaction tasks.
+
diff --git a/docs/ingestion/ingestion-spec.md b/docs/ingestion/ingestion-spec.md
index bc02faf20061..017b4f38bec5 100644
--- a/docs/ingestion/ingestion-spec.md
+++ b/docs/ingestion/ingestion-spec.md
@@ -529,4 +529,4 @@ You can enable front coding with all types of ingestion. For information on defi
 :::

 Beyond these properties, each ingestion method has its own specific tuning properties. See the documentation for each
-[ingestion method](./index.md#ingestion-methods) for details.
+[ingestion method](./index.md#ingestion-methods) for details.
\ No newline at end of file diff --git a/website/sidebars.json b/website/sidebars.json index 1062b3dfee97..a38292bfafef 100644 --- a/website/sidebars.json +++ b/website/sidebars.json @@ -90,8 +90,18 @@ "data-management/update", "data-management/delete", "data-management/schema-changes", - "data-management/compaction", - "data-management/automatic-compaction" + { + "type": "category", + "label": "Compaction", + "link": { + "type": "doc", + "id": "data-management/compaction" + }, + "items": [ + "data-management/automatic-compaction", + "data-management/manual-compaction" + ] + } ], "Querying": [ {