From b4d83a86c20f55f85509d2a1f53925b6715a6b82 Mon Sep 17 00:00:00 2001 From: Jill Osborne Date: Thu, 5 Sep 2024 18:25:30 +0100 Subject: [PATCH] Middle Manager wording update in docs (#17005) --- docs/api-reference/service-status-api.md | 24 ++--- docs/configuration/index.md | 94 +++++++++---------- docs/design/architecture.md | 28 +++--- docs/design/indexer.md | 8 +- docs/design/indexing-service.md | 4 +- docs/design/metadata-storage.md | 2 +- docs/design/middlemanager.md | 14 +-- docs/design/overlord.md | 10 +- docs/design/peons.md | 8 +- docs/design/storage.md | 4 +- docs/design/zookeeper.md | 2 +- .../extensions-contrib/gce-extensions.md | 4 +- .../extensions-contrib/prometheus.md | 2 +- docs/development/extensions-core/s3.md | 2 +- docs/ingestion/hadoop.md | 8 +- docs/ingestion/index.md | 2 +- docs/ingestion/input-sources.md | 2 +- docs/ingestion/kafka-ingestion.md | 4 +- docs/ingestion/kinesis-ingestion.md | 4 +- docs/ingestion/native-batch.md | 10 +- docs/ingestion/streaming.md | 2 +- docs/ingestion/supervisor.md | 2 +- docs/ingestion/tasks.md | 12 +-- docs/operations/basic-cluster-tuning.md | 28 +++--- docs/operations/other-hadoop.md | 2 +- docs/operations/rolling-updates.md | 10 +- docs/querying/groupbyquery.md | 2 +- docs/querying/query-execution.md | 2 +- docs/querying/query-processing.md | 4 +- docs/release-info/upgrade-notes.md | 2 +- docs/tutorials/cluster.md | 18 ++-- 31 files changed, 160 insertions(+), 160 deletions(-) diff --git a/docs/api-reference/service-status-api.md b/docs/api-reference/service-status-api.md index 59e2b19c3fb8..efa5ebe01c94 100644 --- a/docs/api-reference/service-status-api.md +++ b/docs/api-reference/service-status-api.md @@ -45,7 +45,7 @@ You can use each endpoint with the ports for each type of service. The following | Router|8888| | Broker|8082| | Historical|8083| -| MiddleManager|8091| +| Middle Manager|8091| ### Get service information @@ -791,11 +791,11 @@ Host: http://OVERLORD_IP:OVERLORD_PORT -## MiddleManager +## Middle Manager -### Get MiddleManager state status +### Get Middle Manager state status -Retrieves the enabled state of the MiddleManager. Returns JSON object keyed by the combined `druid.host` and `druid.port` with a boolean `true` or `false` state as the value. +Retrieves the enabled state of the Middle Manager process. Returns JSON object keyed by the combined `druid.host` and `druid.port` with a boolean `true` or `false` state as the value. #### URL @@ -810,7 +810,7 @@ Retrieves the enabled state of the MiddleManager. Returns JSON object keyed by t
-*Successfully retrieved MiddleManager state* +*Successfully retrieved Middle Manager state* @@ -855,7 +855,7 @@ Host: http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT ### Get active tasks -Retrieves a list of active tasks being run on MiddleManager. Returns JSON list of task ID strings. Note that for normal usage, you should use the `/druid/indexer/v1/tasks` [Tasks API](./tasks-api.md) endpoint or one of the task state specific variants instead. +Retrieves a list of active tasks being run on the Middle Manager. Returns JSON list of task ID strings. Note that for normal usage, you should use the `/druid/indexer/v1/tasks` [Tasks API](./tasks-api.md) endpoint or one of the task state specific variants instead. #### URL @@ -984,9 +984,9 @@ Host: http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT -### Disable MiddleManager +### Disable Middle Manager -Disables a MiddleManager, causing it to stop accepting new tasks but complete all existing tasks. Returns a JSON object +Disables a Middle Manager, causing it to stop accepting new tasks but complete all existing tasks. Returns a JSON object keyed by the combined `druid.host` and `druid.port`. #### URL @@ -1002,7 +1002,7 @@ keyed by the combined `druid.host` and `druid.port`.
-*Successfully disabled MiddleManager* +*Successfully disabled Middle Manager* @@ -1043,9 +1043,9 @@ Host: http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT -### Enable MiddleManager +### Enable Middle Manager -Enables a MiddleManager, allowing it to accept new tasks again if it was previously disabled. Returns a JSON object keyed by the combined `druid.host` and `druid.port`. +Enables a Middle Manager, allowing it to accept new tasks again if it was previously disabled. Returns a JSON object keyed by the combined `druid.host` and `druid.port`. #### URL @@ -1060,7 +1060,7 @@ Enables a MiddleManager, allowing it to accept new tasks again if it was previou
-*Successfully enabled MiddleManager* +*Successfully enabled Middle Manager* diff --git a/docs/configuration/index.md b/docs/configuration/index.md index 9c83946eb823..64797c402c08 100644 --- a/docs/configuration/index.md +++ b/docs/configuration/index.md @@ -166,8 +166,8 @@ The indexing service also uses its own set of paths. These configs can be includ |Property|Description|Default| |--------|-----------|-------| |`druid.zk.paths.indexer.base`|Base ZooKeeper path for |`${druid.zk.paths.base}/indexer`| -|`druid.zk.paths.indexer.announcementsPath`|MiddleManagers announce themselves here.|`${druid.zk.paths.indexer.base}/announcements`| -|`druid.zk.paths.indexer.tasksPath`|Used to assign tasks to MiddleManagers.|`${druid.zk.paths.indexer.base}/tasks`| +|`druid.zk.paths.indexer.announcementsPath`|Middle Managers announce themselves here.|`${druid.zk.paths.indexer.base}/announcements`| +|`druid.zk.paths.indexer.tasksPath`|Used to assign tasks to Middle Managers.|`${druid.zk.paths.indexer.base}/tasks`| |`druid.zk.paths.indexer.statusPath`|Parent path for announcement of task statuses.|`${druid.zk.paths.indexer.base}/status`| If `druid.zk.paths.base` and `druid.zk.paths.indexer.base` are both set, and none of the other `druid.zk.paths.*` or `druid.zk.paths.indexer.*` values are set, then the other properties will be evaluated relative to their respective `base`. @@ -403,7 +403,7 @@ Metric monitoring is an essential part of Druid operations. The following monito |`org.apache.druid.server.emitter.HttpEmittingMonitor`|Reports internal metrics of `http` or `parametrized` emitter (see below). Must not be used with another emitter type. See the description of the metrics here: .| |`org.apache.druid.server.metrics.TaskCountStatsMonitor`|Reports how many ingestion tasks are currently running/pending/waiting and also the number of successful/failed tasks per emission period.| |`org.apache.druid.server.metrics.TaskSlotCountStatsMonitor`|Reports metrics about task slot usage per emission period.| -|`org.apache.druid.server.metrics.WorkerTaskCountStatsMonitor`|Reports how many ingestion tasks are currently running/pending/waiting, the number of successful/failed tasks, and metrics about task slot usage for the reporting worker, per emission period. Only supported by MiddleManager node types.| +|`org.apache.druid.server.metrics.WorkerTaskCountStatsMonitor`|Reports how many ingestion tasks are currently running/pending/waiting, the number of successful/failed tasks, and metrics about task slot usage for the reporting worker, per emission period. Only supported by Middle Manager node types.| |`org.apache.druid.server.metrics.ServiceStatusMonitor`|Reports a heartbeat for the service.| For example, you might configure monitors on all services for system and JVM information within `common.runtime.properties` as follows: @@ -594,7 +594,7 @@ need arises. |Property|Description|Default|Required| |-----|-----------|-------|--------| |`druid.centralizedDatasourceSchema.enabled`|Boolean flag for enabling datasource schema building in the Coordinator, this should be specified in the common runtime properties.|false|No.| -|`druid.indexer.fork.property.druid.centralizedDatasourceSchema.enabled`| This config should be set when CentralizedDatasourceSchema feature is enabled. This should be specified in the MiddleManager runtime properties.|false|No.| +|`druid.indexer.fork.property.druid.centralizedDatasourceSchema.enabled`| This config should be set when CentralizedDatasourceSchema feature is enabled. 
This should be specified in the Middle Manager runtime properties.|false|No.| If you enable this feature, you can query datasources that are only stored in deep storage and are not loaded on a Historical. For more information, see [Query from deep storage](../querying/query-from-deep-storage.md). @@ -1132,17 +1132,17 @@ The following configs only apply if the Overlord is running in remote mode. For |Property|Description|Default| |--------|-----------|-------| -|`druid.indexer.runner.taskAssignmentTimeout`|How long to wait after a task has been assigned to a MiddleManager before throwing an error.|`PT5M`| -|`druid.indexer.runner.minWorkerVersion`|The minimum MiddleManager version to send tasks to. The version number is a string. This affects the expected behavior during certain operations like comparison against `druid.worker.version`. Specifically, the version comparison follows dictionary order. Use ISO8601 date format for the version to accommodate date comparisons. |"0"| +|`druid.indexer.runner.taskAssignmentTimeout`|How long to wait after a task has been assigned to a Middle Manager before throwing an error.|`PT5M`| +|`druid.indexer.runner.minWorkerVersion`|The minimum Middle Manager version to send tasks to. The version number is a string. This affects the expected behavior during certain operations like comparison against `druid.worker.version`. Specifically, the version comparison follows dictionary order. Use ISO8601 date format for the version to accommodate date comparisons. |"0"| | `druid.indexer.runner.parallelIndexTaskSlotRatio`| The ratio of task slots available for parallel indexing supervisor tasks per worker. The specified value must be in the range `[0, 1]`. |1| -|`druid.indexer.runner.compressZnodes`|Indicates whether or not the Overlord should expect MiddleManagers to compress Znodes.|true| +|`druid.indexer.runner.compressZnodes`|Indicates whether or not the Overlord should expect Middle Managers to compress Znodes.|true| |`druid.indexer.runner.maxZnodeBytes`|The maximum size Znode in bytes that can be created in ZooKeeper, should be in the range of `[10KiB, 2GiB)`. [Human-readable format](human-readable-byte.md) is supported.| 512 KiB | -|`druid.indexer.runner.taskCleanupTimeout`|How long to wait before failing a task after a MiddleManager is disconnected from ZooKeeper.|`PT15M`| -|`druid.indexer.runner.taskShutdownLinkTimeout`|How long to wait on a shutdown request to a MiddleManager before timing out|`PT1M`| +|`druid.indexer.runner.taskCleanupTimeout`|How long to wait before failing a task after a Middle Manager is disconnected from ZooKeeper.|`PT15M`| +|`druid.indexer.runner.taskShutdownLinkTimeout`|How long to wait on a shutdown request to a Middle Manager before timing out|`PT1M`| |`druid.indexer.runner.pendingTasksRunnerNumThreads`|Number of threads to allocate pending-tasks to workers, must be at least 1.|1| -|`druid.indexer.runner.maxRetriesBeforeBlacklist`|Number of consecutive times the MiddleManager can fail tasks, before the worker is blacklisted, must be at least 1|5| +|`druid.indexer.runner.maxRetriesBeforeBlacklist`|Number of consecutive times the Middle Manager can fail tasks, before the worker is blacklisted, must be at least 1|5| |`druid.indexer.runner.workerBlackListBackoffTime`|How long to wait before a task is whitelisted again. 
This value should be greater that the value set for taskBlackListCleanupPeriod.|`PT15M`| -|`druid.indexer.runner.workerBlackListCleanupPeriod`|A duration after which the cleanup thread will startup to clean blacklisted workers.|`PT5M`| +|`druid.indexer.runner.workerBlackListCleanupPeriod`|A duration after which the cleanup thread will start up to clean blacklisted workers.|`PT5M`| |`druid.indexer.runner.maxPercentageBlacklistWorkers`|The maximum percentage of workers to blacklist, this must be between 0 and 100.|20| If autoscaling is enabled, you can set these additional configs: @@ -1151,16 +1151,16 @@ If autoscaling is enabled, you can set these additional configs: |--------|-----------|-------| |`druid.indexer.autoscale.strategy`|Sets the strategy to run when autoscaling is required. One of `noop`, `ec2` or `gce`.|`noop`| |`druid.indexer.autoscale.doAutoscale`|If set to true, autoscaling will be enabled.|false| -|`druid.indexer.autoscale.provisionPeriod`|How often to check whether or not new MiddleManagers should be added.|`PT1M`| -|`druid.indexer.autoscale.terminatePeriod`|How often to check when MiddleManagers should be removed.|`PT5M`| +|`druid.indexer.autoscale.provisionPeriod`|How often to check whether or not new Middle Managers should be added.|`PT1M`| +|`druid.indexer.autoscale.terminatePeriod`|How often to check when Middle Managers should be removed.|`PT5M`| |`druid.indexer.autoscale.originTime`|The starting reference timestamp that the terminate period increments upon.|`2012-01-01T00:55:00.000Z`| |`druid.indexer.autoscale.workerIdleTimeout`|How long can a worker be idle (not a run task) before it can be considered for termination.|`PT90M`| -|`druid.indexer.autoscale.maxScalingDuration`|How long the Overlord will wait around for a MiddleManager to show up before giving up.|`PT15M`| +|`druid.indexer.autoscale.maxScalingDuration`|How long the Overlord will wait around for a Middle Manager to show up before giving up.|`PT15M`| |`druid.indexer.autoscale.numEventsToTrack`|The number of autoscaling related events (node creation and termination) to track.|10| |`druid.indexer.autoscale.pendingTaskTimeout`|How long a task can be in "pending" state before the Overlord tries to scale up.|`PT30S`| |`druid.indexer.autoscale.workerVersion`|If set, will only create nodes of set version during autoscaling. Overrides dynamic configuration. |null| -|`druid.indexer.autoscale.workerPort`|The port that MiddleManagers will run on.|8080| -|`druid.indexer.autoscale.workerCapacityHint`| An estimation of the number of task slots available for each worker launched by the auto scaler when there are no workers running. The auto scaler uses the worker capacity hint to launch workers with an adequate capacity to handle pending tasks. When unset or set to a value less than or equal to 0, the auto scaler scales workers equal to the value for `minNumWorkers` in autoScaler config instead. The auto scaler assumes that each worker, either a MiddleManager or indexer, has the same amount of task slots. Therefore, when all your workers have the same capacity (homogeneous capacity), set the value for `autoscale.workerCapacityHint` equal to `druid.worker.capacity`. If your workers have different capacities (heterogeneous capacity), set the value to the average of `druid.worker.capacity` across the workers. For example, if two workers have `druid.worker.capacity=10`, and one has `druid.worker.capacity=4`, set `autoscale.workerCapacityHint=8`. 
Only applies to `pendingTaskBased` provisioning strategy.|-1| +|`druid.indexer.autoscale.workerPort`|The port that Middle Managers will run on.|8080| +|`druid.indexer.autoscale.workerCapacityHint`| An estimation of the number of task slots available for each worker launched by the auto scaler when there are no workers running. The auto scaler uses the worker capacity hint to launch workers with an adequate capacity to handle pending tasks. When unset or set to a value less than or equal to 0, the auto scaler scales workers equal to the value for `minNumWorkers` in autoScaler config instead. The auto scaler assumes that each worker, either a Middle Manager or indexer, has the same amount of task slots. Therefore, when all your workers have the same capacity (homogeneous capacity), set the value for `autoscale.workerCapacityHint` equal to `druid.worker.capacity`. If your workers have different capacities (heterogeneous capacity), set the value to the average of `druid.worker.capacity` across the workers. For example, if two workers have `druid.worker.capacity=10`, and one has `druid.worker.capacity=4`, set `autoscale.workerCapacityHint=8`. Only applies to `pendingTaskBased` provisioning strategy.|-1| ##### Supervisors @@ -1186,7 +1186,7 @@ The following table shows the dynamic configuration properties for the Overlord. |Property|Description|Default| |--------|-----------|-------| -|`selectStrategy`| Describes how to assign tasks to MiddleManagers. The type can be `equalDistribution`, `equalDistributionWithCategorySpec`, `fillCapacity`, `fillCapacityWithCategorySpec`, and `javascript`. | `{"type":"equalDistribution"}` | +|`selectStrategy`| Describes how to assign tasks to Middle Managers. The type can be `equalDistribution`, `equalDistributionWithCategorySpec`, `fillCapacity`, `fillCapacityWithCategorySpec`, and `javascript`. | `{"type":"equalDistribution"}` | |`autoScaler`| Only used if [autoscaling](#autoscaler) is enabled.| null | The following is an example of an Overlord dynamic config: @@ -1233,7 +1233,7 @@ The following is an example of an Overlord dynamic config: ##### Worker select strategy -The select strategy controls how Druid assigns tasks to workers (MiddleManagers). +The select strategy controls how Druid assigns tasks to workers (Middle Managers). At a high level, the select strategy determines the list of eligible workers for a given task using either an `affinityConfig` or a `categorySpec`. Then, Druid assigns the task by either trying to distribute load equally (`equalDistribution`) or to fill as many workers as possible to capacity (`fillCapacity`). @@ -1270,8 +1270,8 @@ not be assigned a category, and you want the work to be concentrated on the fewe ###### `equalDistribution` -Tasks are assigned to the MiddleManager with the most free slots at the time the task begins running. -This evenly distributes work across your MiddleManagers. +Tasks are assigned to the Middle Manager with the most free slots at the time the task begins running. +This evenly distributes work across your Middle Managers. |Property|Description|Default| |--------|-----------|-------| @@ -1281,7 +1281,7 @@ This evenly distributes work across your MiddleManagers. ###### `equalDistributionWithCategorySpec` This strategy is a variant of `equalDistribution`, which supports `workerCategorySpec` field rather than `affinityConfig`. -By specifying `workerCategorySpec`, you can assign tasks to run on different categories of MiddleManagers based on the **type** and **dataSource** of the task. 
+By specifying `workerCategorySpec`, you can assign tasks to run on different categories of Middle Managers based on the **type** and **dataSource** of the task. This strategy doesn't work with `AutoScaler` since the behavior is undefined. |Property|Description|Default| @@ -1289,7 +1289,7 @@ This strategy doesn't work with `AutoScaler` since the behavior is undefined. |`type`|`equalDistributionWithCategorySpec`|required; must be `equalDistributionWithCategorySpec`| |`workerCategorySpec`|[`WorkerCategorySpec`](#workercategoryspec) object|null (no worker category spec)| -The following example shows tasks of type `index_kafka` that default to running on MiddleManagers of category `c1`, except for tasks that write to datasource `ds1`, which run on MiddleManagers of category `c2`. +The following example shows tasks of type `index_kafka` that default to running on Middle Managers of category `c1`, except for tasks that write to datasource `ds1`, which run on Middle Managers of category `c2`. ```json { @@ -1313,11 +1313,11 @@ The following example shows tasks of type `index_kafka` that default to running ###### `fillCapacity` Tasks are assigned to the worker with the most currently-running tasks. This is -useful when you are auto-scaling MiddleManagers since it tends to pack some full and +useful when you are auto-scaling Middle Managers since it tends to pack some full and leave others empty. The empty ones can be safely terminated. Note that if `druid.indexer.runner.pendingTasksRunnerNumThreads` is set to _N_ > 1, then this strategy will fill _N_ -MiddleManagers up to capacity simultaneously, rather than a single MiddleManager. +Middle Managers up to capacity simultaneously, rather than a single Middle Manager. |Property|Description|Default| |--------|-----------|-------| @@ -1370,8 +1370,8 @@ If not provided, the default is to have no affinity. |Property|Description|Default| |--------|-----------|-------| -|`affinity`|JSON object mapping a datasource String name to a list of indexing service MiddleManager `host:port` values. Druid doesn't perform DNS resolution, so the 'host' value must match what is configured on the MiddleManager and what the MiddleManager announces itself as (examine the Overlord logs to see what your MiddleManager announces itself as).|`{}`| -|`strong`|When `true` tasks for a datasource must be assigned to affinity-mapped MiddleManagers. Tasks remain queued until a slot becomes available. When `false`, Druid may assign tasks for a datasource to other MiddleManagers when affinity-mapped MiddleManagers are unavailable to run queued tasks.|false| +|`affinity`|JSON object mapping a datasource String name to a list of indexing service Middle Manager `host:port` values. Druid doesn't perform DNS resolution, so the 'host' value must match what is configured on the Middle Manager and what the Middle Manager announces itself as (examine the Overlord logs to see what your Middle Manager announces itself as).|`{}`| +|`strong`|When `true` tasks for a datasource must be assigned to affinity-mapped Middle Managers. Tasks remain queued until a slot becomes available. When `false`, Druid may assign tasks for a datasource to other Middle Managers when affinity-mapped Middle Managers are unavailable to run queued tasks.|false| ###### workerCategorySpec @@ -1381,14 +1381,14 @@ field. If not provided, the default is to not use it at all. 
|Property|Description|Default| |--------|-----------|-------| |`categoryMap`|A JSON map object mapping a task type String name to a [CategoryConfig](#categoryconfig) object, by which you can specify category config for different task type.|`{}`| -|`strong`|With weak workerCategorySpec (the default), tasks for a dataSource may be assigned to other MiddleManagers if the MiddleManagers specified in `categoryMap` are not able to run all pending tasks in the queue for that dataSource. With strong workerCategorySpec, tasks for a dataSource will only ever be assigned to their specified MiddleManagers, and will wait in the pending queue if necessary.|false| +|`strong`|With weak workerCategorySpec (the default), tasks for a dataSource may be assigned to other Middle Managers if the Middle Managers specified in `categoryMap` are not able to run all pending tasks in the queue for that dataSource. With strong workerCategorySpec, tasks for a dataSource will only ever be assigned to their specified Middle Managers, and will wait in the pending queue if necessary.|false| ###### CategoryConfig |Property|Description|Default| |--------|-----------|-------| |`defaultCategory`|Specify default category for a task type.|null| -|`categoryAffinity`|A JSON map object mapping a datasource String name to a category String name of the MiddleManager. If category isn't specified for a datasource, then using the `defaultCategory`. If no specified category and the `defaultCategory` is also null, then tasks can run on any available MiddleManagers.|null| +|`categoryAffinity`|A JSON map object mapping a datasource String name to a category String name of the Middle Manager. If category isn't specified for a datasource, then using the `defaultCategory`. If no specified category and the `defaultCategory` is also null, then tasks can run on any available Middle Managers.|null| ##### Autoscaler @@ -1409,15 +1409,15 @@ For GCE's properties, please refer to the [gce-extensions](../development/extens ## Data server -This section contains the configuration options for the services that reside on Data servers (MiddleManagers/Peons and Historicals) in the suggested [three-server configuration](../design/architecture.md#druid-servers). +This section contains the configuration options for the services that reside on Data servers (Middle Managers/Peons and Historicals) in the suggested [three-server configuration](../design/architecture.md#druid-servers). Configuration options for the [Indexer process](../design/indexer.md) are also provided here. -### MiddleManager and Peons +### Middle Manager and Peon -These MiddleManager and Peon configurations can be defined in the `middleManager/runtime.properties` file. +These Middle Manager and Peon configurations can be defined in the `middleManager/runtime.properties` file. -#### MiddleManager service config +#### Middle Manager service config |Property|Description|Default| |--------|-----------|-------| @@ -1427,14 +1427,14 @@ These MiddleManager and Peon configurations can be defined in the `middleManager |`druid.tlsPort`|TLS port for HTTPS connector, if [druid.enableTlsPort](../operations/tls-support.md) is set then this config will be used. If `druid.host` contains port then that port will be ignored. This should be a non-negative Integer.|8291| |`druid.service`|The name of the service. 
This is used as a dimension when emitting metrics and alerts to differentiate between the various services|`druid/middlemanager`| -#### MiddleManager configuration +#### Middle Manager configuration -MiddleManagers pass their configurations down to their child peons. The MiddleManager requires the following configs: +Middle Managers pass their configurations down to their child peons. The Middle Manager requires the following configs: |Property|Description|Default| |--------|-----------|-------| |`druid.indexer.runner.allowedPrefixes`|Whitelist of prefixes for configs that can be passed down to child peons.|`com.metamx`, `druid`, `org.apache.druid`, `user.timezone`, `file.encoding`, `java.io.tmpdir`, `hadoop`| -|`druid.indexer.runner.compressZnodes`|Indicates whether or not the MiddleManagers should compress Znodes.|true| +|`druid.indexer.runner.compressZnodes`|Indicates whether or not the Middle Managers should compress Znodes.|true| |`druid.indexer.runner.classpath`|Java classpath for the peon.|`System.getProperty("java.class.path")`| |`druid.indexer.runner.javaCommand`|Command required to execute java.|java| |`druid.indexer.runner.javaOpts`|_DEPRECATED_ A string of -X Java options to pass to the peon's JVM. Quotable parameters or parameters with spaces are encouraged to use javaOptsArray|`''`| @@ -1444,16 +1444,16 @@ MiddleManagers pass their configurations down to their child peons. The MiddleMa |`druid.indexer.runner.endPort`|Ending port used for Peon services, should be greater than or equal to `druid.indexer.runner.startPort` and less than 65536.|65535| |`druid.indexer.runner.ports`|A JSON array of integers to specify ports that used for Peon services. If provided and non-empty, ports for Peon services will be chosen from these ports. And `druid.indexer.runner.startPort/druid.indexer.runner.endPort` will be completely ignored.|`[]`| |`druid.worker.ip`|The IP of the worker.|`localhost`| -|`druid.worker.version`|Version identifier for the MiddleManager. The version number is a string. This affects the expected behavior during certain operations like comparison against `druid.indexer.runner.minWorkerVersion`. Specifically, the version comparison follows dictionary order. Use ISO8601 date format for the version to accommodate date comparisons.|0| -|`druid.worker.capacity`|Maximum number of tasks the MiddleManager can accept.|Number of CPUs on the machine - 1| +|`druid.worker.version`|Version identifier for the Middle Manager. The version number is a string. This affects the expected behavior during certain operations like comparison against `druid.indexer.runner.minWorkerVersion`. Specifically, the version comparison follows dictionary order. Use ISO8601 date format for the version to accommodate date comparisons.|0| +|`druid.worker.capacity`|Maximum number of tasks the Middle Manager can accept.|Number of CPUs on the machine - 1| |`druid.worker.baseTaskDirs`|List of base temporary working directories, one of which is assigned per task in a round-robin fashion. This property can be used to allow usage of multiple disks for indexing. This property is recommended in place of and takes precedence over `${druid.indexer.task.baseTaskDir}`. If this configuration is not set, `${druid.indexer.task.baseTaskDir}` is used. For example, `druid.worker.baseTaskDirs=[\"PATH1\",\"PATH2\",...]`.|null| |`druid.worker.baseTaskDirSize`|The total amount of bytes that can be used by tasks on any single task dir. 
This value is treated symmetrically across all directories, that is, if this is 500 GB and there are 3 `baseTaskDirs`, then each of those task directories is assumed to allow for 500 GB to be used and a total of 1.5 TB will potentially be available across all tasks. The actual amount of memory assigned to each task is discussed in [Configuring task storage sizes](../ingestion/tasks.md#configuring-task-storage-sizes)|`Long.MAX_VALUE`| -|`druid.worker.category`|A string to name the category that the MiddleManager node belongs to.|`_default_worker_category`| +|`druid.worker.category`|A string to name the category that the Middle Manager node belongs to.|`_default_worker_category`| |`druid.indexer.fork.property.druid.centralizedDatasourceSchema.enabled`| This config should be set when [Centralized Datasource Schema](#centralized-datasource-schema) feature is enabled. |false| #### Peon processing -Processing properties set on the MiddleManager are passed through to Peons. +Processing properties set on the Middle Manager are passed through to Peons. |Property|Description|Default| |--------|-----------|-------| @@ -1464,7 +1464,7 @@ Processing properties set on the MiddleManager are passed through to Peons. |`druid.processing.numThreads`|The number of processing threads to have available for parallel processing of segments. Our rule of thumb is `num_cores - 1`, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value `1`.|Number of cores - 1 (or 1)| |`druid.processing.fifo`|Enables the processing queue to treat tasks of equal priority in a FIFO manner.|`true`| |`druid.processing.tmpDir`|Path where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default `java.io.tmpdir` path.|path represented by `java.io.tmpdir`| -|`druid.processing.intermediaryData.storage.type`|Storage type for intermediary segments of data shuffle between native parallel index tasks.
Set to `local` to store segment files in the local storage of the MiddleManager or Indexer.
Set to `deepstore` to use configured deep storage for better fault tolerance during rolling updates. When the storage type is `deepstore`, Druid stores the data in the `shuffle-data` directory under the configured deep storage path. Druid does not support automated cleanup for the `shuffle-data` directory. You can set up cloud storage lifecycle rules for automated cleanup of data at the `shuffle-data` prefix location.|`local`| +|`druid.processing.intermediaryData.storage.type`|Storage type for intermediary segments of data shuffle between native parallel index tasks.
Set to `local` to store segment files in the local storage of the Middle Manager or Indexer.
Set to `deepstore` to use configured deep storage for better fault tolerance during rolling updates. When the storage type is `deepstore`, Druid stores the data in the `shuffle-data` directory under the configured deep storage path. Druid does not support automated cleanup for the `shuffle-data` directory. You can set up cloud storage lifecycle rules for automated cleanup of data at the `shuffle-data` prefix location.|`local`| The amount of direct memory needed by Druid is at least `druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1)`. You can @@ -1490,7 +1490,7 @@ See [cache configuration](#cache-configuration) for how to configure cache setti #### Additional Peon configuration -Although Peons inherit the configurations of their parent MiddleManagers, explicit child Peon configs in MiddleManager can be set by prefixing them with: +Although Peons inherit the configurations of their parent Middle Managers, explicit child Peon configs in Middle Manager can be set by prefixing them with: ```properties druid.indexer.fork.property @@ -1506,9 +1506,9 @@ Additional Peon configs include: |`druid.indexer.task.defaultHadoopCoordinates`|Hadoop version to use with HadoopIndexTasks that do not request a particular version.|`org.apache.hadoop:hadoop-client-api:3.3.6`, `org.apache.hadoop:hadoop-client-runtime:3.3.6`| |`druid.indexer.task.defaultRowFlushBoundary`|Highest row count before persisting to disk. Used for indexing generating tasks.|75000| |`druid.indexer.task.directoryLockTimeout`|Wait this long for zombie Peons to exit before giving up on their replacements.|PT10M| -|`druid.indexer.task.gracefulShutdownTimeout`|Wait this long on MiddleManager restart for restorable tasks to gracefully exit.|PT5M| +|`druid.indexer.task.gracefulShutdownTimeout`|Wait this long on Middle Manager restart for restorable tasks to gracefully exit.|PT5M| |`druid.indexer.task.hadoopWorkingPath`|Temporary working directory for Hadoop tasks.|`/tmp/druid-indexing`| -|`druid.indexer.task.restoreTasksOnRestart`|If true, MiddleManagers will attempt to stop tasks gracefully on shutdown and restore them on restart.|false| +|`druid.indexer.task.restoreTasksOnRestart`|If true, Middle Managers will attempt to stop tasks gracefully on shutdown and restore them on restart.|false| |`druid.indexer.task.ignoreTimestampSpecForDruidInputSource`|If true, tasks using the [Druid input source](../ingestion/input-sources.md) will ignore the provided timestampSpec, and will use the `__time` column of the input datasource. This option is provided for compatibility with ingestion specs written before Druid 0.22.0.|false| |`druid.indexer.task.storeEmptyColumns`|Boolean value for whether or not to store empty columns during ingestion. When set to true, Druid stores every column specified in the [`dimensionsSpec`](../ingestion/ingestion-spec.md#dimensionsspec). If you use the string-based schemaless ingestion and don't specify any dimensions to ingest, you must also set [`includeAllDimensions`](../ingestion/ingestion-spec.md#dimensionsspec) for Druid to store empty columns.

If you set `storeEmptyColumns` to false, Druid SQL queries referencing empty columns will fail. If you intend to leave `storeEmptyColumns` disabled, you should either ingest placeholder data for empty columns or else not query on empty columns.

You can overwrite this configuration by setting `storeEmptyColumns` in the [task context](../ingestion/tasks.md#context-parameters).|true| |`druid.indexer.task.tmpStorageBytesPerTask`|Maximum number of bytes per task to be used to store temporary files on disk. This config is generally intended for internal usage. Attempts to set it are very likely to be overwritten by the TaskRunner that executes the task, so be sure of what you expect to happen before directly adjusting this configuration parameter. The config is documented here primarily to provide an understanding of what it means if/when someone sees that it has been set. A value of -1 disables this limit. |-1| @@ -2007,9 +2007,9 @@ See [cache configuration](#cache-configuration) for how to configure cache setti ## Cache configuration -This section describes caching configuration that is common to Broker, Historical, and MiddleManager/Peon processes. +This section describes caching configuration that is common to Broker, Historical, and Middle Manager/Peon processes. -Caching could optionally be enabled on the Broker, Historical, and MiddleManager/Peon processes. See +Caching could optionally be enabled on the Broker, Historical, and Middle Manager/Peon processes. See [Broker](#broker-caching), [Historical](#historical-caching), and [Peon](#peon-caching) configuration options for how to enable it for different processes. @@ -2112,7 +2112,7 @@ If there is an L1 miss and L2 hit, it will also populate L1. ## General query configuration -This section describes configurations that control behavior of Druid's query types, applicable to Broker, Historical, and MiddleManager processes. +This section describes configurations that control behavior of Druid's query types, applicable to Broker, Historical, and Middle Manager processes. ### Overriding default query context values @@ -2163,7 +2163,7 @@ context). If query does have `maxQueuedBytes` in the context, then that value is ### GroupBy query config -This section describes the configurations for groupBy queries. You can set the runtime properties in the `runtime.properties` file on Broker, Historical, and MiddleManager processes. You can set the query context parameters through the [query context](../querying/query-context.md). +This section describes the configurations for groupBy queries. You can set the runtime properties in the `runtime.properties` file on Broker, Historical, and Middle Manager processes. You can set the query context parameters through the [query context](../querying/query-context.md). Supported runtime properties: diff --git a/docs/design/architecture.md b/docs/design/architecture.md index 188a0b441644..8887308acd05 100644 --- a/docs/design/architecture.md +++ b/docs/design/architecture.md @@ -40,8 +40,8 @@ Druid has several types of services: * [Broker](../design/broker.md) handles queries from external clients. * [Router](../design/router.md) routes requests to Brokers, Coordinators, and Overlords. * [Historical](../design/historical.md) stores queryable data. -* [MiddleManager](../design/middlemanager.md) and [Peon](../design/peons.md) ingest data. -* [Indexer](../design/indexer.md) serves an alternative to the MiddleManager + Peon task execution system. +* [Middle Manager](../design/middlemanager.md) and [Peon](../design/peons.md) ingest data. +* [Indexer](../design/indexer.md) serves an alternative to the Middle Manager + Peon task execution system. 
You can view services in the **Services** tab in the web console: @@ -63,7 +63,7 @@ Master servers divide operations between Coordinator and Overlord services. #### Overlord service -[Overlord](../design/overlord.md) services watch over the MiddleManager services on the Data servers and are the controllers of data ingestion into Druid. They are responsible for assigning ingestion tasks to MiddleManagers and for coordinating segment publishing. +[Overlord](../design/overlord.md) services watch over the Middle Manager services on the Data servers and are the controllers of data ingestion into Druid. They are responsible for assigning ingestion tasks to Middle Managers and for coordinating segment publishing. ### Query server @@ -73,7 +73,7 @@ Query servers divide operations between Broker and Router services. #### Broker service -[Broker](../design/broker.md) services receive queries from external clients and forward those queries to Data servers. When Brokers receive results from those subqueries, they merge those results and return them to the caller. Typically, you query Brokers rather than querying Historical or MiddleManager services on Data servers directly. +[Broker](../design/broker.md) services receive queries from external clients and forward those queries to Data servers. When Brokers receive results from those subqueries, they merge those results and return them to the caller. Typically, you query Brokers rather than querying Historical or Middle Manager services on Data servers directly. #### Router service @@ -85,30 +85,30 @@ The Router service also runs the [web console](../operations/web-console.md), a A Data server executes ingestion jobs and stores queryable data. -Data servers divide operations between Historical and MiddleManager services. +Data servers divide operations between Historical and Middle Manager services. #### Historical service [**Historical**](../design/historical.md) services handle storage and querying on historical data, including any streaming data that has been in the system long enough to be committed. Historical services download segments from deep storage and respond to queries about these segments. They don't accept writes. -#### MiddleManager service +#### Middle Manager service -[**MiddleManager**](../design/middlemanager.md) services handle ingestion of new data into the cluster. They are responsible +[**Middle Manager**](../design/middlemanager.md) services handle ingestion of new data into the cluster. They are responsible for reading from external data sources and publishing new Druid segments. ##### Peon service -[**Peon**](../design/peons.md) services are task execution engines spawned by MiddleManagers. Each Peon runs a separate JVM and is responsible for executing a single task. Peons always run on the same host as the MiddleManager that spawned them. +[**Peon**](../design/peons.md) services are task execution engines spawned by Middle Managers. Each Peon runs a separate JVM and is responsible for executing a single task. Peons always run on the same host as the Middle Manager that spawned them. #### Indexer service (optional) -[**Indexer**](../design/indexer.md) services are an alternative to MiddleManagers and Peons. Instead of +[**Indexer**](../design/indexer.md) services are an alternative to Middle Managers and Peons. Instead of forking separate JVM processes per-task, the Indexer runs tasks as individual threads within a single JVM process. 
-The Indexer is designed to be easier to configure and deploy compared to the MiddleManager + Peon system and to better enable resource sharing across tasks. The Indexer is a newer feature and is currently designated [experimental](../development/experimental.md) due to the fact that its memory management system is still under
+The Indexer is designed to be easier to configure and deploy compared to the Middle Manager + Peon system and to better enable resource sharing across tasks. The Indexer is a newer feature and is currently designated [experimental](../development/experimental.md) due to the fact that its memory management system is still under
 development. It will continue to mature in future versions of Druid.
 
-Typically, you would deploy either MiddleManagers or Indexers, but not both.
+Typically, you would deploy either Middle Managers or Indexers, but not both.
 
 ## Colocation of services
 
@@ -126,11 +126,11 @@ In clusters with very high segment counts, it can make sense to separate the Coo
 
 You can run the Coordinator and Overlord services as a single combined service by setting the `druid.coordinator.asOverlord.enabled` property. For more information, see [Coordinator Operation](../configuration/index.md#coordinator-operation).
 
-### Historicals and MiddleManagers
+### Historicals and Middle Managers
 
-With higher levels of ingestion or query load, it can make sense to deploy the Historical and MiddleManager services on separate hosts to to avoid CPU and memory contention.
+With higher levels of ingestion or query load, it can make sense to deploy the Historical and Middle Manager services on separate hosts to avoid CPU and memory contention.
 
-The Historical service also benefits from having free memory for memory mapped segments, which can be another reason to deploy the Historical and MiddleManager services separately.
+The Historical service also benefits from having free memory for memory mapped segments, which can be another reason to deploy the Historical and Middle Manager services separately.
 
 ## External dependencies
 
diff --git a/docs/design/indexer.md b/docs/design/indexer.md
index ae9254b9cc20..9d606d1c9a4f 100644
--- a/docs/design/indexer.md
+++ b/docs/design/indexer.md
@@ -28,9 +28,9 @@ sidebar_label: "Indexer"
 Its memory management system is still under development and will be significantly enhanced in later releases.
 :::
 
-The Apache Druid Indexer service is an alternative to the MiddleManager + Peon task execution system. Instead of forking a separate JVM process per-task, the Indexer runs tasks as separate threads within a single JVM process.
+The Apache Druid Indexer service is an alternative to the Middle Manager + Peon task execution system. Instead of forking a separate JVM process per-task, the Indexer runs tasks as separate threads within a single JVM process.
 
-The Indexer is designed to be easier to configure and deploy compared to the MiddleManager + Peon system and to better enable resource sharing across tasks.
+The Indexer is designed to be easier to configure and deploy compared to the Middle Manager + Peon system and to better enable resource sharing across tasks.
 
 ## Configuration
 
For Apache Druid Indexer service configuration, see [Indexer Configuration](../c
 
 ## HTTP endpoints
 
-The Indexer service shares the same HTTP endpoints as the [MiddleManager](../api-reference/service-status-api.md#middlemanager).
+The Indexer service shares the same HTTP endpoints as the [Middle Manager](../api-reference/service-status-api.md#middle-manager).
## Running @@ -73,7 +73,7 @@ This global limit is evenly divided across the number of task slots configured b To apply the per-task heap limit, the Indexer overrides `maxBytesInMemory` in task tuning configurations, that is ignoring the default value or any user configured value. It also overrides `maxRowsInMemory` to an essentially unlimited value: the Indexer does not support row limits. -By default, `druid.worker.globalIngestionHeapLimitBytes` is set to 1/6th of the available JVM heap. This default is chosen to align with the default value of `maxBytesInMemory` in task tuning configs when using the MiddleManager + Peon system, which is also 1/6th of the JVM heap. +By default, `druid.worker.globalIngestionHeapLimitBytes` is set to 1/6th of the available JVM heap. This default is chosen to align with the default value of `maxBytesInMemory` in task tuning configs when using the Middle Manager + Peon system, which is also 1/6th of the JVM heap. The peak usage for rows held in heap memory relates to the interaction between the `maxBytesInMemory` and `maxPendingPersists` properties in the task tuning configs. When the amount of row data held in-heap by a task reaches the limit specified by `maxBytesInMemory`, a task will persist the in-heap row data. After the persist has been started, the task can again ingest up to `maxBytesInMemory` bytes worth of row data while the persist is running. diff --git a/docs/design/indexing-service.md b/docs/design/indexing-service.md index 537faa9a405d..d7dde33ecdf4 100644 --- a/docs/design/indexing-service.md +++ b/docs/design/indexing-service.md @@ -27,8 +27,8 @@ The Apache Druid indexing service is a highly-available, distributed service tha Indexing [tasks](../ingestion/tasks.md) are responsible for creating and [killing](../ingestion/tasks.md#kill) Druid [segments](../design/segments.md). -The indexing service is composed of three main components: [Peons](../design/peons.md) that can run a single task, [MiddleManagers](../design/middlemanager.md) that manage Peons, and an [Overlord](../design/overlord.md) that manages task distribution to MiddleManagers. -Overlords and MiddleManagers may run on the same process or across multiple processes, while MiddleManagers and Peons always run on the same process. +The indexing service is composed of three main components: [Peons](../design/peons.md) that can run a single task, [Middle Managers](../design/middlemanager.md) that manage Peons, and an [Overlord](../design/overlord.md) that manages task distribution to Middle Managers. +Overlords and Middle Managers may run on the same process or across multiple processes, while Middle Managers and Peons always run on the same process. Tasks are managed using API endpoints on the Overlord service. Please see [Tasks API](../api-reference/tasks-api.md) for more information. diff --git a/docs/design/metadata-storage.md b/docs/design/metadata-storage.md index 4f66115916d2..8071753f3e07 100644 --- a/docs/design/metadata-storage.md +++ b/docs/design/metadata-storage.md @@ -149,7 +149,7 @@ parameters across the cluster at runtime. ### Task-related tables -Task-related tables are created and used by the [Overlord](../design/overlord.md) and [MiddleManager](../design/middlemanager.md) when managing tasks. +Task-related tables are created and used by the [Overlord](../design/overlord.md) and [Middle Manager](../design/middlemanager.md) when managing tasks. 
### Audit table diff --git a/docs/design/middlemanager.md b/docs/design/middlemanager.md index 28738295a058..9037b56a6e75 100644 --- a/docs/design/middlemanager.md +++ b/docs/design/middlemanager.md @@ -1,7 +1,7 @@ --- id: middlemanager -title: "MiddleManager service" -sidebar_label: "MiddleManager" +title: "Middle Manager service" +sidebar_label: "Middle Manager" --- -The MiddleManager service is a worker service that executes submitted tasks. MiddleManagers forward tasks to [Peons](../design/peons.md) that run in separate JVMs. -Druid uses separate JVMs for tasks to isolate resources and logs. Each Peon is capable of running only one task at a time, wheres a MiddleManager may have multiple Peons. +The Middle Manager service is a worker service that executes submitted tasks. Middle Managers forward tasks to [Peons](../design/peons.md) that run in separate JVMs. +Druid uses separate JVMs for tasks to isolate resources and logs. Each Peon is capable of running only one task at a time, whereas a Middle Manager may have multiple Peons. ## Configuration -For Apache Druid MiddleManager service configuration, see [MiddleManager and Peons](../configuration/index.md#middlemanager-and-peons). +For Apache Druid Middle Manager service configuration, see [Middle Manager and Peons](../configuration/index.md#middle-manager-and-peon). -For basic tuning guidance for the MiddleManager service, see [Basic cluster tuning](../operations/basic-cluster-tuning.md#middlemanager). +For basic tuning guidance for the Middle Manager service, see [Basic cluster tuning](../operations/basic-cluster-tuning.md#middle-manager). ## HTTP endpoints -For a list of API endpoints supported by the MiddleManager, see the [Service status API reference](../api-reference/service-status-api.md#middlemanager). +For a list of API endpoints supported by the Middle Manager, see the [Service status API reference](../api-reference/service-status-api.md#middle-manager). ## Running diff --git a/docs/design/overlord.md b/docs/design/overlord.md index 83be16db789e..d8458e750a95 100644 --- a/docs/design/overlord.md +++ b/docs/design/overlord.md @@ -25,8 +25,8 @@ sidebar_label: "Overlord" The Overlord service is responsible for accepting tasks, coordinating task distribution, creating locks around tasks, and returning statuses to callers. The Overlord can be configured to run in one of two modes - local or remote (local being default). -In local mode, the Overlord is also responsible for creating Peons for executing tasks. When running the Overlord in local mode, all MiddleManager and Peon configurations must be provided as well. -Local mode is typically used for simple workflows. In remote mode, the Overlord and MiddleManager are run in separate services and you can run each on a different server. +In local mode, the Overlord is also responsible for creating Peons for executing tasks. When running the Overlord in local mode, all Middle Manager and Peon configurations must be provided as well. +Local mode is typically used for simple workflows. In remote mode, the Overlord and Middle Manager are run in separate services and you can run each on a different server. This mode is recommended if you intend to use the indexing service as the single endpoint for all Druid indexing. ## Configuration @@ -41,7 +41,7 @@ For a list of API endpoints supported by the Overlord, please see the [Service s ## Blacklisted workers -If a MiddleManager has task failures above a threshold, the Overlord will blacklist these MiddleManagers. 
No more than 20% of the MiddleManagers can be blacklisted. Blacklisted MiddleManagers will be periodically whitelisted. +If a Middle Manager has task failures above a threshold, the Overlord will blacklist these Middle Managers. No more than 20% of the Middle Managers can be blacklisted. Blacklisted Middle Managers will be periodically whitelisted. The following variables can be used to set the threshold and blacklist timeouts. @@ -54,6 +54,6 @@ druid.indexer.runner.maxPercentageBlacklistWorkers ## Autoscaling -The autoscaling mechanisms currently in place are tightly coupled with our deployment infrastructure but the framework should be in place for other implementations. We are highly open to new implementations or extensions of the existing mechanisms. In our own deployments, MiddleManager services are Amazon AWS EC2 nodes and they are provisioned to register themselves in a [galaxy](https://github.com/ning/galaxy) environment. +The autoscaling mechanisms currently in place are tightly coupled with our deployment infrastructure but the framework should be in place for other implementations. We are highly open to new implementations or extensions of the existing mechanisms. In our own deployments, Middle Manager services are Amazon AWS EC2 nodes and they are provisioned to register themselves in a [galaxy](https://github.com/ning/galaxy) environment. -If autoscaling is enabled, new MiddleManagers may be added when a task has been in pending state for too long. MiddleManagers may be terminated if they have not run any tasks for a period of time. +If autoscaling is enabled, new Middle Managers may be added when a task has been in pending state for too long. Middle Managers may be terminated if they have not run any tasks for a period of time. diff --git a/docs/design/peons.md b/docs/design/peons.md index 8c2a73a069aa..b31bd8ec1a81 100644 --- a/docs/design/peons.md +++ b/docs/design/peons.md @@ -23,22 +23,22 @@ sidebar_label: "Peon" ~ under the License. --> -The Peon service is a task execution engine spawned by the MiddleManager. Each Peon runs a separate JVM and is responsible for executing a single task. Peons always run on the same host as the MiddleManager that spawned them. +The Peon service is a task execution engine spawned by the Middle Manager. Each Peon runs a separate JVM and is responsible for executing a single task. Peons always run on the same host as the Middle Manager that spawned them. ## Configuration For Apache Druid Peon configuration, see [Peon Query Configuration](../configuration/index.md#peon-query-configuration) and [Additional Peon Configuration](../configuration/index.md#additional-peon-configuration). -For basic tuning guidance for MiddleManager tasks, see [Basic cluster tuning](../operations/basic-cluster-tuning.md#task-configurations). +For basic tuning guidance for Middle Manager tasks, see [Basic cluster tuning](../operations/basic-cluster-tuning.md#task-configurations). ## HTTP endpoints -Peons run a single task in a single JVM. The MiddleManager is responsible for creating Peons for running tasks. +Peons run a single task in a single JVM. The Middle Manager is responsible for creating Peons for running tasks. Peons should rarely run on their own. ## Running -The Peon should seldom run separately from the MiddleManager, except for development purposes. +The Peon should seldom run separately from the Middle Manager, except for development purposes. 
``` org.apache.druid.cli.Main internal peon diff --git a/docs/design/storage.md b/docs/design/storage.md index 85914e7ada3e..fd9783c92839 100644 --- a/docs/design/storage.md +++ b/docs/design/storage.md @@ -28,7 +28,7 @@ Druid stores data in datasources, which are similar to tables in a traditional R ![Segment timeline](../assets/druid-timeline.png) -A datasource may have anywhere from just a few segments, up to hundreds of thousands and even millions of segments. Each segment is created by a MiddleManager as mutable and uncommitted. Data is queryable as soon as it is added to an uncommitted segment. The segment building process accelerates later queries by producing a data file that is compact and indexed: +A datasource may have anywhere from just a few segments, up to hundreds of thousands and even millions of segments. Each segment is created by a Middle Manager as mutable and uncommitted. Data is queryable as soon as it is added to an uncommitted segment. The segment building process accelerates later queries by producing a data file that is compact and indexed: - Conversion to columnar format - Indexing with bitmap indexes @@ -37,7 +37,7 @@ A datasource may have anywhere from just a few segments, up to hundreds of thous - Bitmap compression for bitmap indexes - Type-aware compression for all columns -Periodically, segments are committed and published to [deep storage](deep-storage.md), become immutable, and move from MiddleManagers to the Historical services. An entry about the segment is also written to the [metadata store](metadata-storage.md). This entry is a self-describing bit of metadata about the segment, including things like the schema of the segment, its size, and its location on deep storage. These entries tell the Coordinator what data is available on the cluster. +Periodically, segments are committed and published to [deep storage](deep-storage.md), become immutable, and move from Middle Managers to the Historical services. An entry about the segment is also written to the [metadata store](metadata-storage.md). This entry is a self-describing bit of metadata about the segment, including things like the schema of the segment, its size, and its location on deep storage. These entries tell the Coordinator what data is available on the cluster. For details on the segment file format, see [segment files](segments.md). diff --git a/docs/design/zookeeper.md b/docs/design/zookeeper.md index eb8bea577412..a01f12ed74d9 100644 --- a/docs/design/zookeeper.md +++ b/docs/design/zookeeper.md @@ -41,7 +41,7 @@ The operations that happen over ZK are 1. [Coordinator](../design/coordinator.md) leader election 2. Segment "publishing" protocol from [Historical](../design/historical.md) 3. [Overlord](../design/overlord.md) leader election -4. [Overlord](../design/overlord.md) and [MiddleManager](../design/middlemanager.md) task management +4. [Overlord](../design/overlord.md) and [Middle Manager](../design/middlemanager.md) task management ## Coordinator Leader Election diff --git a/docs/development/extensions-contrib/gce-extensions.md b/docs/development/extensions-contrib/gce-extensions.md index 17a69c72f233..2a8da661547e 100644 --- a/docs/development/extensions-contrib/gce-extensions.md +++ b/docs/development/extensions-contrib/gce-extensions.md @@ -32,7 +32,7 @@ of GCE (MIG from now on). This choice has been made to ease the configuration of management. For this reason, in order to use this extension, the user must have created -1. 
An instance template with the right machine type and image to bu used to run the MiddleManager +1. An instance template with the right machine type and image to be used to run the Middle Manager 2. A MIG that has been configured to use the instance template created in the point above Moreover, in order to be able to rescale the machines in the MIG, the Overlord must run with a service account @@ -98,6 +98,6 @@ for parameters other than the ones specified here, such as `selectStrategy` etc. - The module internally uses the [ListManagedInstances](https://cloud.google.com/compute/docs/reference/rest/v1/instanceGroupManagers/listManagedInstances) call from the API and, while the documentation of the API states that the call can be paged through using the `pageToken` argument, the responses to such call do not provide any `nextPageToken` to set such parameter. This means - that the extension can operate safely with a maximum of 500 MiddleManagers instances at any time (the maximum number + that the extension can operate safely with a maximum of 500 Middle Manager instances at any time (the maximum number of instances to be returned for each call). \ No newline at end of file diff --git a/docs/development/extensions-contrib/prometheus.md index 37bcbd4e2e24..ddbd9da46437 100644 --- a/docs/development/extensions-contrib/prometheus.md +++ b/docs/development/extensions-contrib/prometheus.md @@ -51,7 +51,7 @@ All the configuration parameters for the Prometheus emitter are under `druid.emi ### Ports for colocated Druid processes -In certain instances, Druid processes may be colocated on the same host. For example, the Broker and Router may share the same server. Other colocated processes include the Historical and MiddleManager or the Coordinator and Overlord. When you have colocated processes, specify `druid.emitter.prometheus.port` separately for each process on each host. For example, even if the Broker and Router share the same host, the Broker runtime properties and the Router runtime properties each need to list `druid.emitter.prometheus.port`, and the port value for both must be different. +In certain instances, Druid processes may be colocated on the same host. For example, the Broker and Router may share the same server. Other colocated processes include the Historical and Middle Manager or the Coordinator and Overlord. When you have colocated processes, specify `druid.emitter.prometheus.port` separately for each process on each host. For example, even if the Broker and Router share the same host, the Broker runtime properties and the Router runtime properties each need to list `druid.emitter.prometheus.port`, and the port value for both must be different. ### Override properties for Peon Tasks diff --git a/docs/development/extensions-core/s3.md index 9690c27e4cab..2565451c2725 100644 --- a/docs/development/extensions-core/s3.md +++ b/docs/development/extensions-core/s3.md @@ -117,7 +117,7 @@ The AWS SDK requires that a target region be specified. You can set these by us For example, to set the region to 'us-east-1' through system properties: * Add `-Daws.region=us-east-1` to the `jvm.config` file for all Druid services. -* Add `-Daws.region=us-east-1` to `druid.indexer.runner.javaOpts` in [Middle Manager configuration](../../configuration/index.md#middlemanager-configuration) so that the property will be passed to Peon (worker) processes. 
+* Add `-Daws.region=us-east-1` to `druid.indexer.runner.javaOpts` in [Middle Manager configuration](../../configuration/index.md#middle-manager-configuration) so that the property will be passed to Peon (worker) processes. ### Connecting to S3 configuration diff --git a/docs/ingestion/hadoop.md b/docs/ingestion/hadoop.md index 96373e275170..f238be3405ad 100644 --- a/docs/ingestion/hadoop.md +++ b/docs/ingestion/hadoop.md @@ -149,7 +149,7 @@ For example, using the static input paths: ``` You can also read from cloud storage such as Amazon S3 or Google Cloud Storage. -To do so, you need to install the necessary library under Druid's classpath in _all MiddleManager or Indexer processes_. +To do so, you need to install the necessary library under Druid's classpath in _all Middle Manager or Indexer processes_. For S3, you can run the below command to install the [Hadoop AWS module](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/). ```bash @@ -157,7 +157,7 @@ java -classpath "${DRUID_HOME}lib/*" org.apache.druid.cli.Main tools pull-deps - cp ${DRUID_HOME}/hadoop-dependencies/hadoop-aws/${HADOOP_VERSION}/hadoop-aws-${HADOOP_VERSION}.jar ${DRUID_HOME}/extensions/druid-hdfs-storage/ ``` -Once you install the Hadoop AWS module in all MiddleManager and Indexer processes, you can put +Once you install the Hadoop AWS module in all Middle Manager and Indexer processes, you can put your S3 paths in the inputSpec with the below job properties. For more configurations, see the [Hadoop AWS module](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/). @@ -175,8 +175,8 @@ For more configurations, see the [Hadoop AWS module](https://hadoop.apache.org/d ``` For Google Cloud Storage, you need to install [GCS connector jar](https://github.com/GoogleCloudPlatform/bigdata-interop/blob/master/gcs/INSTALL.md) -under `${DRUID_HOME}/hadoop-dependencies` in _all MiddleManager or Indexer processes_. -Once you install the GCS Connector jar in all MiddleManager and Indexer processes, you can put +under `${DRUID_HOME}/hadoop-dependencies` in _all Middle Manager or Indexer processes_. +Once you install the GCS Connector jar in all Middle Manager and Indexer processes, you can put your Google Cloud Storage paths in the inputSpec with the below job properties. For more configurations, see the [instructions to configure Hadoop](https://github.com/GoogleCloudPlatform/bigdata-interop/blob/master/gcs/INSTALL.md#configure-hadoop), [GCS core default](https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/v2.0.0/gcs/conf/gcs-core-default.xml) diff --git a/docs/ingestion/index.md b/docs/ingestion/index.md index d500b1d9bc25..bc189f4b5924 100644 --- a/docs/ingestion/index.md +++ b/docs/ingestion/index.md @@ -27,7 +27,7 @@ Loading data in Druid is called _ingestion_ or _indexing_. When you ingest data your source system and stores it in data files called [_segments_](../design/segments.md). In general, segment files contain a few million rows each. -For most ingestion methods, the Druid [MiddleManager](../design/middlemanager.md) processes or the +For most ingestion methods, the Druid [Middle Manager](../design/middlemanager.md) processes or the [Indexer](../design/indexer.md) processes load your source data. The sole exception is Hadoop-based ingestion, which uses a Hadoop MapReduce job on YARN. 
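For the Google Cloud Storage case above, the docs say where the connector jar must live but stop short of showing the step itself. Below is a minimal sketch, assuming the connector jar has already been downloaded; the jar file name, version, and `DRUID_HOME` value are illustrative assumptions rather than anything stated in the docs.

```bash
# Hypothetical example: place an already-downloaded GCS connector jar under
# ${DRUID_HOME}/hadoop-dependencies on every Middle Manager or Indexer host.
# The jar name/version and DRUID_HOME path below are assumptions, not from the docs.
export DRUID_HOME=/opt/druid
mkdir -p "${DRUID_HOME}/hadoop-dependencies"
cp /tmp/gcs-connector-hadoop3-2.2.0-shaded.jar "${DRUID_HOME}/hadoop-dependencies/"
```

Repeat this on every Middle Manager or Indexer host (or bake the jar into the machine image) before submitting Hadoop tasks that read from Google Cloud Storage.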
diff --git a/docs/ingestion/input-sources.md b/docs/ingestion/input-sources.md index fb8e1f98c91f..c6ba2e5b49e9 100644 --- a/docs/ingestion/input-sources.md +++ b/docs/ingestion/input-sources.md @@ -875,7 +875,7 @@ Each of the SQL queries will be run in its own sub-task and thus for the above e Compared to the other native batch input sources, SQL input source behaves differently in terms of reading the input data. Therefore, consider the following points before using this input source in a production environment: -* During indexing, each sub-task would execute one of the SQL queries and the results are stored locally on disk. The sub-tasks then proceed to read the data from these local input files and generate segments. Presently, there isn’t any restriction on the size of the generated files and this would require the MiddleManagers or Indexers to have sufficient disk capacity based on the volume of data being indexed. +* During indexing, each sub-task would execute one of the SQL queries and the results are stored locally on disk. The sub-tasks then proceed to read the data from these local input files and generate segments. Presently, there isn’t any restriction on the size of the generated files and this would require the Middle Managers or Indexers to have sufficient disk capacity based on the volume of data being indexed. * Filtering the SQL queries based on the intervals specified in the `granularitySpec` can avoid unwanted data being retrieved and stored locally by the indexing sub-tasks. For example, if the `intervals` specified in the `granularitySpec` is `["2013-01-01/2013-01-02"]` and the SQL query is `SELECT * FROM table1`, `SqlInputSource` will read all the data for `table1` based on the query, even though only data between the intervals specified will be indexed into Druid. diff --git a/docs/ingestion/kafka-ingestion.md b/docs/ingestion/kafka-ingestion.md index 7eb393017d1c..5c4dc0856a80 100644 --- a/docs/ingestion/kafka-ingestion.md +++ b/docs/ingestion/kafka-ingestion.md @@ -36,7 +36,7 @@ This topic contains configuration information for the Kafka indexing service sup ## Setup -To use the Kafka indexing service, you must first load the `druid-kafka-indexing-service` extension on both the Overlord and the MiddleManager. See [Loading extensions](../configuration/extensions.md) for more information. +To use the Kafka indexing service, you must first load the `druid-kafka-indexing-service` extension on both the Overlord and the Middle Manager. See [Loading extensions](../configuration/extensions.md) for more information. ## Supervisor spec configuration @@ -426,7 +426,7 @@ For configuration properties shared across all streaming ingestion methods, refe Druid assigns Kafka partitions to each Kafka indexing task. A task writes the events it consumes from Kafka into a single segment for the segment granularity interval until it reaches one of the following limits: `maxRowsPerSegment`, `maxTotalRows`, or `intermediateHandoffPeriod`. At this point, the task creates a new partition for this segment granularity to contain subsequent events. -The Kafka indexing task also does incremental hand-offs. Therefore, segments become available as they are ready and you don't have to wait for all segments until the end of the task duration. When the task reaches one of `maxRowsPerSegment`, `maxTotalRows`, or `intermediateHandoffPeriod`, it hands off all the segments and creates a new set of segments for further events. 
This allows the task to run for longer durations without accumulating old segments locally on MiddleManager services. +The Kafka indexing task also does incremental hand-offs. Therefore, segments become available as they are ready and you don't have to wait for all segments until the end of the task duration. When the task reaches one of `maxRowsPerSegment`, `maxTotalRows`, or `intermediateHandoffPeriod`, it hands off all the segments and creates a new set of segments for further events. This allows the task to run for longer durations without accumulating old segments locally on Middle Manager services. The Kafka indexing service may still produce some small segments. For example, consider the following scenario: - Task duration is 4 hours. diff --git a/docs/ingestion/kinesis-ingestion.md b/docs/ingestion/kinesis-ingestion.md index 8ea2cfd4af35..2a855b7b7ff4 100644 --- a/docs/ingestion/kinesis-ingestion.md +++ b/docs/ingestion/kinesis-ingestion.md @@ -33,7 +33,7 @@ This topic contains configuration information for the Kinesis indexing service s ## Setup -To use the Kinesis indexing service, you must first load the `druid-kinesis-indexing-service` core extension on both the Overlord and the MiddleManager. See [Loading extensions](../configuration/extensions.md#loading-extensions) for more information. +To use the Kinesis indexing service, you must first load the `druid-kinesis-indexing-service` core extension on both the Overlord and the Middle Manager. See [Loading extensions](../configuration/extensions.md#loading-extensions) for more information. Review [Known issues](#known-issues) before deploying the `druid-kinesis-indexing-service` extension to production. @@ -249,7 +249,7 @@ At this point, the task creates a new shard for this segment granularity to cont The Kinesis indexing task also performs incremental hand-offs so that the segments created by the task are not held up until the task duration is over. When the task reaches one of the `maxRowsPerSegment`, `maxTotalRows`, or `intermediateHandoffPeriod` limits, it hands off all the segments and creates a new set of segments for further events. This allows the task to run for longer durations -without accumulating old segments locally on MiddleManager services. +without accumulating old segments locally on Middle Manager services. The Kinesis indexing service may still produce some small segments. For example, consider the following scenario: diff --git a/docs/ingestion/native-batch.md b/docs/ingestion/native-batch.md index 398fea9f69a9..be3219a5003e 100644 --- a/docs/ingestion/native-batch.md +++ b/docs/ingestion/native-batch.md @@ -55,7 +55,7 @@ The parallel task type `index_parallel` is a task for multi-threaded batch index The `index_parallel` task is a supervisor task that orchestrates the whole indexing process. The supervisor task splits the input data and creates worker tasks to process the individual portions of data. -Druid issues the worker tasks to the Overlord. The Overlord schedules and runs the workers on MiddleManagers or Indexers. After a worker task successfully processes the assigned input portion, it reports the resulting segment list to the Supervisor task. +Druid issues the worker tasks to the Overlord. The Overlord schedules and runs the workers on Middle Managers or Indexers. After a worker task successfully processes the assigned input portion, it reports the resulting segment list to the Supervisor task. The Supervisor task periodically checks the status of worker tasks. 
If a task fails, the Supervisor retries the task until the number of retries reaches the configured limit. If all worker tasks succeed, it publishes the reported segments at once and finalizes ingestion. @@ -369,11 +369,11 @@ the Parallel task splits the input data based on the split hint spec and assigns each split to a worker task. Each worker task (type `partial_index_generate`) reads the assigned split, and partitions rows by the time chunk from `segmentGranularity` (primary partition key) in the `granularitySpec` and then by the hash value of `partitionDimensions` (secondary partition key) in the `partitionsSpec`. The partitioned data is stored in local storage of -the [middleManager](../design/middlemanager.md) or the [indexer](../design/indexer.md). +the [Middle Manager](../design/middlemanager.md) or the [indexer](../design/indexer.md). The `partial segment merge` phase is similar to the Reduce phase in MapReduce. The Parallel task spawns a new set of worker tasks (type `partial_index_generic_merge`) to merge the partitioned data created in the previous phase. Here, the partitioned data is shuffled based on -the time chunk and the hash value of `partitionDimensions` to be merged; each worker task reads the data falling in the same time chunk and the same hash value from multiple MiddleManager/Indexer processes and merges them to create the final segments. Finally, they push the final segments to the deep storage at once. +the time chunk and the hash value of `partitionDimensions` to be merged; each worker task reads the data falling in the same time chunk and the same hash value from multiple Middle Manager/Indexer processes and merges them to create the final segments. Finally, they push the final segments to the deep storage at once. ##### Hash partition function @@ -426,12 +426,12 @@ to create partitioned data. Each worker task reads a split created as in the pre partitions rows by the time chunk from the `segmentGranularity` (primary partition key) in the `granularitySpec` and then by the range partitioning found in the previous phase. The partitioned data is stored in local storage of -the [middleManager](../design/middlemanager.md) or the [indexer](../design/indexer.md). +the [Middle Manager](../design/middlemanager.md) or the [indexer](../design/indexer.md). In the `partial segment merge` phase, the parallel index task spawns a new set of worker tasks (type `partial_index_generic_merge`) to merge the partitioned data created in the previous phase. Here, the partitioned data is shuffled based on the time chunk and the value of `partitionDimension`; each worker task reads the segments -falling in the same partition of the same range from multiple MiddleManager/Indexer processes and merges +falling in the same partition of the same range from multiple Middle Manager/Indexer processes and merges them to create the final segments. Finally, they push the final segments to the deep storage. :::info diff --git a/docs/ingestion/streaming.md index f0f777b6c627..942ed46b9821 100644 --- a/docs/ingestion/streaming.md +++ b/docs/ingestion/streaming.md @@ -28,7 +28,7 @@ Apache Druid can consume data streams from the following external streaming sour * Amazon Kinesis through the bundled [Kinesis indexing service](kinesis-ingestion.md) extension. Each indexing service provides real-time data ingestion with exactly-once stream processing guarantee. 
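As the following paragraph notes, both streaming services ship as extensions that must be loaded on the Overlord and the Middle Managers. One quick way to confirm that is to check the shared properties file; this is a hedged sketch that assumes the standard clustered configuration layout.

```bash
# Check that the streaming extensions are listed in the common runtime properties shared
# by the Overlord and the Middle Managers. The conf path is an assumed cluster layout.
grep 'druid.extensions.loadList' conf/druid/cluster/_common/common.runtime.properties
# Expected to include entries such as:
#   druid.extensions.loadList=["druid-kafka-indexing-service", "druid-kinesis-indexing-service"]
```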
-To use either of the streaming ingestion methods, you must first load the associated extension on both the Overlord and the MiddleManager. See [Loading extensions](../configuration/extensions.md#loading-extensions) for more information. +To use either of the streaming ingestion methods, you must first load the associated extension on both the Overlord and the Middle Manager. See [Loading extensions](../configuration/extensions.md#loading-extensions) for more information. Streaming ingestion is controlled by a continuously running [supervisor](supervisor.md). The supervisor oversees the state of indexing tasks to coordinate handoffs, manage failures, and ensure that scalability and replication requirements are maintained. diff --git a/docs/ingestion/supervisor.md b/docs/ingestion/supervisor.md index 70939adb633d..d5293ae581f8 100644 --- a/docs/ingestion/supervisor.md +++ b/docs/ingestion/supervisor.md @@ -393,7 +393,7 @@ For information on how to terminate a supervisor by API, see [Supervisors: Termi ## Capacity planning -Indexing tasks run on MiddleManagers and are limited by the resources available in the MiddleManager cluster. In particular, you should make sure that you have sufficient worker capacity, configured using the +Indexing tasks run on Middle Managers and are limited by the resources available in the Middle Manager cluster. In particular, you should make sure that you have sufficient worker capacity, configured using the `druid.worker.capacity` property, to handle the configuration in the supervisor spec. Note that worker capacity is shared across all types of indexing tasks, so you should plan your worker capacity to handle your total indexing load, such as batch processing, streaming tasks, and merging tasks. If your workers run out of capacity, indexing tasks queue and wait for the next available worker. This may cause queries to return partial results but will not result in data loss, assuming the tasks run before the stream purges those sequence numbers. diff --git a/docs/ingestion/tasks.md b/docs/ingestion/tasks.md index 743a46cc8549..ba93e9c1f004 100644 --- a/docs/ingestion/tasks.md +++ b/docs/ingestion/tasks.md @@ -458,7 +458,7 @@ To enable batched segment allocation on the overlord, set `druid.indexer.tasklo The task context is used for various individual task configuration. Specify task context configurations in the `context` field of the ingestion spec. When configuring [automatic compaction](../data-management/automatic-compaction.md), set the task context configurations in `taskContext` rather than in `context`. -The settings get passed into the `context` field of the compaction tasks issued to MiddleManagers. +The settings get passed into the `context` field of the compaction tasks issued to Middle Managers. The following parameters apply to all task types. @@ -477,18 +477,18 @@ Logs are created by ingestion tasks as they run. You can configure Druid to push Once the task has been submitted to the Overlord it remains `WAITING` for locks to be acquired. Worker slot allocation is then `PENDING` until the task can actually start executing. -The task then starts creating logs in a local directory of the middle manager (or indexer) in a `log` directory for the specific `taskId` at [`druid.worker.baseTaskDirs`](../configuration/index.md#middlemanager-configuration). 
+The task then starts creating logs in a local directory of the middle manager (or indexer) in a `log` directory for the specific `taskId` at [`druid.worker.baseTaskDirs`](../configuration/index.md#middle-manager-configuration). When the task completes - whether it succeeds or fails - the middle manager (or indexer) will push the task log file into the location specified in [`druid.indexer.logs`](../configuration/index.md#task-logging). -Task logs on the Druid web console are retrieved via an [API](../api-reference/service-status-api.md#overlord) on the Overlord. It automatically detects where the log file is, either in the middleManager / indexer or in long-term storage, and passes it back. +Task logs on the Druid web console are retrieved via an [API](../api-reference/service-status-api.md#overlord) on the Overlord. It automatically detects where the log file is, either in the Middle Manager / indexer or in long-term storage, and passes it back. If you don't see the log file in long-term storage, it means either: -1. the middleManager / indexer failed to push the log file to deep storage or -2. the task did not complete. +- the Middle Manager / indexer failed to push the log file to deep storage or +- the task did not complete. -You can check the middleManager / indexer logs locally to see if there was a push failure. If there was not, check the Overlord's own process logs to see why the task failed before it started. +You can check the Middle Manager / indexer logs locally to see if there was a push failure. If there was not, check the Overlord's own process logs to see why the task failed before it started. :::info If you are running the indexing service in remote mode, the task logs must be stored in S3, Azure Blob Store, Google Cloud Storage or HDFS. diff --git a/docs/operations/basic-cluster-tuning.md index 7224276da373..32db4863e666 100644 --- a/docs/operations/basic-cluster-tuning.md +++ b/docs/operations/basic-cluster-tuning.md @@ -176,29 +176,29 @@ To estimate total memory usage of the Broker under these guidelines: - Heap: allocated heap size - Direct Memory: `(druid.processing.numMergeBuffers + 1) * druid.processing.buffer.sizeBytes` -### MiddleManager +### Middle Manager -The MiddleManager is a lightweight task controller/manager that launches Task processes, which perform ingestion work. +The Middle Manager is a lightweight task controller/manager that launches Task processes, which perform ingestion work. -#### MiddleManager heap sizing +#### Middle Manager heap sizing -The MiddleManager itself does not require much resources, you can set the heap to ~128MiB generally. +The Middle Manager itself does not require many resources; you can generally set the heap to ~128MiB. #### SSD storage -We recommend using SSDs for storage on the MiddleManagers, as the Tasks launched by MiddleManagers handle segment data stored on disk. +We recommend using SSDs for storage on the Middle Managers, as the Tasks launched by Middle Managers handle segment data stored on disk. #### Task Count -The number of tasks a MiddleManager can launch is controlled by the `druid.worker.capacity` setting. +The number of tasks a Middle Manager can launch is controlled by the `druid.worker.capacity` setting. The number of workers needed in your cluster depends on how many concurrent ingestion tasks you need to run for your use cases. 
The number of workers that can be launched on a given machine depends on the size of resources allocated per worker and available system resources. -You can allocate more MiddleManager machines to your cluster to add task capacity. +You can allocate more Middle Manager machines to your cluster to add task capacity. #### Task configurations -The following section below describes configuration for Tasks launched by the MiddleManager. The Tasks can be queried and perform ingestion workloads, so they require more resources than the MM. +The following section describes configuration for Tasks launched by the Middle Manager. The Tasks can be queried and perform ingestion workloads, so they require more resources than the MM. ##### Task heap sizing @@ -249,7 +249,7 @@ To estimate total memory usage of a Task under these guidelines: - Heap: `1GiB + (2 * total size of lookup maps)` - Direct Memory: `(druid.processing.numThreads + druid.processing.numMergeBuffers + 1) * druid.processing.buffer.sizeBytes` -The total memory usage of the MiddleManager + Tasks: +The total memory usage of the Middle Manager + Tasks: `MM heap size + druid.worker.capacity * (single task memory usage)` @@ -435,7 +435,7 @@ Additionally, for large JVM heaps, here are a few Garbage Collection efficiency - Mount /tmp on tmpfs. See [The Four Month Bug: JVM statistics cause garbage collection pauses](http://www.evanjones.ca/jvm-mmap-pause.html). -- On Disk-IO intensive processes (e.g., Historical and MiddleManager), GC and Druid logs should be written to a different disk than where data is written. +- On Disk-IO intensive processes (e.g., Historical and Middle Manager), GC and Druid logs should be written to a different disk than where data is written. - Disable [Transparent Huge Pages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html). - Try disabling biased locking by using `-XX:-UseBiasedLocking` JVM flag. See [Logging Stop-the-world Pauses in JVM](https://dzone.com/articles/logging-stop-world-pauses-jvm). @@ -447,7 +447,7 @@ We recommend using UTC timezone for all your events and across your hosts, not j #### SSDs -SSDs are highly recommended for Historical, MiddleManager, and Indexer processes if you are not running a cluster that is entirely in memory. SSDs can greatly mitigate the time required to page data in and out of memory. +SSDs are highly recommended for Historical, Middle Manager, and Indexer processes if you are not running a cluster that is entirely in memory. SSDs can greatly mitigate the time required to page data in and out of memory. #### JBOD vs RAID @@ -455,11 +455,11 @@ Historical processes store large number of segments on Disk and support specifyi #### Swap space -We recommend _not_ using swap space for Historical, MiddleManager, and Indexer processes since due to the large number of memory mapped segment files can lead to poor and unpredictable performance. +We recommend _not_ using swap space for Historical, Middle Manager, and Indexer processes, since the large number of memory mapped segment files can lead to poor and unpredictable performance. #### Linux limits -For Historical, MiddleManager, and Indexer processes (and for really large clusters, Broker processes), you might need to adjust some Linux system limits to account for a large number of open files, a large number of network connections, or a large number of memory mapped files. 
+For Historical, Middle Manager, and Indexer processes (and for really large clusters, Broker processes), you might need to adjust some Linux system limits to account for a large number of open files, a large number of network connections, or a large number of memory mapped files. ##### ulimit @@ -467,4 +467,4 @@ The limit on the number of open files can be set permanently by editing `/etc/se ##### max_map_count -Historical processes and to a lesser extent, MiddleManager and Indexer processes memory map segment files, so depending on the number of segments per server, `/proc/sys/vm/max_map_count` might also need to be adjusted. Depending on the variant of Linux, this might be done via `sysctl` by placing a file in `/etc/sysctl.d/` that sets `vm.max_map_count`. +Historical processes, and to a lesser extent Middle Manager and Indexer processes, memory map segment files, so depending on the number of segments per server, `/proc/sys/vm/max_map_count` might also need to be adjusted. Depending on the variant of Linux, this might be done via `sysctl` by placing a file in `/etc/sysctl.d/` that sets `vm.max_map_count`. diff --git a/docs/operations/other-hadoop.md index f5e5839a907c..ba19a8326435 100644 --- a/docs/operations/other-hadoop.md +++ b/docs/operations/other-hadoop.md @@ -55,7 +55,7 @@ Generally, you should only set one of these parameters, not both. These properties can be set in either one of the following ways: - Using the task definition, e.g. add `"mapreduce.job.classloader": "true"` to the `jobProperties` of the `tuningConfig` of your indexing task (see the [Hadoop batch ingestion documentation](../ingestion/hadoop.md)). -- Using system properties, e.g. on the MiddleManager set `druid.indexer.runner.javaOpts=... -Dhadoop.mapreduce.job.classloader=true` in [Middle Manager configuration](../configuration/index.md#middlemanager-configuration). +- Using system properties, e.g. on the Middle Manager set `druid.indexer.runner.javaOpts=... -Dhadoop.mapreduce.job.classloader=true` in [Middle Manager configuration](../configuration/index.md#middle-manager-configuration). ### Overriding specific classes diff --git a/docs/operations/rolling-updates.md index b9471bc6b8e2..07cf3b4c8fc0 100644 --- a/docs/operations/rolling-updates.md +++ b/docs/operations/rolling-updates.md @@ -30,7 +30,7 @@ following order: 2. Middle Manager and Indexer (if any) 3. Broker 4. Router -5. Overlord (Note that you can upgrade the Overlord before any MiddleManager processes if you use [autoscaling-based replacement](#autoscaling-based-replacement).) +5. Overlord (Note that you can upgrade the Overlord before any Middle Manager processes if you use [autoscaling-based replacement](#autoscaling-based-replacement).) 6. Coordinator ( or merged Coordinator+Overlord ) If you need to do a rolling downgrade, reverse the order and start with the Coordinator processes. @@ -70,14 +70,14 @@ Middle Managers can be gracefully terminated using the "disable" API. This works even tasks that are not restorable. To prepare a Middle Manager for update, send a POST request to -`/druid/worker/v1/disable`. The Overlord will now no longer send tasks to +`/druid/worker/v1/disable`. The Overlord will now no longer send tasks to this Middle Manager. Tasks that have already started will run to completion. Current state can be checked -using `/druid/worker/v1/enabled` . +using `/druid/worker/v1/enabled`.
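Taken together, the drain-before-update sequence described above amounts to three calls against the Middle Manager being updated. Here is a sketch with the host as a placeholder; 8091 is the default Middle Manager port.

```bash
# Drain one Middle Manager before updating it; MM_HOST is a placeholder.
MM_HOST=http://middlemanager.example.com:8091

curl -X POST "$MM_HOST/druid/worker/v1/disable"   # stop accepting new tasks
curl "$MM_HOST/druid/worker/v1/enabled"           # confirm the worker now reports disabled
curl "$MM_HOST/druid/worker/v1/tasks"             # poll until this list is empty, then update
```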
-To view all existing tasks, send a GET request to `/druid/worker/v1/tasks`. +To view all existing tasks, send a GET request to `/druid/worker/v1/tasks`. When this list is empty, you can safely update the Middle Manager. After the Middle Manager starts back up, it is automatically enabled again. You can also manually enable Middle Managers by POSTing -to `/druid/worker/v1/enable`. +to `/druid/worker/v1/enable`. ### Autoscaling-based replacement diff --git a/docs/querying/groupbyquery.md b/docs/querying/groupbyquery.md index a11f82d124ad..2b6be9904e6b 100644 --- a/docs/querying/groupbyquery.md +++ b/docs/querying/groupbyquery.md @@ -337,7 +337,7 @@ dictionary that can spill to disk. The outer query is run on the Broker in a sin ### Configurations -This section describes the configurations for groupBy queries. You can set the runtime properties in the `runtime.properties` file on Broker, Historical, and MiddleManager processes. You can set the query context parameters through the [query context](query-context.md). +This section describes the configurations for groupBy queries. You can set the runtime properties in the `runtime.properties` file on Broker, Historical, and Middle Manager processes. You can set the query context parameters through the [query context](query-context.md). Supported runtime properties: diff --git a/docs/querying/query-execution.md b/docs/querying/query-execution.md index 21d6f5256f05..123b80738c44 100644 --- a/docs/querying/query-execution.md +++ b/docs/querying/query-execution.md @@ -47,7 +47,7 @@ by range using the [`single_dim` partitionsSpec](../ingestion/native-batch.md#pa the dimension used for partitioning. 3. The Broker, having pruned the list of segments for the query, forwards the query to data servers (like Historicals -and tasks running on MiddleManagers) that are currently serving those segments. +and tasks running on Middle Managers) that are currently serving those segments. 4. For all query types except [Scan](scan-query.md), data servers process each segment in parallel and generate partial results for each segment. The specific processing that is done depends on the query type. These partial results may be diff --git a/docs/querying/query-processing.md b/docs/querying/query-processing.md index c94a4bf9cecf..b4ecd006f072 100644 --- a/docs/querying/query-processing.md +++ b/docs/querying/query-processing.md @@ -27,8 +27,8 @@ This topic provides a high-level overview of how Apache Druid distributes and pr The general flow is as follows: 1. A query enters the [Broker](../design/broker.md) service, which identifies the segments with data that may pertain to that query. The list of segments is always pruned by time, and may also be pruned by other attributes depending on how the datasource is partitioned. -2. The Broker identifies which [Historical](../design/historical.md) and [MiddleManager](../design/middlemanager.md) services are serving those segments and distributes a rewritten subquery to each of the services. -3. The Historical and MiddleManager services execute each subquery and return results to the Broker. +2. The Broker identifies which [Historical](../design/historical.md) and [Middle Manager](../design/middlemanager.md) services are serving those segments and distributes a rewritten subquery to each of the services. +3. The Historical and Middle Manager services execute each subquery and return results to the Broker. 4. The Broker merges the partial results to get the final answer, which it returns to the original caller. 
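To make the flow above concrete: the Broker is the single entry point, and it fans subqueries out to the Historical and Middle Manager services before merging their partial results. The call below is illustrative only; the Broker host and datasource name are placeholders, and 8082 is the default Broker port.

```bash
# Illustrative only: a Druid SQL query sent to the Broker, which distributes subqueries
# to the Historical and Middle Manager services serving the relevant segments.
curl -X POST http://BROKER_HOST:8082/druid/v2/sql \
  -H 'Content-Type: application/json' \
  -d '{"query": "SELECT COUNT(*) AS row_count FROM my_datasource"}'
```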
Druid uses time and attribute pruning to minimize the data it must scan for each query. diff --git a/docs/release-info/upgrade-notes.md b/docs/release-info/upgrade-notes.md index 52adccabbc77..d3c6ce1cba3c 100644 --- a/docs/release-info/upgrade-notes.md +++ b/docs/release-info/upgrade-notes.md @@ -74,7 +74,7 @@ The following are the changes to the default values for the Coordinator service: #### `GoogleTaskLogs` upload buffer size -Changed the upload buffer size in `GoogleTaskLogs` to 1 MB instead of 15 MB to allow more uploads in parallel and prevent the MiddleManager service from running out of memory. +Changed the upload buffer size in `GoogleTaskLogs` to 1 MB instead of 15 MB to allow more uploads in parallel and prevent the Middle Manager service from running out of memory. [#16236](https://github.com/apache/druid/pull/16236) diff --git a/docs/tutorials/cluster.md b/docs/tutorials/cluster.md index f1bea59ff957..8c4ad3e12346 100644 --- a/docs/tutorials/cluster.md +++ b/docs/tutorials/cluster.md @@ -32,7 +32,7 @@ your needs. This simple cluster will feature: - A Master server to host the Coordinator and Overlord processes - - Two scalable, fault-tolerant Data servers running Historical and MiddleManager processes + - Two scalable, fault-tolerant Data servers running Historical and Middle Manager processes - A query server, hosting the Druid Broker and Router processes In production, we recommend deploying multiple Master servers and multiple Query servers in a fault-tolerant configuration based on your specific fault-tolerance needs, but you can get started quickly with one Master and one Query server and add more servers later. @@ -58,7 +58,7 @@ Example Master server configurations that have been sized for this hardware can #### Data server -Historicals and MiddleManagers can be colocated on the same server to handle the actual data in your cluster. These servers benefit greatly from CPU, RAM, +Historicals and Middle Managers can be colocated on the same server to handle the actual data in your cluster. These servers benefit greatly from CPU, RAM, and SSDs. In this example, we will be deploying the equivalent of two AWS [i3.4xlarge](https://aws.amazon.com/ec2/instance-types/i3/) instances. @@ -117,7 +117,7 @@ In a clustered deployment, having multiple Data servers is a good idea for fault When choosing the Data server hardware, you can choose a split factor `N`, divide the original CPU/RAM of the single-server deployment by `N`, and deploy `N` Data servers of reduced size in the new cluster. -Instructions for adjusting the Historical/MiddleManager configs for the split are described in a later section in this guide. +Instructions for adjusting the Historical/Middle Manager configs for the split are described in a later section in this guide. #### Query server @@ -324,7 +324,7 @@ You can copy your existing `coordinator-overlord` configs from the single-server #### Data -Suppose we are migrating from a single-server deployment that had 32 CPU and 256GiB RAM. In the old deployment, the following configurations for Historicals and MiddleManagers were applied: +Suppose we are migrating from a single-server deployment that had 32 CPU and 256GiB RAM. 
In the old deployment, the following configurations for Historicals and Middle Managers were applied: Historical (Single-server) @@ -334,7 +334,7 @@ druid.processing.numMergeBuffers=8 druid.processing.numThreads=31 ``` -MiddleManager (Single-server) +Middle Manager (Single-server) ``` druid.worker.capacity=8 @@ -351,7 +351,7 @@ Historical - `druid.processing.numMergeBuffers`: Divide the old value from the single-server deployment by the split factor - `druid.processing.buffer.sizeBytes`: Keep this unchanged -MiddleManager: +Middle Manager: - `druid.worker.capacity`: Divide the old value from the single-server deployment by the split factor - `druid.indexer.fork.property.druid.processing.numMergeBuffers`: Keep this unchanged @@ -368,7 +368,7 @@ druid.processing.numMergeBuffers=4 druid.processing.numThreads=15 ``` -New MiddleManager (on 2 Data servers) +New Middle Manager (on 2 Data servers) ``` druid.worker.capacity=4 @@ -460,8 +460,8 @@ bin/start-cluster-data-server You can add more Data servers as needed. :::info - For clusters with complex resource allocation needs, you can break apart Historicals and MiddleManagers and scale the components individually. - This also allows you take advantage of Druid's built-in MiddleManager autoscaling facility. + For clusters with complex resource allocation needs, you can break apart Historicals and Middle Managers and scale the components individually. + This also allows you to take advantage of Druid's built-in Middle Manager autoscaling facility. ::: ## Start Query Server
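The tutorial continues by bringing up the Query server. By analogy with the Data server command shown above, this is presumably the bundled start script; the exact script name here is an assumption mirroring the Data server one.

```bash
# Assumed counterpart of bin/start-cluster-data-server above: run on the Query server
# to start the Broker and Router under the bundled supervisor.
bin/start-cluster-query-server
```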