From 1105837763e7da29fba9f430dbe45d7d01618b5d Mon Sep 17 00:00:00 2001
From: Kashif Faraz
Date: Sat, 21 Dec 2024 19:24:58 +0530
Subject: [PATCH 1/3] Docs: Add details of cost balancer strategy

---
 docs/configuration/index.md |  2 +-
 docs/design/coordinator.md  | 21 ++++++++++++++++++---
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 75ffab25a605..26601c6db40f 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -896,7 +896,7 @@ These Coordinator static configurations can be defined in the `coordinator/runti
 |`druid.coordinator.kill.ignoreDurationToRetain`|A way to override `druid.coordinator.kill.durationToRetain` and tell the coordinator that you do not care about the end date of unused segment intervals when it comes to killing them. If true, the coordinator considers all unused segments as eligible to be killed.|false|
 |`druid.coordinator.kill.bufferPeriod`|The amount of time that a segment must be unused before it is able to be permanently removed from metadata and deep storage. This can serve as a buffer period to prevent data loss if data ends up being needed after being marked unused.|`P30D`|
 |`druid.coordinator.kill.maxSegments`|The number of unused segments to kill per kill task. This number must be greater than 0. This only applies when `druid.coordinator.kill.on=true`.|100|
-|`druid.coordinator.balancer.strategy`|Specify the type of balancing strategy for the Coordinator to use to distribute segments among the Historical services. `diskNormalized` weights the costs according to the servers' disk usage ratios - there are known issues with this strategy distributing segments unevenly across the cluster. `random` distributes segments among services randomly.|`cost`|
+|`druid.coordinator.balancer.strategy`|The [balancing strategy](../design/coordinator.md#balancing-segments-in-a-tier) used by the Coordinator to distribute segments among the Historical servers in a tier. The `cost` strategy distributes segments by minimizing a cost function, `diskNormalized` weights these costs with the disk usage ratios of the servers, and `random` distributes segments randomly.|`cost`|
 |`druid.coordinator.loadqueuepeon.http.repeatDelay`|The start and repeat delay (in milliseconds) for the load queue peon, which manages the load/drop queue of segments for any server.|1 minute|
 |`druid.coordinator.loadqueuepeon.http.batchSize`|Number of segment load/drop requests to batch in one HTTP request. Note that it must be smaller than `druid.segmentCache.numLoadingThreads` config on Historical service.|1|
 |`druid.coordinator.asOverlord.enabled`|Boolean value for whether this Coordinator service should act like an Overlord as well. This configuration allows users to simplify a Druid cluster by not having to deploy any standalone Overlord services. If set to true, then Overlord console is available at `http://coordinator-host:port/console.html` and be sure to set `druid.coordinator.asOverlord.overlordService` also.|false|
diff --git a/docs/design/coordinator.md b/docs/design/coordinator.md
index 9cb2279d4976..1d27f3174153 100644
--- a/docs/design/coordinator.md
+++ b/docs/design/coordinator.md
@@ -79,11 +79,26 @@ On each run, the Coordinator determines and cleans up unneeded eternity tombston
 
 ## Segment availability
 
-If a Historical service restarts or becomes unavailable for any reason, the Coordinator will notice a service has gone missing and treat all segments served by that service as being dropped. Given a sufficient period of time, the segments may be reassigned to other Historical services in the cluster. However, each segment that is dropped is not immediately forgotten. Instead, there is a transitional data structure that stores all dropped segments with an associated lifetime. The lifetime represents a period of time in which the Coordinator will not reassign a dropped segment. Hence, if a Historical service becomes unavailable and available again within a short period of time, the Historical service will start up and serve segments from its cache without any those segments being reassigned across the cluster.
+If a Historical service restarts or becomes unavailable for any reason, the Coordinator notices that a service has gone missing and treats all segments served by that service as being dropped. The segments are then reassigned to other Historical services in the cluster. However, each segment that is dropped is not immediately forgotten. Instead, there is a transitional data structure that stores all dropped segments with an associated lifetime. The lifetime represents a period of time in which the Coordinator will not reassign a dropped segment. Hence, if a Historical service becomes unavailable and available again within a short period of time, the Historical service will start up and serve segments from its cache without any those segments being reassigned across the cluster.
 
-## Balancing segment load
+## Balancing segments in a tier
 
-To ensure an even distribution of segments across Historical services in the cluster, the Coordinator service will find the total size of all segments being served by every Historical service each time the Coordinator runs. For every Historical service tier in the cluster, the Coordinator service will determine the Historical service with the highest utilization and the Historical service with the lowest utilization. The percent difference in utilization between the two services is computed, and if the result exceeds a certain threshold, a number of segments will be moved from the highest utilized service to the lowest utilized service. There is a configurable limit on the number of segments that can be moved from one service to another each time the Coordinator runs. Segments to be moved are selected at random and only moved if the resulting utilization calculation indicates the percentage difference between the highest and lowest servers has decreased.
+Druid queries perform optimally when segments are distributed evenly across Historical services. An ideal distribution would ensure that all Historicals participate equally in the query load, thus avoiding hot spots in the system. To some extent, this can be achieved by keeping multiple replicas of a segment in a cluster.
+But in a tier with several Historicals (or a low replication factor), segment replication is not sufficient to attain balance.
+Thus, the Coordinator constantly monitors the set of segments present on each Historical in a tier and employs one of the following strategies to identify segments that may be moved from one Historical to another to maintain balance.
+
+- `cost` (default): For a given segment in a tier, this strategy picks the server with the minimum "cost" of placing that segment. The cost is a function of the data interval of the segment and the data intervals of all the segments already present on the candidate server. In essence, this strategy tries to avoid placing segments with adjacent or overlapping data intervals on the same server. This is based on the premise that adjacent-interval segments are more likely to be used together in a query and placing them on the same server may lead to skewed cpu usages of Historicals.
+- `diskNormalized`: A derivative of the `cost` strategy that weights the cost of placing a segment on a server with disk usage ratio loaded on the server. There are known issues with this strategy and it is not recommended for a production cluster.
+- `random`: Distributes segments randomly across servers. This is an experimental strategy and it is not recommended for a production cluster.
+
+All of the above strategies prioritize moving segments from the Historical with the least available disk space.
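+
+For example, you can select the balancing strategy explicitly in the Coordinator `runtime.properties`. A minimal sketch, shown here with the default value:
+
+```properties
+# A minimal sketch: valid values are cost, diskNormalized, and random (cost is the default).
+druid.coordinator.balancer.strategy=cost
+```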
 
 ## Automatic compaction

From ec09ac896aeb249558294cff108eadd23a564350 Mon Sep 17 00:00:00 2001
From: Kashif Faraz
Date: Sat, 21 Dec 2024 19:29:39 +0530
Subject: [PATCH 2/3] Minor fix

---
 docs/design/coordinator.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/design/coordinator.md b/docs/design/coordinator.md
index 1d27f3174153..1c1e6068839a 100644
--- a/docs/design/coordinator.md
+++ b/docs/design/coordinator.md
@@ -88,7 +88,7 @@ But in a tier with several Historicals (or a low replication factor), segment re
 Thus, the Coordinator constantly monitors the set of segments present on each Historical in a tier and employs one of the following strategies to identify segments that may be moved from one Historical to another to maintain balance.
 
 - `cost` (default): For a given segment in a tier, this strategy picks the server with the minimum "cost" of placing that segment. The cost is a function of the data interval of the segment and the data intervals of all the segments already present on the candidate server. In essence, this strategy tries to avoid placing segments with adjacent or overlapping data intervals on the same server. This is based on the premise that adjacent-interval segments are more likely to be used together in a query and placing them on the same server may lead to skewed cpu usages of Historicals.
-- `diskNormalized`: A derivative of the `cost` strategy that weights the cost of placing a segment on a server with disk usage ratio loaded on the server. There are known issues with this strategy and it is not recommended for a production cluster.
+- `diskNormalized`: A derivative of the `cost` strategy that weights the cost of placing a segment on a server with the disk usage ratio of the server. There are known issues with this strategy and it is not recommended for a production cluster.
 - `random`: Distributes segments randomly across servers. This is an experimental strategy and it is not recommended for a production cluster.
 
 All of the above strategies prioritize moving segments from the Historical with the least available disk space.

From 073e2dcd0139470a559b47be84f044bf666779f5 Mon Sep 17 00:00:00 2001
From: Kashif Faraz
Date: Sun, 22 Dec 2024 08:07:49 +0530
Subject: [PATCH 3/3] Address feedback

---
 docs/design/coordinator.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/design/coordinator.md b/docs/design/coordinator.md
index 1c1e6068839a..bc4c5ebc1cba 100644
--- a/docs/design/coordinator.md
+++ b/docs/design/coordinator.md
@@ -79,7 +79,7 @@ On each run, the Coordinator determines and cleans up unneeded eternity tombston
 
 ## Segment availability
 
-If a Historical service restarts or becomes unavailable for any reason, the Coordinator notices that a service has gone missing and treats all segments served by that service as being dropped. The segments are then reassigned to other Historical services in the cluster. However, each segment that is dropped is not immediately forgotten. Instead, there is a transitional data structure that stores all dropped segments with an associated lifetime. The lifetime represents a period of time in which the Coordinator will not reassign a dropped segment. Hence, if a Historical service becomes unavailable and available again within a short period of time, the Historical service will start up and serve segments from its cache without any those segments being reassigned across the cluster.
+If a Historical service restarts or becomes unavailable for any reason, the Coordinator notices that a service has gone missing and treats all segments served by that service as being dropped. The segments are then reassigned to other Historical services in the cluster. However, each segment that is dropped is not immediately forgotten. Instead, there is a transitional data structure that stores all dropped segments with an associated lifetime. The lifetime represents a period of time in which the Coordinator will not reassign a dropped segment. Hence, if a Historical service becomes unavailable and available again within a short period of time, the Historical service will start up and serve segments from its cache without any of those segments being reassigned across the cluster.
 
 ## Balancing segments in a tier
 
@@ -87,7 +87,7 @@ Druid queries perform optimally when segments are distributed evenly across Hist
 But in a tier with several Historicals (or a low replication factor), segment replication is not sufficient to attain balance.
 Thus, the Coordinator constantly monitors the set of segments present on each Historical in a tier and employs one of the following strategies to identify segments that may be moved from one Historical to another to maintain balance.
 
-- `cost` (default): For a given segment in a tier, this strategy picks the server with the minimum "cost" of placing that segment. The cost is a function of the data interval of the segment and the data intervals of all the segments already present on the candidate server. In essence, this strategy tries to avoid placing segments with adjacent or overlapping data intervals on the same server. This is based on the premise that adjacent-interval segments are more likely to be used together in a query and placing them on the same server may lead to skewed cpu usages of Historicals.
+- `cost` (default): For a given segment in a tier, this strategy picks the server with the minimum "cost" of placing that segment. The cost is a function of the data interval of the segment and the data intervals of all the segments already present on the candidate server. In essence, this strategy tries to avoid placing segments with adjacent or overlapping data intervals on the same server. This is based on the premise that adjacent-interval segments are more likely to be used together in a query, and placing them on the same server may lead to skewed CPU usage across Historicals.
 - `diskNormalized`: A derivative of the `cost` strategy that weights the cost of placing a segment on a server with the disk usage ratio of the server. There are known issues with this strategy and it is not recommended for a production cluster.
 - `random`: Distributes segments randomly across servers. This is an experimental strategy and it is not recommended for a production cluster.