diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 4c0bb0c2b898..2b7c30c1eb80 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -867,6 +867,7 @@ These Coordinator static configurations can be defined in the `coordinator/runti
 |`druid.coordinator.loadqueuepeon.repeatDelay`|The start and repeat delay for the loadqueuepeon, which manages the load and drop of segments.|PT0.050S (50 ms)|
 |`druid.coordinator.asOverlord.enabled`|Boolean value for whether this Coordinator process should act like an Overlord as well. This configuration allows users to simplify a druid cluster by not having to deploy any standalone Overlord processes. If set to true, then Overlord console is available at `http://coordinator-host:port/console.html` and be sure to set `druid.coordinator.asOverlord.overlordService` also. See next.|false|
 |`druid.coordinator.asOverlord.overlordService`| Required, if `druid.coordinator.asOverlord.enabled` is `true`. This must be same value as `druid.service` on standalone Overlord processes and `druid.selectors.indexing.serviceName` on Middle Managers.|NULL|
+|`druid.coordinator.centralizedSchemaManagement.enabled`|Boolean flag for enabling table schema building on the Coordinator.|false|
 
 ##### Metadata Management
 
@@ -2002,7 +2003,7 @@ The Druid SQL server is configured through the following properties on the Broke
 |`druid.sql.planner.useApproximateTopN`|Whether to use approximate [TopN queries](../querying/topnquery.md) when a SQL query could be expressed as such. If false, exact [GroupBy queries](../querying/groupbyquery.md) will be used instead.|true|
 |`druid.sql.planner.requireTimeCondition`|Whether to require SQL to have filter conditions on __time column so that all generated native queries will have user specified intervals. If true, all queries without filter condition on __time column will fail|false|
 |`druid.sql.planner.sqlTimeZone`|Sets the default time zone for the server, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".|UTC|
-|`druid.sql.planner.metadataSegmentCacheEnable`|Whether to keep a cache of published segments in broker. If true, broker polls coordinator in background to get segments from metadata store and maintains a local cache. If false, coordinator's REST API will be invoked when broker needs published segments info.|false|
+|`druid.sql.planner.metadataSegmentCacheEnable`|Whether to keep a cache of published segments in broker. If true, broker polls coordinator in background to get segments from metadata store and maintains a local cache. If false, coordinator's REST API will be invoked when broker needs published segments info.|true|
 |`druid.sql.planner.metadataSegmentPollPeriod`|How often to poll coordinator for published segments list if `druid.sql.planner.metadataSegmentCacheEnable` is set to true. Poll period is in milliseconds. |60000|
 |`druid.sql.planner.authorizeSystemTablesDirectly`|If true, Druid authorizes queries against any of the system schema tables (`sys` in SQL) as `SYSTEM_TABLE` resources which require `READ` access, in addition to permissions based content filtering.|false|
 |`druid.sql.planner.useNativeQueryExplain`|If true, `EXPLAIN PLAN FOR` will return the explain plan as a JSON representation of equivalent native query(s), else it will return the original version of explain plan generated by Calcite. It can be overridden per query with `useNativeQueryExplain` context key.|true|
diff --git a/docs/operations/metrics.md b/docs/operations/metrics.md
index dc5011752e76..df8ce218bde2 100644
--- a/docs/operations/metrics.md
+++ b/docs/operations/metrics.md
@@ -72,9 +72,9 @@ Most metric values reset each emission period, as specified in `druid.monitoring
 |`metadatacache/init/time`|Time taken to initialize the broker segment metadata cache. Useful to detect if brokers are taking too long to start||Depends on the number of segments.|
 |`metadatacache/refresh/count`|Number of segments to refresh in broker segment metadata cache.|`dataSource`|
 |`metadatacache/refresh/time`|Time taken to refresh segments in broker segment metadata cache.|`dataSource`|
-|`metadatacache/schemaPoll/count`|Number of coordinator polls to fetch datasource schema.|`dataSource`|
-|`metadatacache/schemaPoll/failed`|Number of failed coordinator polls to fetch datasource schema.|`dataSource`|
-|`metadatacache/schemaPoll/time`|Time taken for coordinator polls to fetch datasource schema.|`dataSource`|
+|`metadatacache/schemaPoll/count`|Number of coordinator polls to fetch datasource schema.||
+|`metadatacache/schemaPoll/failed`|Number of failed coordinator polls to fetch datasource schema.||
+|`metadatacache/schemaPoll/time`|Time taken for coordinator polls to fetch datasource schema.||
 |`serverview/sync/healthy`|Sync status of the Broker with a segment-loading server such as a Historical or Peon. Emitted only when [HTTP-based server view](../configuration/index.md#segment-management) is enabled. This metric can be used in conjunction with `serverview/sync/unstableTime` to debug slow startup of Brokers.|`server`, `tier`|1 for fully synced servers, 0 otherwise|
 |`serverview/sync/unstableTime`|Time in milliseconds for which the Broker has been failing to sync with a segment-loading server. Emitted only when [HTTP-based server view](../configuration/index.md#segment-management) is enabled.|`server`, `tier`|Not emitted for synced servers.|
 |`subquery/rowLimit/count`|Number of subqueries whose results are materialized as rows (Java objects on heap).|This metric is only available if the `SubqueryCountStatsMonitor` module is included.| |
@@ -361,7 +361,7 @@ These metrics are for the Druid Coordinator and are reset each time the Coordina
 |`serverview/init/time`|Time taken to initialize the coordinator server view.||Depends on the number of segments.|
 |`serverview/sync/healthy`|Sync status of the Coordinator with a segment-loading server such as a Historical or Peon. Emitted only when [HTTP-based server view](../configuration/index.md#segment-management) is enabled. You can use this metric in conjunction with `serverview/sync/unstableTime` to debug slow startup of the Coordinator.|`server`, `tier`|1 for fully synced servers, 0 otherwise|
 |`serverview/sync/unstableTime`|Time in milliseconds for which the Coordinator has been failing to sync with a segment-loading server. Emitted only when [HTTP-based server view](../configuration/index.md#segment-management) is enabled.|`server`, `tier`|Not emitted for synced servers.|
-|`metadatacache/init/time`|Time taken to initialize the coordinator segment metadata cache.|`dataSource`|Depends on the number of segments.|
+|`metadatacache/init/time`|Time taken to initialize the coordinator segment metadata cache.||Depends on the number of segments.|
 |`metadatacache/refresh/count`|Number of segments to refresh in coordinator segment metadata cache.|`dataSource`|
 |`metadatacache/refresh/time`|Time taken to refresh segments in coordinator segment metadata cache.|`dataSource`|
diff --git a/services/src/main/java/org/apache/druid/cli/CliCoordinator.java b/services/src/main/java/org/apache/druid/cli/CliCoordinator.java
index 01398c69ddab..d236bcb3dfcc 100644
--- a/services/src/main/java/org/apache/druid/cli/CliCoordinator.java
+++ b/services/src/main/java/org/apache/druid/cli/CliCoordinator.java
@@ -474,9 +474,9 @@ private static class CoordinatorSegmentMetadataCacheModule implements Module
     @Override
     public void configure(Binder binder)
     {
-      JsonConfigProvider.bind(binder, "druid.query.scheduler", QuerySchedulerProvider.class, Global.class);
-      JsonConfigProvider.bind(binder, "druid.query.default", DefaultQueryConfig.class);
-      JsonConfigProvider.bind(binder, "druid.query.segmentMetadata", SegmentMetadataQueryConfig.class);
+      JsonConfigProvider.bind(binder, "druid.coordinator.query.scheduler", QuerySchedulerProvider.class, Global.class);
+      JsonConfigProvider.bind(binder, "druid.coordinator.query.default", DefaultQueryConfig.class);
+      JsonConfigProvider.bind(binder, "druid.coordinator.query.segmentMetadata", SegmentMetadataQueryConfig.class);
       JsonConfigProvider.bind(binder, "druid.coordinator.internal.query.config", InternalQueryConfig.class);
       JsonConfigProvider.bind(binder, "druid.coordinator.query.retryPolicy", RetryQueryRunnerConfig.class);
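For context (not part of the patch): a minimal `coordinator/runtime.properties` sketch showing how the new flag and the renamed Coordinator-scoped query config prefixes from `CliCoordinator` might be set together. The sub-keys (`numThreads`, `context.queryTimeout`, `defaultHistory`) and values are assumptions carried over from the existing Broker-side `druid.query.*` documentation, not something this patch introduces.

```properties
# Hypothetical coordinator/runtime.properties excerpt -- values are illustrative, not defaults.

# New flag documented above: build datasource schemas on the Coordinator.
druid.coordinator.centralizedSchemaManagement.enabled=true

# Query configs bound in CoordinatorSegmentMetadataCacheModule now read from the
# druid.coordinator.query.* prefix instead of druid.query.* (assumed sub-keys shown).
druid.coordinator.query.scheduler.numThreads=1
druid.coordinator.query.default.context.queryTimeout=300000
druid.coordinator.query.segmentMetadata.defaultHistory=P1W
```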
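On the Broker side, the default for `druid.sql.planner.metadataSegmentCacheEnable` flips to `true` with this patch, so an explicit setting is only needed to opt out or to tune the poll period. A hedged `broker/runtime.properties` sketch with illustrative values:

```properties
# Hypothetical broker/runtime.properties excerpt (illustrative).
# The published-segment cache is now on by default; set to false only to opt out.
druid.sql.planner.metadataSegmentCacheEnable=false
# Poll period in milliseconds, used only when the cache is enabled.
druid.sql.planner.metadataSegmentPollPeriod=60000
```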