Relocating Table Schema Building: Shifting from Brokers to Coordinator for Improved Efficiency #14985

Merged
103 commits merged on Nov 4, 2023. Changes shown from 93 commits.

Commits (103)
294556d
move smc to coordinator
findingrish Aug 27, 2023
d1c0dca
refactor CoordinatorServerView
findingrish Aug 28, 2023
a48d2fe
minor change
findingrish Aug 28, 2023
5e7756f
Revert "refactor CoordinatorServerView"
findingrish Aug 28, 2023
a9ff640
Draft changes for the coordinator to conditionally build smc, and ref…
findingrish Sep 4, 2023
105bda9
Move schema querying logic to BrokerSegmentMetadataCache
findingrish Sep 6, 2023
3aad095
Fix dataSource schema on coordinator and minor renaming
findingrish Sep 6, 2023
14baf19
cleanup and fix some tests
findingrish Sep 7, 2023
857f056
Port tests and test build failure
findingrish Sep 7, 2023
95389b1
Fix unit tests and add test for getAllUsedSegments
findingrish Sep 8, 2023
fb72888
Merge remote-tracking branch 'origin/master' into coordinator_builds_…
findingrish Sep 8, 2023
bc8396c
minor change
findingrish Sep 8, 2023
fbab4c8
Remove logic to refactor sys segments table building logic
findingrish Sep 8, 2023
151b0b1
undo changes in SegmentsMetadataManager
findingrish Sep 9, 2023
8dbea5b
Minor code changes and add multiple tests
findingrish Sep 9, 2023
e630b9b
Add test for QueryableCoordinatorServerViewTest
findingrish Sep 11, 2023
17c3514
Add test for BrokerSegmentMetadataCache
findingrish Sep 11, 2023
f0baf33
minor code changes and fix checkstyle issues
findingrish Sep 11, 2023
f18c060
Fix intellij inspections
findingrish Sep 11, 2023
03383e6
Fix QueryableCoordinatorServerView test
findingrish Sep 11, 2023
dc6aa6e
Merge remote-tracking branch 'origin/master' into coordinator_builds_…
findingrish Sep 11, 2023
e5c4b39
Complete tests for SMC to verify DataSourceInformation
findingrish Sep 11, 2023
45378d5
Add comments
findingrish Sep 11, 2023
2327b12
Refactor SegmentMetadataCacheTest and BrokerSegmentMetadataCacheTest
findingrish Sep 11, 2023
d4ece6a
Test fetching ds schema from coordinator in BrokerSegmentMetadataCach…
findingrish Sep 12, 2023
eb1771f
fix checkstyle issue
findingrish Sep 12, 2023
1440dac
Add test for QueryableCoordinatorServerView
findingrish Sep 12, 2023
10068b6
Fix SegmentStatusInClusterTest
findingrish Sep 12, 2023
032734a
Address intellij inspection
findingrish Sep 12, 2023
9a04173
Merge remote-tracking branch 'origin/master' into coordinator_builds_…
findingrish Sep 12, 2023
80f4424
Add undeclared dependency in server module
findingrish Sep 12, 2023
33a8dd5
Remove enabled field from SegmentMetadataCacheConfig
findingrish Sep 12, 2023
7a7ca55
Add class to manage druid table information in SegmentMetadataCache, …
findingrish Sep 13, 2023
eb6a145
Merge remote-tracking branch 'origin/master' into coordinator_builds_…
findingrish Sep 13, 2023
b9fb83d
Minor refactoring in SegmentMetadataCache
findingrish Sep 13, 2023
aa2bfe7
Make SegmentMetadataCache generic
findingrish Sep 13, 2023
e97dcda
Add a generic abstract class for segment metadata cache
findingrish Sep 13, 2023
7badce1
Rename SegmentMetadataCache to CoordinatorSegmentMetadataCache
findingrish Sep 13, 2023
25cdce6
Rename PhysicalDataSourceMetadataBuilder to PhysicalDataSourceMetadat…
findingrish Sep 13, 2023
5f5ad18
Fix json property key name in DataSourceInformation
findingrish Sep 13, 2023
08e949e
Add validation in MetadataResource#getAllUsedSegments, update javadocs
findingrish Sep 14, 2023
80fc09d
Minor changes
findingrish Sep 14, 2023
4217cd8
Minor change
findingrish Sep 14, 2023
8b7e483
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Sep 14, 2023
d6ac350
Update base property name for query config classes in Coordinator
findingrish Sep 14, 2023
533236b
Log ds schema change when polling from coordinator
findingrish Sep 15, 2023
70f0888
update the logic to determine is_active status in segments table for …
findingrish Sep 15, 2023
a176bfe
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Sep 15, 2023
b32dfd6
Update the logic to set numRows in the sys segments table, add comments
findingrish Sep 15, 2023
17417b5
Rename config druid.coordinator.segmentMetadataCache.enabled to druid…
findingrish Sep 15, 2023
6a395a9
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Sep 18, 2023
907ace3
Report cache init time irrespective of the awaitInitializationOnStart…
findingrish Sep 20, 2023
cf68c38
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Sep 20, 2023
441f37a
Report metric for fetching schema from coordinator
findingrish Sep 20, 2023
bd5b048
Add auth check in api to return dataSourceInformation, report metrics…
findingrish Sep 21, 2023
933d8d1
Fix bug in Coordinator api to return dataSourceInformation
findingrish Sep 21, 2023
9e7e364
Minor change
findingrish Sep 21, 2023
e7356ce
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Sep 22, 2023
5d16148
Address comments around docs, minor renaming
findingrish Sep 23, 2023
d8884be
Remove null check from MetadataResource#getDataSourceInformation
findingrish Sep 23, 2023
0f0805a
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Sep 23, 2023
e129d3e
Install cache module in Coordinator, if feature is enabled and beOver…
findingrish Sep 25, 2023
b4042c6
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Sep 29, 2023
01c27c9
Minor change in QueryableCoordinatorServerView
findingrish Sep 29, 2023
87c9873
Remove QueryableCoordinatorServerView, add a new QuerySegmentWalker i…
findingrish Oct 14, 2023
971b347
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Oct 14, 2023
89d3845
fix build
findingrish Oct 14, 2023
270dbd5
fix build
findingrish Oct 14, 2023
2da23b8
Fix spelling, intellij-inspection, codeql bug
findingrish Oct 14, 2023
6f568a6
undo some changes in CachingClusteredClientTest
findingrish Oct 14, 2023
fe229c0
minor changes
findingrish Oct 14, 2023
473b25c
Fix typo in metric name
findingrish Oct 14, 2023
39fb248
temporarily enable feature on ITs
findingrish Oct 15, 2023
cac695a
fix checkstyle issue
findingrish Oct 15, 2023
eb3e3c1
Changes in CliCoordinator to conditionally add segment metadata cache…
findingrish Oct 15, 2023
30438f4
temporary changes to debug IT failure
findingrish Oct 16, 2023
e88ad00
revert temporary changes in gha
findingrish Oct 16, 2023
61d130b
revert temporary changes to run ITs with this feature
findingrish Oct 16, 2023
2e4c45b
update docs with the config for enabling feature
findingrish Oct 16, 2023
255cf2c
update docs with the config for enabling feature
findingrish Oct 16, 2023
1a6dfc5
Add IT for the feature
findingrish Oct 17, 2023
a961501
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Oct 17, 2023
2e65726
Merge branch 'coordinator_builds_ds_schema' of github.com:findingrish…
findingrish Oct 17, 2023
4b51c42
Changes in BrokerSegmentMetadataCache to poll schema for all the loca…
findingrish Oct 20, 2023
a4e2097
Address review comments
findingrish Oct 26, 2023
3ca03c9
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Oct 26, 2023
902abd3
Run DruidSchemaInternRowSignatureBenchmark using BrokerSegmentMetadat…
findingrish Oct 26, 2023
bcab458
Address feedback
findingrish Oct 26, 2023
32a4065
Simplify logic for setting isRealtime in sys segments table
findingrish Oct 26, 2023
6b04ee7
Remove forbidden api invocation
findingrish Oct 26, 2023
cb93e43
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Oct 26, 2023
152c480
Debug log when coordinator poll fails
findingrish Oct 26, 2023
80c6d26
Fix CoordinatorSegmentMetadataCacheTest
findingrish Oct 27, 2023
9cca98e
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Oct 28, 2023
bb69cde
Minor changes
findingrish Oct 28, 2023
553df65
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Oct 31, 2023
bd6a5ef
Synchronisation in SegmentLoadInfo
findingrish Oct 31, 2023
23ff740
Add comments
findingrish Oct 31, 2023
3de534f
Remove explicit synchronisation from SegmentLoadInfo#pickOne
findingrish Nov 1, 2023
9796bb8
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Nov 1, 2023
dc0b4ab
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Nov 3, 2023
571af64
Minor changes
findingrish Nov 3, 2023
98a48d3
Merge remote-tracking branch 'upstream/master' into coordinator_build…
findingrish Nov 4, 2023
4 changes: 2 additions & 2 deletions .github/workflows/standard-its.yml
@@ -77,7 +77,7 @@ jobs:
strategy:
fail-fast: false
matrix:
testing_group: [query, query-retry, query-error, security, high-availability]
testing_group: [query, query-retry, query-error, security, high-availability, centralized-table-schema]
uses: ./.github/workflows/reusable-standard-its.yml
if: ${{ needs.changes.outputs.core == 'true' || needs.changes.outputs.common-extensions == 'true' }}
with:
@@ -195,6 +195,6 @@ jobs:
with:
build_jdk: 8
runtime_jdk: 8
testing_groups: -DexcludedGroups=batch-index,input-format,input-source,perfect-rollup-parallel-batch-index,kafka-index,query,query-retry,query-error,realtime-index,security,ldap-security,s3-deep-storage,gcs-deep-storage,azure-deep-storage,hdfs-deep-storage,s3-ingestion,kinesis-index,kinesis-data-format,kafka-transactional-index,kafka-index-slow,kafka-transactional-index-slow,kafka-data-format,hadoop-s3-to-s3-deep-storage,hadoop-s3-to-hdfs-deep-storage,hadoop-azure-to-azure-deep-storage,hadoop-azure-to-hdfs-deep-storage,hadoop-gcs-to-gcs-deep-storage,hadoop-gcs-to-hdfs-deep-storage,aliyun-oss-deep-storage,append-ingestion,compaction,high-availability,upgrade,shuffle-deep-store,custom-coordinator-duties
testing_groups: -DexcludedGroups=batch-index,input-format,input-source,perfect-rollup-parallel-batch-index,kafka-index,query,query-retry,query-error,realtime-index,security,ldap-security,s3-deep-storage,gcs-deep-storage,azure-deep-storage,hdfs-deep-storage,s3-ingestion,kinesis-index,kinesis-data-format,kafka-transactional-index,kafka-index-slow,kafka-transactional-index-slow,kafka-data-format,hadoop-s3-to-s3-deep-storage,hadoop-s3-to-hdfs-deep-storage,hadoop-azure-to-azure-deep-storage,hadoop-azure-to-hdfs-deep-storage,hadoop-gcs-to-gcs-deep-storage,hadoop-gcs-to-hdfs-deep-storage,aliyun-oss-deep-storage,append-ingestion,compaction,high-availability,upgrade,shuffle-deep-store,custom-coordinator-duties,centralized-table-schema
use_indexer: ${{ matrix.indexer }}
group: other
DruidSchemaInternRowSignatureBenchmark.java
@@ -22,8 +22,9 @@
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Lists;
import org.apache.druid.client.BrokerInternalQueryConfig;
import org.apache.druid.client.InternalQueryConfig;
import org.apache.druid.client.TimelineServerView;
import org.apache.druid.client.coordinator.NoopCoordinatorClient;
import org.apache.druid.java.util.common.Intervals;
import org.apache.druid.java.util.common.guava.Sequence;
import org.apache.druid.java.util.common.guava.Sequences;
@@ -37,9 +38,9 @@
import org.apache.druid.server.coordination.ServerType;
import org.apache.druid.server.metrics.NoopServiceEmitter;
import org.apache.druid.server.security.Escalator;
import org.apache.druid.sql.calcite.planner.PlannerConfig;
import org.apache.druid.sql.calcite.planner.SegmentMetadataCacheConfig;
import org.apache.druid.sql.calcite.schema.SegmentMetadataCache;
import org.apache.druid.sql.calcite.schema.BrokerSegmentMetadataCache;
import org.apache.druid.sql.calcite.schema.BrokerSegmentMetadataCacheConfig;
import org.apache.druid.sql.calcite.schema.PhysicalDatasourceMetadataFactory;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.SegmentId;
import org.apache.druid.timeline.partition.LinearShardSpec;
@@ -71,27 +72,26 @@ public class DruidSchemaInternRowSignatureBenchmark
{
private SegmentMetadataCacheForBenchmark cache;

private static class SegmentMetadataCacheForBenchmark extends SegmentMetadataCache
private static class SegmentMetadataCacheForBenchmark extends BrokerSegmentMetadataCache
{
public SegmentMetadataCacheForBenchmark(
final QueryLifecycleFactory queryLifecycleFactory,
final TimelineServerView serverView,
final SegmentManager segmentManager,
final JoinableFactory joinableFactory,
final PlannerConfig config,
final Escalator escalator,
final BrokerInternalQueryConfig brokerInternalQueryConfig
final InternalQueryConfig brokerInternalQueryConfig
)
{
super(
queryLifecycleFactory,
serverView,
segmentManager,
joinableFactory,
SegmentMetadataCacheConfig.create(),
BrokerSegmentMetadataCacheConfig.create(),
escalator,
brokerInternalQueryConfig,
new NoopServiceEmitter()
new NoopServiceEmitter(),
new PhysicalDatasourceMetadataFactory(joinableFactory, segmentManager),
new NoopCoordinatorClient()
);
}

@@ -109,7 +109,7 @@ public void addSegment(final DruidServerMetadata server, final DataSegment segme
}

@Override
protected Sequence<SegmentAnalysis> runSegmentMetadataQuery(Iterable<SegmentId> segments)
public Sequence<SegmentAnalysis> runSegmentMetadataQuery(Iterable<SegmentId> segments)
{
final int numColumns = 1000;
LinkedHashMap<String, ColumnAnalysis> columnToAnalysisMap = new LinkedHashMap<>();
@@ -178,10 +178,10 @@ public void setup()
EasyMock.mock(TimelineServerView.class),
null,
null,
EasyMock.mock(PlannerConfig.class),
null,
null
);

DruidServerMetadata serverMetadata = new DruidServerMetadata(
"dummy",
"dummy",
@@ -46,6 +46,7 @@
import org.apache.druid.segment.generator.GeneratorSchemaInfo;
import org.apache.druid.segment.generator.SegmentGenerator;
import org.apache.druid.server.QueryStackTests;
import org.apache.druid.server.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.server.security.AuthConfig;
import org.apache.druid.server.security.AuthTestUtils;
import org.apache.druid.sql.calcite.aggregation.ApproxCountDistinctSqlAggregator;
@@ -63,7 +64,6 @@
import org.apache.druid.sql.calcite.run.SqlEngine;
import org.apache.druid.sql.calcite.schema.DruidSchemaCatalog;
import org.apache.druid.sql.calcite.util.CalciteTests;
import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.partition.LinearShardSpec;
import org.openjdk.jmh.annotations.Benchmark;
@@ -36,6 +36,7 @@
import org.apache.druid.segment.generator.GeneratorSchemaInfo;
import org.apache.druid.segment.generator.SegmentGenerator;
import org.apache.druid.server.QueryStackTests;
import org.apache.druid.server.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.server.security.AuthConfig;
import org.apache.druid.server.security.AuthTestUtils;
import org.apache.druid.sql.calcite.SqlVectorizedExpressionSanityTest;
@@ -48,7 +49,6 @@
import org.apache.druid.sql.calcite.run.SqlEngine;
import org.apache.druid.sql.calcite.schema.DruidSchemaCatalog;
import org.apache.druid.sql.calcite.util.CalciteTests;
import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.partition.LinearShardSpec;
import org.openjdk.jmh.annotations.Benchmark;
@@ -45,6 +45,7 @@
import org.apache.druid.segment.transform.ExpressionTransform;
import org.apache.druid.segment.transform.TransformSpec;
import org.apache.druid.server.QueryStackTests;
import org.apache.druid.server.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.server.security.AuthConfig;
import org.apache.druid.server.security.AuthTestUtils;
import org.apache.druid.sql.calcite.SqlVectorizedExpressionSanityTest;
@@ -57,7 +58,6 @@
import org.apache.druid.sql.calcite.run.SqlEngine;
import org.apache.druid.sql.calcite.schema.DruidSchemaCatalog;
import org.apache.druid.sql.calcite.util.CalciteTests;
import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.partition.LinearShardSpec;
import org.openjdk.jmh.annotations.Benchmark;
@@ -38,6 +38,7 @@
import org.apache.druid.segment.generator.GeneratorSchemaInfo;
import org.apache.druid.segment.generator.SegmentGenerator;
import org.apache.druid.server.QueryStackTests;
import org.apache.druid.server.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.server.security.AuthConfig;
import org.apache.druid.server.security.AuthTestUtils;
import org.apache.druid.sql.calcite.planner.CalciteRulesManager;
@@ -49,7 +50,6 @@
import org.apache.druid.sql.calcite.run.SqlEngine;
import org.apache.druid.sql.calcite.schema.DruidSchemaCatalog;
import org.apache.druid.sql.calcite.util.CalciteTests;
import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.partition.LinearShardSpec;
import org.openjdk.jmh.annotations.Benchmark;
12 changes: 12 additions & 0 deletions docs/api-reference/legacy-metadata-api.md
@@ -116,10 +116,18 @@ Returns a list of all segments for one or more specific datasources enabled in t

Returns a list of all segments for each datasource with the full segment metadata and an extra field `overshadowed`.

`GET /druid/coordinator/v1/metadata/segments?includeOvershadowedStatus&includeRealtimeSegments`

Additionally returns the realtime segments for all datasources, with the full segment metadata and the extra fields `overshadowed`, `realtime`, and `numRows`.

`GET /druid/coordinator/v1/metadata/segments?includeOvershadowedStatus&datasources={dataSourceName1}&datasources={dataSourceName2}`

Returns a list of all segments for one or more specific datasources with the full segment metadata and an extra field `overshadowed`.

`GET /druid/coordinator/v1/metadata/segments?includeOvershadowedStatus&includeRealtimeSegments&datasources={dataSourceName1}&datasources={dataSourceName2}`

Additionally returns the realtime segments for the specified datasources, with the full segment metadata and the extra fields `overshadowed`, `realtime`, and `numRows`.
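
A minimal sketch of polling this API from a client, using JDK 11's `java.net.http`; the host, port, and absence of authentication headers are assumptions for illustration, not part of this PR:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RealtimeSegmentsPollExample
{
  public static void main(String[] args) throws Exception
  {
    // Hypothetical Coordinator address; adjust host/port for your cluster.
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://coordinator-host:8081/druid/coordinator/v1/metadata/segments"
            + "?includeOvershadowedStatus&includeRealtimeSegments"))
        .header("Accept", "application/json")
        .GET()
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());

    // Each element carries the full segment metadata plus the extra
    // `overshadowed`, `realtime`, and `numRows` fields described above.
    System.out.println(response.body());
  }
}
```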

`GET /druid/coordinator/v1/metadata/datasources`

Returns a list of the names of datasources with at least one used segment in the cluster, retrieved from the metadata database. Users should call this API to get the eventual state that the system will be in.
@@ -166,6 +174,10 @@ Returns a list of all segments, overlapping with any of given intervals, for a

Returns a list of all segments, overlapping with any of the given intervals, for a datasource with the full segment metadata as stored in the metadata store. The request body is an array of ISO 8601 interval strings, `[interval1, interval2,...]`, for example `["2012-01-01T00:00:00.000/2012-01-03T00:00:00.000", "2012-01-05T00:00:00.000/2012-01-07T00:00:00.000"]`.

`POST /druid/coordinator/v1/metadata/dataSourceInformation`

Returns information about the specified datasources, including the datasource schema.
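
A sketch of calling this endpoint with `java.net.http`; the request body is assumed here to be a JSON array of datasource names, which this diff does not spell out (see `MetadataResource#getDataSourceInformation`, touched in this PR, for the authoritative contract):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DataSourceInformationExample
{
  public static void main(String[] args) throws Exception
  {
    // Hypothetical body: a JSON array of datasource names (an assumption).
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://coordinator-host:8081/druid/coordinator/v1/metadata/dataSourceInformation"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString("[\"wikipedia\"]"))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());

    // Expected: per-datasource information, including the datasource schema.
    System.out.println(response.body());
  }
}
```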

<a name="coordinator-datasources"></a>

## Datasources
1 change: 1 addition & 0 deletions docs/configuration/index.md
@@ -867,6 +867,7 @@ These Coordinator static configurations can be defined in the `coordinator/runti
|`druid.coordinator.loadqueuepeon.repeatDelay`|The start and repeat delay for the loadqueuepeon, which manages the load and drop of segments.|PT0.050S (50 ms)|
|`druid.coordinator.asOverlord.enabled`|Boolean value for whether this Coordinator process should act like an Overlord as well. This configuration allows users to simplify a druid cluster by not having to deploy any standalone Overlord processes. If set to true, the Overlord console is available at `http://coordinator-host:port/console.html`; be sure to also set `druid.coordinator.asOverlord.overlordService`. See next.|false|
|`druid.coordinator.asOverlord.overlordService`| Required, if `druid.coordinator.asOverlord.enabled` is `true`. This must be same value as `druid.service` on standalone Overlord processes and `druid.selectors.indexing.serviceName` on Middle Managers.|NULL|
|`druid.coordinator.centralizedTableSchema.enabled`|Boolean flag for enabling table schema building on the Coordinator.|false|
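
As a sketch, enabling the feature amounts to setting the flag above in `coordinator/runtime.properties`; the flag name comes from the table, and everything else about the file is illustrative:

```properties
# Build table schemas on the Coordinator instead of on each Broker
# (default: false). Brokers then poll the datasource schema from the
# Coordinator, as the new metadatacache/schemaPoll/* metrics reflect.
druid.coordinator.centralizedTableSchema.enabled=true
```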

##### Metadata Management

6 changes: 6 additions & 0 deletions docs/operations/metrics.md
@@ -72,6 +72,9 @@ Most metric values reset each emission period, as specified in `druid.monitoring
|`metadatacache/init/time`|Time taken to initialize the broker segment metadata cache. Useful to detect if brokers are taking too long to start||Depends on the number of segments.|
|`metadatacache/refresh/count`|Number of segments to refresh in broker segment metadata cache.|`dataSource`|
|`metadatacache/refresh/time`|Time taken to refresh segments in broker segment metadata cache.|`dataSource`|
|`metadatacache/schemaPoll/count`|Number of coordinator polls to fetch datasource schema.||
|`metadatacache/schemaPoll/failed`|Number of failed coordinator polls to fetch datasource schema.||
|`metadatacache/schemaPoll/time`|Time taken for coordinator polls to fetch datasource schema.||
|`serverview/sync/healthy`|Sync status of the Broker with a segment-loading server such as a Historical or Peon. Emitted only when [HTTP-based server view](../configuration/index.md#segment-management) is enabled. This metric can be used in conjunction with `serverview/sync/unstableTime` to debug slow startup of Brokers.|`server`, `tier`|1 for fully synced servers, 0 otherwise|
|`serverview/sync/unstableTime`|Time in milliseconds for which the Broker has been failing to sync with a segment-loading server. Emitted only when [HTTP-based server view](../configuration/index.md#segment-management) is enabled.|`server`, `tier`|Not emitted for synced servers.|
|`subquery/rowLimit/count`|Number of subqueries whose results are materialized as rows (Java objects on heap).|This metric is only available if the `SubqueryCountStatsMonitor` module is included.| |
@@ -358,6 +361,9 @@ These metrics are for the Druid Coordinator and are reset each time the Coordina
|`serverview/init/time`|Time taken to initialize the coordinator server view.||Depends on the number of segments.|
|`serverview/sync/healthy`|Sync status of the Coordinator with a segment-loading server such as a Historical or Peon. Emitted only when [HTTP-based server view](../configuration/index.md#segment-management) is enabled. You can use this metric in conjunction with `serverview/sync/unstableTime` to debug slow startup of the Coordinator.|`server`, `tier`|1 for fully synced servers, 0 otherwise|
|`serverview/sync/unstableTime`|Time in milliseconds for which the Coordinator has been failing to sync with a segment-loading server. Emitted only when [HTTP-based server view](../configuration/index.md#segment-management) is enabled.|`server`, `tier`|Not emitted for synced servers.|
|`metadatacache/init/time`|Time taken to initialize the coordinator segment metadata cache.||Depends on the number of segments.|
|`metadatacache/refresh/count`|Number of segments to refresh in coordinator segment metadata cache.|`dataSource`|
|`metadatacache/refresh/time`|Time taken to refresh segments in coordinator segment metadata cache.|`dataSource`|

## General Health

@@ -41,10 +41,10 @@
import org.apache.druid.segment.incremental.IncrementalIndexSchema;
import org.apache.druid.segment.join.JoinableFactoryWrapper;
import org.apache.druid.segment.writeout.OffHeapMemorySegmentWriteOutMediumFactory;
import org.apache.druid.server.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.BaseCalciteQueryTest;
import org.apache.druid.sql.calcite.filtration.Filtration;
import org.apache.druid.sql.calcite.util.CalciteTests;
import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.util.TestDataBuilder;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.partition.LinearShardSpec;
@@ -31,6 +31,7 @@
import org.apache.druid.client.BatchServerInventoryView;
import org.apache.druid.client.BrokerSegmentWatcherConfig;
import org.apache.druid.client.BrokerServerView;
import org.apache.druid.client.DirectDruidClientFactory;
import org.apache.druid.client.DruidServer;
import org.apache.druid.client.selector.HighestPriorityTierSelectorStrategy;
import org.apache.druid.client.selector.RandomServerSelectorStrategy;
@@ -295,11 +296,16 @@ public CallbackAction segmentViewInitialized()
}
};

brokerServerView = new BrokerServerView(
DirectDruidClientFactory druidClientFactory = new DirectDruidClientFactory(
new NoopServiceEmitter(),
EasyMock.createMock(QueryToolChestWarehouse.class),
EasyMock.createMock(QueryWatcher.class),
getSmileMapper(),
EasyMock.createMock(HttpClient.class),
EasyMock.createMock(HttpClient.class)
);

brokerServerView = new BrokerServerView(
druidClientFactory,
baseView,
new HighestPriorityTierSelectorStrategy(new RandomServerSelectorStrategy()),
new NoopServiceEmitter(),
@@ -46,10 +46,10 @@
import org.apache.druid.segment.join.JoinableFactoryWrapper;
import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
import org.apache.druid.segment.writeout.OffHeapMemorySegmentWriteOutMediumFactory;
import org.apache.druid.server.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.BaseCalciteQueryTest;
import org.apache.druid.sql.calcite.filtration.Filtration;
import org.apache.druid.sql.calcite.util.CalciteTests;
import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.util.TestDataBuilder;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.partition.LinearShardSpec;
@@ -73,10 +73,10 @@
import org.apache.druid.segment.join.JoinableFactoryWrapper;
import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
import org.apache.druid.segment.writeout.OffHeapMemorySegmentWriteOutMediumFactory;
import org.apache.druid.server.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.BaseCalciteQueryTest;
import org.apache.druid.sql.calcite.filtration.Filtration;
import org.apache.druid.sql.calcite.util.CalciteTests;
import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.util.TestDataBuilder;
import org.apache.druid.sql.guice.SqlModule;
import org.apache.druid.timeline.DataSegment;
@@ -57,10 +57,10 @@
import org.apache.druid.segment.join.JoinableFactoryWrapper;
import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
import org.apache.druid.segment.writeout.OffHeapMemorySegmentWriteOutMediumFactory;
import org.apache.druid.server.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.BaseCalciteQueryTest;
import org.apache.druid.sql.calcite.filtration.Filtration;
import org.apache.druid.sql.calcite.util.CalciteTests;
import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.util.TestDataBuilder;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.partition.LinearShardSpec;
@@ -59,10 +59,10 @@
import org.apache.druid.segment.join.JoinableFactoryWrapper;
import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
import org.apache.druid.segment.writeout.OffHeapMemorySegmentWriteOutMediumFactory;
import org.apache.druid.server.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.BaseCalciteQueryTest;
import org.apache.druid.sql.calcite.filtration.Filtration;
import org.apache.druid.sql.calcite.util.CalciteTests;
import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.util.TestDataBuilder;
import org.apache.druid.sql.guice.SqlModule;
import org.apache.druid.timeline.DataSegment;
@@ -46,10 +46,10 @@
import org.apache.druid.segment.join.JoinableFactoryWrapper;
import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
import org.apache.druid.segment.writeout.OffHeapMemorySegmentWriteOutMediumFactory;
import org.apache.druid.server.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.BaseCalciteQueryTest;
import org.apache.druid.sql.calcite.filtration.Filtration;
import org.apache.druid.sql.calcite.util.CalciteTests;
import org.apache.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import org.apache.druid.sql.calcite.util.TestDataBuilder;
import org.apache.druid.timeline.DataSegment;
import org.apache.druid.timeline.partition.LinearShardSpec;