Kafka: Emit production rate #17491

Open · wants to merge 2 commits into base: master
13 changes: 7 additions & 6 deletions docs/operations/metrics.md
@@ -206,12 +206,13 @@ field in the `context` field of the ingestion spec. `tags` is expected to be a map ...

These metrics apply to the [Kafka indexing service](../ingestion/kafka-ingestion.md).

|Metric|Description|Dimensions|Normal value|
|------|-----------|----------|------------|
|`ingest/kafka/lag`|Total lag between the offsets consumed by the Kafka indexing tasks and latest offsets in Kafka brokers across all partitions. Minimum emission period for this metric is a minute.|`dataSource`, `stream`, `tags`|Greater than 0, should not be a very high number.|
|`ingest/kafka/maxLag`|Max lag between the offsets consumed by the Kafka indexing tasks and latest offsets in Kafka brokers across all partitions. Minimum emission period for this metric is a minute.|`dataSource`, `stream`, `tags`|Greater than 0, should not be a very high number.|
|`ingest/kafka/avgLag`|Average lag between the offsets consumed by the Kafka indexing tasks and latest offsets in Kafka brokers across all partitions. Minimum emission period for this metric is a minute.|`dataSource`, `stream`, `tags`|Greater than 0, should not be a very high number.|
|`ingest/kafka/partitionLag`|Partition-wise lag between the offsets consumed by the Kafka indexing tasks and latest offsets in Kafka brokers. Minimum emission period for this metric is a minute.|`dataSource`, `stream`, `partition`, `tags`|Greater than 0, should not be a very high number.|
|`ingest/kafka/partitionProduction`|Partition-wise increase in the latest offsets in Kafka brokers since the previous metric collection. Minimum emission period for this metric is a minute.|`dataSource`, `stream`, `partition`, `tags`|Greater than 0.|
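
As a rough illustration (not part of this change), the per-partition value is simply how far each partition's latest broker offset advanced between two consecutive collections; the partition ids and offsets below are made up:

```java
import java.util.HashMap;
import java.util.Map;

// Made-up example: partitionProduction is the advance of each partition's latest
// broker offset between two consecutive metric collections (roughly a minute apart).
public class PartitionProductionExample
{
  public static void main(String[] args)
  {
    Map<Integer, Long> previousLatestOffsets = new HashMap<>();
    previousLatestOffsets.put(0, 100L);
    previousLatestOffsets.put(1, 250L);

    Map<Integer, Long> currentLatestOffsets = new HashMap<>();
    currentLatestOffsets.put(0, 160L);  // 60 new records on partition 0
    currentLatestOffsets.put(1, 295L);  // 45 new records on partition 1

    for (Map.Entry<Integer, Long> e : currentLatestOffsets.entrySet()) {
      long previous = previousLatestOffsets.getOrDefault(e.getKey(), 0L);
      // Value reported as ingest/kafka/partitionProduction for this partition.
      System.out.println("partition " + e.getKey() + ": " + (e.getValue() - previous));
    }
  }
}
```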

### Ingestion metrics for Kinesis

KafkaSupervisor.java
@@ -60,6 +60,7 @@
import org.apache.druid.segment.incremental.RowIngestionMetersFactory;
import org.joda.time.DateTime;

import javax.annotation.Nonnull;
import javax.annotation.Nullable;
import java.util.ArrayList;
import java.util.Collections;
@@ -96,6 +97,7 @@ public class KafkaSupervisor extends SeekableStreamSupervisor<KafkaTopicPartition, ...
private final ServiceEmitter emitter;
private final DruidMonitorSchedulerConfig monitorSchedulerConfig;
private final Pattern pattern;
// Snapshot of latestSequenceFromStream taken at the previous production-rate collection.
private Map<KafkaTopicPartition, Long> previousLatestSequenceFromStream;
private volatile Map<KafkaTopicPartition, Long> latestSequenceFromStream;


@@ -277,6 +279,29 @@ protected Map<KafkaTopicPartition, Long> getPartitionRecordLag()
return getRecordLagPerPartitionInLatestSequences(highestCurrentOffsets);
}

@Nullable
@Override
@SuppressWarnings("SSBasedInspection")
protected Map<KafkaTopicPartition, Long> getPartitionProductionRate()
{
// Production per partition: how far the latest broker offset has advanced since
// the snapshot taken at the previous collection.
Map<KafkaTopicPartition, Long> diff = calculateDiff(
latestSequenceFromStream,
previousLatestSequenceFromStream
);

// Copy the current latest offsets so the next collection diffs against them.
previousLatestSequenceFromStream = latestSequenceFromStream
.entrySet()
.stream()
.collect(
Collectors.toMap(
Entry::getKey,
Entry::getValue
)
);

return diff;
}

@Nullable
@Override
protected Map<KafkaTopicPartition, Long> getPartitionTimeLag()
@@ -524,4 +549,23 @@ private KafkaTopicPartition getMatchingKafkaTopicPartition(

return match ? new KafkaTopicPartition(isMultiTopic(), streamMatchValue, kafkaTopicPartition.partition()) : null;
}

@SuppressWarnings("SSBasedInspection")
private Map<KafkaTopicPartition, Long> calculateDiff(
@Nonnull Map<KafkaTopicPartition, Long> left,
@Nonnull Map<KafkaTopicPartition, Long> right
)
{
// For every partition in `left`, subtract the corresponding offset in `right`.
// A partition missing from `right` is treated as 0; a null value in `left` yields 0.
return left
.entrySet()
.stream()
.collect(
Collectors.toMap(
Entry::getKey,
e -> e.getValue() != null
? e.getValue() - Optional.ofNullable(right.get(e.getKey())).orElse(0L)
: 0
)
);
}
}
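
As a standalone sketch (not part of the diff), the semantics of `calculateDiff` above can be exercised in isolation; plain `Integer` partition ids stand in for `KafkaTopicPartition`, and the offsets are made up. A partition absent from the previous snapshot is treated as starting at 0, and a null latest offset yields 0:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Standalone sketch mirroring calculateDiff() semantics with Integer partition ids.
public class CalculateDiffSketch
{
  static Map<Integer, Long> calculateDiff(Map<Integer, Long> left, Map<Integer, Long> right)
  {
    Map<Integer, Long> diff = new HashMap<>();
    for (Map.Entry<Integer, Long> e : left.entrySet()) {
      diff.put(
          e.getKey(),
          e.getValue() != null
          ? e.getValue() - Optional.ofNullable(right.get(e.getKey())).orElse(0L)
          : 0L
      );
    }
    return diff;
  }

  public static void main(String[] args)
  {
    Map<Integer, Long> latest = new HashMap<>();
    latest.put(0, 160L);
    latest.put(1, 310L);
    latest.put(2, null);           // no latest offset known yet for partition 2

    Map<Integer, Long> previous = new HashMap<>();
    previous.put(0, 100L);         // partition 1 has no previous snapshot -> treated as 0

    // Expected: partition 0 -> 60, partition 1 -> 310, partition 2 -> 0
    System.out.println(calculateDiff(latest, previous));
  }
}
```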
SeekableStreamSupervisor.java
@@ -4126,6 +4126,12 @@ private void updateCurrentOffsets() throws InterruptedException, ExecutionException
@Nullable
protected abstract Map<PartitionIdType, Long> getPartitionTimeLag();

@Nullable
protected Map<PartitionIdType, Long> getPartitionProductionRate()
{
// By default production rate is not tracked; concrete supervisors (e.g. the Kafka supervisor) may override this.
return null;
}

/**
* Gets highest current offsets of all the tasks (actively reading and publishing) for all partitions of the stream.
* In case if no task is reading for a partition, returns offset stored in metadata storage for that partition.
@@ -4509,6 +4515,7 @@ protected void emitLag()
try {
Map<PartitionIdType, Long> partitionRecordLags = getPartitionRecordLag();
Map<PartitionIdType, Long> partitionTimeLags = getPartitionTimeLag();
Map<PartitionIdType, Long> partitionProductionRate = getPartitionProductionRate();

if (partitionRecordLags == null && partitionTimeLags == null) {
throw new ISE("Latest offsets have not been fetched");
@@ -4573,9 +4580,42 @@
);
};

BiConsumer<Map<PartitionIdType, Long>, String> productionEmitFn = (productionRates, suffix) -> {
// Nothing to emit when the concrete supervisor does not track production rates.
if (productionRates == null) {
return;
}

Map<String, Object> metricTags = spec.getContextValue(DruidMetrics.TAGS);
// Emit one event per partition, then one event with the total across all partitions.
for (Map.Entry<PartitionIdType, Long> entry : productionRates.entrySet()) {
emitter.emit(
ServiceMetricEvent.builder()
.setDimension(DruidMetrics.DATASOURCE, dataSource)
.setDimension(DruidMetrics.STREAM, getIoConfig().getStream())
.setDimension(DruidMetrics.PARTITION, entry.getKey())
.setDimensionIfNotNull(DruidMetrics.TAGS, metricTags)
.setMetric(
StringUtils.format("ingest/%s/partitionProduction%s", type, suffix),
entry.getValue()
)
);
}
emitter.emit(
ServiceMetricEvent.builder()
.setDimension(DruidMetrics.DATASOURCE, dataSource)
.setDimension(DruidMetrics.STREAM, getIoConfig().getStream())
.setDimensionIfNotNull(DruidMetrics.TAGS, metricTags)
.setMetric(
StringUtils.format("ingest/%s/production%s", type, suffix),
productionRates.values().stream().mapToLong(e -> e).sum()
)
);
};

// this should probably really be /count or /records or something.. but keeping like this for backwards compat
emitFn.accept(partitionRecordLags, "");
emitFn.accept(partitionTimeLags, "/time");

productionEmitFn.accept(partitionProductionRate, "");
}
catch (Exception e) {
log.warn(e, "Unable to compute lag");
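
For completeness, a small sketch (assuming `type` is `"kafka"` and an empty suffix, matching the `productionEmitFn.accept(partitionProductionRate, "")` call above) of the metric names this path emits and how the total relates to the per-partition values; `String.format` stands in for Druid's `StringUtils.format`, and the production values are hypothetical:

```java
import java.util.Map;

// Sketch of the metric names emitted by productionEmitFn for the Kafka supervisor.
public class ProductionMetricNamesSketch
{
  public static void main(String[] args)
  {
    String type = "kafka";
    String suffix = "";

    // Hypothetical per-partition production values for one collection cycle.
    Map<Integer, Long> productionRates = Map.of(0, 60L, 1, 45L);

    // Emitted once per partition (dimensions: dataSource, stream, partition, tags).
    System.out.println(String.format("ingest/%s/partitionProduction%s", type, suffix));

    // Emitted once with the sum over all partitions (dimensions: dataSource, stream, tags).
    long total = productionRates.values().stream().mapToLong(Long::longValue).sum();
    System.out.println(String.format("ingest/%s/production%s = %d", type, suffix, total));
  }
}
```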