Kafka message size options #383

Merged
Changes from all commits (21)
777f2de
v0.9.6.1 release
david-leifker Jan 19, 2023
5e490de
fix(release version): fix missing global release bump to 0.9.6.1
david-leifker Jan 20, 2023
4b55c8b
Merge remote-tracking branch 'upstream/master'
david-leifker Jan 25, 2023
da38639
bump versions
david-leifker Jan 25, 2023
87d4cab
Merge remote-tracking branch 'upstream/master'
david-leifker Feb 8, 2023
5a0dd4d
Merge remote-tracking branch 'upstream/master'
david-leifker Mar 3, 2023
6a37353
Merge remote-tracking branch 'upstream/master'
david-leifker Mar 9, 2023
b1ee45f
Merge remote-tracking branch 'upstream/master'
david-leifker May 8, 2023
dad8518
Merge remote-tracking branch 'upstream/master'
david-leifker Jun 17, 2023
de72e07
Merge remote-tracking branch 'upstream/master'
david-leifker Aug 15, 2023
fcde3cf
feat(kafka): enable kafka message size options
david-leifker Oct 18, 2023
ea9fad3
Merge remote-tracking branch 'upstream/master' into kafka-message-siz…
david-leifker Oct 18, 2023
b8dcfe1
bump version
david-leifker Oct 18, 2023
9e9579f
Merge branch 'master' into kafka-message-size-options
david-leifker Oct 21, 2023
69c2337
add parameters for kafka-setup datahub-upgrade
david-leifker Oct 30, 2023
a1a6899
Merge branch 'acryldata:kafka-message-size-options' into kafka-messag…
david-leifker Oct 30, 2023
d2b48b1
bump chart
david-leifker Oct 30, 2023
4fb03de
Merge branch 'master' into kafka-message-size-options
david-leifker Nov 3, 2023
c510609
default to none until image updates support snappy
david-leifker Nov 3, 2023
112ea11
update kafka broker setting for message size
david-leifker Nov 3, 2023
35d0fa0
Update Chart.yaml
david-leifker Nov 3, 2023
10 changes: 5 additions & 5 deletions charts/datahub/Chart.yaml
@@ -4,25 +4,25 @@ description: A Helm chart for LinkedIn DataHub
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
-version: 0.3.5
+version: 0.3.6
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 0.11.0
dependencies:
- name: datahub-gms
-version: 0.2.153
+version: 0.2.154
repository: file://./subcharts/datahub-gms
condition: datahub-gms.enabled
- name: datahub-frontend
-version: 0.2.142
+version: 0.2.143
repository: file://./subcharts/datahub-frontend
condition: datahub-frontend.enabled
- name: datahub-mae-consumer
-version: 0.2.147
+version: 0.2.148
repository: file://./subcharts/datahub-mae-consumer
condition: global.datahub_standalone_consumers_enabled
- name: datahub-mce-consumer
-version: 0.2.150
+version: 0.2.151
repository: file://./subcharts/datahub-mce-consumer
condition: global.datahub_standalone_consumers_enabled
- name: datahub-ingestion-cron
2 changes: 1 addition & 1 deletion charts/datahub/subcharts/datahub-frontend/Chart.yaml
@@ -12,7 +12,7 @@ description: A Helm chart for Kubernetes
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
-version: 0.2.142
+version: 0.2.143
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: v0.11.0
12 changes: 12 additions & 0 deletions charts/datahub/subcharts/datahub-frontend/templates/deployment.yaml
@@ -114,6 +114,18 @@ spec:
value: "{{ .Values.global.datahub_analytics_enabled }}"
- name: KAFKA_BOOTSTRAP_SERVER
value: "{{ .Values.global.kafka.bootstrap.server }}"
{{- with .Values.global.kafka.producer.compressionType }}
- name: KAFKA_PRODUCER_COMPRESSION_TYPE
value: "{{ . }}"
{{- end }}
{{- with .Values.global.kafka.producer.maxRequestSize }}
- name: KAFKA_PRODUCER_MAX_REQUEST_SIZE
value: {{ . | quote }}
{{- end }}
{{- with .Values.global.kafka.consumer.maxPartitionFetchBytes }}
- name: KAFKA_CONSUMER_MAX_PARTITION_FETCH_BYTES
value: {{ . | quote }}
{{- end }}
{{- if .Values.global.springKafkaConfigurationOverrides }}
{{- range $configName, $configValue := .Values.global.springKafkaConfigurationOverrides }}
- name: KAFKA_PROPERTIES_{{ $configName | replace "." "_" | upper }}
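Each of the new blocks uses the same Helm pattern: the with guard emits the env var only when the corresponding key is set under global.kafka, and quote keeps the numeric sizes as strings (Kubernetes requires env values to be strings, so an unquoted 5242880 would be rejected at apply time). The same three-variable block is added below for datahub-gms, both standalone consumers, and the datahub-upgrade job template. As a minimal sketch, not part of this diff, this is roughly what the rendered container env would gain if the example values from the chart comments were set:

# Hypothetical values:
#   global.kafka.producer.compressionType: snappy
#   global.kafka.producer.maxRequestSize: "5242880"
#   global.kafka.consumer.maxPartitionFetchBytes: "5242880"
# Rendered env entries:
- name: KAFKA_PRODUCER_COMPRESSION_TYPE
  value: "snappy"
- name: KAFKA_PRODUCER_MAX_REQUEST_SIZE
  value: "5242880"
- name: KAFKA_CONSUMER_MAX_PARTITION_FETCH_BYTES
  value: "5242880"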
2 changes: 1 addition & 1 deletion charts/datahub/subcharts/datahub-gms/Chart.yaml
@@ -12,7 +12,7 @@ description: A Helm chart for LinkedIn DataHub's datahub-gms component
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
-version: 0.2.153
+version: 0.2.154
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: v0.11.0
12 changes: 12 additions & 0 deletions charts/datahub/subcharts/datahub-gms/templates/deployment.yaml
@@ -165,6 +165,18 @@ spec:
value: "{{ .Values.global.sql.datasource.driver }}"
- name: KAFKA_BOOTSTRAP_SERVER
value: "{{ .Values.global.kafka.bootstrap.server }}"
{{- with .Values.global.kafka.producer.compressionType }}
- name: KAFKA_PRODUCER_COMPRESSION_TYPE
value: "{{ . }}"
{{- end }}
{{- with .Values.global.kafka.producer.maxRequestSize }}
- name: KAFKA_PRODUCER_MAX_REQUEST_SIZE
value: {{ . | quote }}
{{- end }}
{{- with .Values.global.kafka.consumer.maxPartitionFetchBytes }}
- name: KAFKA_CONSUMER_MAX_PARTITION_FETCH_BYTES
value: {{ . | quote }}
{{- end }}
{{- if eq .Values.global.kafka.schemaregistry.type "INTERNAL" }}
- name: KAFKA_SCHEMAREGISTRY_URL
value: {{ printf "http://localhost:%s/schema-registry/api/" .Values.global.datahub.gms.port }}
6 changes: 6 additions & 0 deletions charts/datahub/subcharts/datahub-gms/values.yaml
@@ -160,6 +160,12 @@ global:
server: "broker:9092"
schemaregistry:
url: "http://schema-registry:8081"
## Kafka producer and consumer settings
#producer:
# compressionType: snappy
# maxRequestSize: "5242880"
#consumer:
# maxPartitionFetchBytes: "5242880"

neo4j:
host: "neo4j:7474"
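These commented lines sit under global.kafka in the subchart's own values.yaml (the flattened page has stripped their indentation). They are defaults that matter mostly when datahub-gms is installed on its own; when the umbrella chart is used, the global.kafka block in charts/datahub/values.yaml takes precedence. Uncommented and with the nesting restored, the block would read as follows — the values are the illustrative ones from the comments, not recommendations:

global:
  kafka:
    producer:
      compressionType: snappy
      maxRequestSize: "5242880"
    consumer:
      maxPartitionFetchBytes: "5242880"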
2 changes: 1 addition & 1 deletion charts/datahub/subcharts/datahub-mae-consumer/Chart.yaml
@@ -12,7 +12,7 @@ description: A Helm chart for Kubernetes
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
-version: 0.2.147
+version: 0.2.148
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: v0.11.0
12 changes: 12 additions & 0 deletions charts/datahub/subcharts/datahub-mae-consumer/templates/deployment.yaml
@@ -111,6 +111,18 @@ spec:
value: "{{ .Values.global.datahub.gms.port }}"
- name: KAFKA_BOOTSTRAP_SERVER
value: "{{ .Values.global.kafka.bootstrap.server }}"
{{- with .Values.global.kafka.producer.compressionType }}
- name: KAFKA_PRODUCER_COMPRESSION_TYPE
value: "{{ . }}"
{{- end }}
{{- with .Values.global.kafka.producer.maxRequestSize }}
- name: KAFKA_PRODUCER_MAX_REQUEST_SIZE
value: {{ . | quote }}
{{- end }}
{{- with .Values.global.kafka.consumer.maxPartitionFetchBytes }}
- name: KAFKA_CONSUMER_MAX_PARTITION_FETCH_BYTES
value: {{ . | quote }}
{{- end }}
{{- if eq .Values.global.kafka.schemaregistry.type "INTERNAL" }}
- name: KAFKA_SCHEMAREGISTRY_URL
value: {{ printf "http://%s-%s:%s/schema-registry/api/" .Release.Name "datahub-gms" .Values.global.datahub.gms.port }}
6 changes: 6 additions & 0 deletions charts/datahub/subcharts/datahub-mae-consumer/values.yaml
@@ -180,6 +180,12 @@ global:
server: "broker:9092"
schemaregistry:
url: "http://schema-registry:8081"
## Kafka producer and consumer settings
#producer:
# compressionType: snappy
# maxRequestSize: 5242880
#consumer:
# maxPartitionFetchBytes: 5242880

neo4j:
host: "neo4j:7474"
2 changes: 1 addition & 1 deletion charts/datahub/subcharts/datahub-mce-consumer/Chart.yaml
@@ -12,7 +12,7 @@ description: A Helm chart for Kubernetes
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
-version: 0.2.150
+version: 0.2.151
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: v0.11.0
12 changes: 12 additions & 0 deletions charts/datahub/subcharts/datahub-mce-consumer/templates/deployment.yaml
@@ -107,6 +107,18 @@ spec:
value: "true"
- name: KAFKA_BOOTSTRAP_SERVER
value: "{{ .Values.global.kafka.bootstrap.server }}"
{{- with .Values.global.kafka.producer.compressionType }}
- name: KAFKA_PRODUCER_COMPRESSION_TYPE
value: "{{ . }}"
{{- end }}
{{- with .Values.global.kafka.producer.maxRequestSize }}
- name: KAFKA_PRODUCER_MAX_REQUEST_SIZE
value: {{ . | quote }}
{{- end }}
{{- with .Values.global.kafka.consumer.maxPartitionFetchBytes }}
- name: KAFKA_CONSUMER_MAX_PARTITION_FETCH_BYTES
value: {{ . | quote }}
{{- end }}
{{- if eq .Values.global.kafka.schemaregistry.type "INTERNAL" }}
- name: KAFKA_SCHEMAREGISTRY_URL
value: {{ printf "http://%s-%s:%s/schema-registry/api/" .Release.Name "datahub-gms" .Values.global.datahub.gms.port }}
6 changes: 6 additions & 0 deletions charts/datahub/subcharts/datahub-mce-consumer/values.yaml
@@ -176,6 +176,12 @@ global:
server: "broker:9092"
schemaregistry:
url: "http://schema-registry:8081"
# Kafka producer and consumer settings
#producer:
# compressionType: snappy
# maxRequestSize: "5242880"
#consumer:
# maxPartitionFetchBytes: "5242880"

datahub:
version: head
12 changes: 12 additions & 0 deletions charts/datahub/templates/datahub-upgrade/_upgrade.tpl
@@ -41,6 +41,18 @@ Return the env variables for upgrade jobs
value: "{{ .Values.global.sql.datasource.driver }}"
- name: KAFKA_BOOTSTRAP_SERVER
value: "{{ .Values.global.kafka.bootstrap.server }}"
{{- with .Values.global.kafka.producer.compressionType }}
- name: KAFKA_PRODUCER_COMPRESSION_TYPE
value: "{{ . }}"
{{- end }}
{{- with .Values.global.kafka.producer.maxRequestSize }}
- name: KAFKA_PRODUCER_MAX_REQUEST_SIZE
value: {{ . | quote }}
{{- end }}
{{- with .Values.global.kafka.consumer.maxPartitionFetchBytes }}
- name: KAFKA_CONSUMER_MAX_PARTITION_FETCH_BYTES
value: {{ . | quote }}
{{- end }}
{{- if eq .Values.global.kafka.schemaregistry.type "INTERNAL" }}
- name: KAFKA_SCHEMAREGISTRY_URL
value: {{ printf "http://%s-%s:%s/schema-registry/api/" .Release.Name "datahub-gms" .Values.global.datahub.gms.port }}
4 changes: 4 additions & 0 deletions charts/datahub/templates/kafka-setup-job.yml
@@ -62,6 +62,10 @@ spec:
value: {{ .Values.global.kafka.zookeeper.server | quote }}
- name: KAFKA_BOOTSTRAP_SERVER
value: {{ .Values.global.kafka.bootstrap.server | quote }}
{{- with .Values.global.kafka.maxMessageBytes }}
- name: MAX_MESSAGE_BYTES
value: {{ . | quote }}
{{- end }}
{{- if eq .Values.global.kafka.schemaregistry.type "INTERNAL" }}
- name: USE_CONFLUENT_SCHEMA_REGISTRY
value: "false"
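This hands global.kafka.maxMessageBytes to the kafka-setup job, presumably so the setup image can apply a matching max.message.bytes configuration to the DataHub topics it creates (the commit "update kafka broker setting for message size" points the same way). A minimal sketch of what the template renders when the chart default added below is in place:

# values (default added in this PR):
#   global:
#     kafka:
#       maxMessageBytes: "5242880"  # 5MB
# rendered env entry in the kafka-setup job:
- name: MAX_MESSAGE_BYTES
  value: "5242880"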
6 changes: 6 additions & 0 deletions charts/datahub/values.yaml
@@ -499,6 +499,12 @@ global:
metadata_change_log_timeseries_topic_name: "MetadataChangeLog_Timeseries_v1"
platform_event_topic_name: "PlatformEvent_v1"
datahub_upgrade_history_topic_name: "DataHubUpgradeHistory_v1"
maxMessageBytes: "5242880" # 5MB
producer:
compressionType: none
maxRequestSize: "5242880" # 5MB
consumer:
maxPartitionFetchBytes: "5242880" # 5MB
## For AWS MSK set this to a number larger than 1
# partitions: 3
# replicationFactor: 3
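These defaults keep the size knobs aligned at 5 MB: maxMessageBytes is what kafka-setup applies on the topic side, the producer's maxRequestSize caps what the DataHub services send, and the consumer's maxPartitionFetchBytes is kept at least as large so big metadata messages can be fetched efficiently. compressionType stays at none for now, per the commit note, until the images support snappy. To raise the ceiling, bump the values together; a hedged override sketch, where the file name and the 10 MB figure are only examples, applied with helm upgrade -f:

# message-size-values.yaml (hypothetical override file)
global:
  kafka:
    maxMessageBytes: "10485760"           # topic/broker limit applied by kafka-setup
    producer:
      maxRequestSize: "10485760"          # producer max.request.size
    consumer:
      maxPartitionFetchBytes: "10485760"  # consumer max.partition.fetch.bytes

If the prerequisites chart provides the broker, raise its kafka.maxMessageBytes to match (see the prerequisites change below).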
2 changes: 1 addition & 1 deletion charts/prerequisites/Chart.yaml
@@ -4,7 +4,7 @@ description: A Helm chart for packages that Datahub depends on
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
-version: 0.1.3
+version: 0.1.4
dependencies:
- name: elasticsearch
version: 7.17.3
7 changes: 4 additions & 3 deletions charts/prerequisites/values.yaml
@@ -15,16 +15,16 @@ elasticsearch:
clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s"

# # Shrink default JVM heap.
-esJavaOpts: "-Xmx384m -Xms384m"
+esJavaOpts: "-Xmx512m -Xms512m"

# # Allocate smaller chunks of memory per pod.
resources:
requests:
cpu: "100m"
-memory: "768M"
+memory: "1024M"
limits:
cpu: "1000m"
-memory: "768M"
+memory: "1024M"

# # Request smaller persistent volumes.
# volumeClaimTemplate:
@@ -131,6 +131,7 @@ cp-helm-charts:
# Bitnami version of Kafka that deploys open source Kafka https://artifacthub.io/packages/helm/bitnami/kafka
kafka:
enabled: true
+maxMessageBytes: "5242880"
kraft:
enabled: false
zookeeper:
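On the prerequisites side, the Bitnami Kafka chart gets the matching 5 MB value; its maxMessageBytes setting is understood to feed the broker's message.max.bytes, so the broker accepts what the DataHub producers are now permitted to send. (The Elasticsearch change in the same file raises the JVM heap to 512m inside a 1024M container, keeping heap at roughly half the memory limit.) A sketch of keeping both charts in step when raising the limit, reusing the hypothetical 10 MB figure from above:

# prerequisites override (sketch): keep the broker limit >= the DataHub producer limit
kafka:
  enabled: true
  maxMessageBytes: "10485760"  # assumed to map to the broker's message.max.bytes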