Support sink decoupling during backfill for CREATE SINK INTO TABLE #19285
Comments
The general idea LGTM. I think we need some more detailed design to ensure that the data in the log store can converge to 0.

The work overlaps with unaligned join (the log store executor can be used for both). Will write a design doc for this.

Actually, why just during backfill? Shouldn't …

Outside of the backfilling period, the downstream MV will wait for the upstream barrier to align, and there is no way to make the downstream progress faster.

Why? If we use kv_log_store, it will just buffer the changes, and the barrier can go past once these changes have been written to the log store.
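To illustrate the point made in the last comment, here is a minimal Rust sketch of how a log store decouples barrier progress from sink progress: the upstream appends chunks and barriers to a persistent log and moves on, while the sink drains the log at its own pace. All names (`LogStoreBuffer`, `LogEntry`) are hypothetical and simplified, not RisingWave's actual kv log store API.

```rust
use std::collections::VecDeque;

// Hypothetical, simplified stand-in for a kv log store.
#[derive(Debug, Clone, PartialEq)]
enum LogEntry {
    Chunk(String), // a batch of changes (stand-in for a stream chunk)
    Barrier(u64),  // an epoch barrier
}

struct LogStoreBuffer {
    queue: VecDeque<LogEntry>,
}

impl LogStoreBuffer {
    fn new() -> Self {
        Self { queue: VecDeque::new() }
    }

    // Upstream side: `write` returns as soon as the entry is buffered,
    // so a barrier can "go past" without waiting for the slow sink.
    fn write(&mut self, entry: LogEntry) {
        self.queue.push_back(entry);
    }

    // Downstream (sink) side: consumes entries at its own pace.
    fn read(&mut self) -> Option<LogEntry> {
        self.queue.pop_front()
    }
}

fn main() {
    let mut log = LogStoreBuffer::new();
    log.write(LogEntry::Chunk("insert 1".into()));
    log.write(LogEntry::Barrier(100)); // barrier passes without alignment delay
    log.write(LogEntry::Chunk("insert 2".into()));

    // The sink drains the log later without holding up the barrier.
    let mut drained = 0;
    while log.read().is_some() {
        drained += 1;
    }
    println!("drained {drained} entries"); // prints "drained 3 entries"
}
```

The "converge to 0" concern from the first comment maps to this sketch as: the sink must on average read faster than the upstream writes, otherwise `queue` grows without bound.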
Is your feature request related to a problem? Please describe.
Backfilling can backpressure the upstream, causing existing streaming jobs to become slower or even stuck. There are three cases where backfilling can happen:

1. `CREATE MATERIALIZED VIEW`
2. `CREATE SINK`
3. `CREATE SINK INTO TABLE`
The current ways to mitigate backfilling's effect on the upstream:

- `SET BACKFILL_RATE_LIMIT TO xxx`. Supported for 1, 2, 3.
- `SET sink_decouple TO true` (default on). Supported for 2.
- `SET streaming_use_snapshot_backfill TO true` (default off, experimental for now). Supported for 1.

The only effective option for 3 is the rate limit, which requires manual operation and an understanding of the workload before a good value can be determined. Therefore, I think we should support sink decoupling for sink into table as well. This is also a prerequisite for doing serverless backfill for sink into table.
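To make the rate-limit option above concrete, here is a minimal Rust sketch of what a backfill rate limit does conceptually: cap how many rows per tick the backfill may emit, so the upstream is throttled by a fixed budget rather than by the sink's speed. This is a hypothetical, simplified model (`RateLimiter` and its methods are invented names), not RisingWave's actual implementation.

```rust
// Hypothetical fixed-budget rate limiter: refilled once per tick,
// grants at most `rows_per_tick` rows between refills.
struct RateLimiter {
    rows_per_tick: usize,
    budget: usize,
}

impl RateLimiter {
    fn new(rows_per_tick: usize) -> Self {
        Self { rows_per_tick, budget: rows_per_tick }
    }

    // Called once per tick (e.g. per barrier) to restore the budget.
    fn refill(&mut self) {
        self.budget = self.rows_per_tick;
    }

    // Returns how many of the `want` rows may be emitted now;
    // the rest must wait for a future refill.
    fn admit(&mut self, want: usize) -> usize {
        let granted = want.min(self.budget);
        self.budget -= granted;
        granted
    }
}

fn main() {
    let mut rl = RateLimiter::new(100); // like BACKFILL_RATE_LIMIT = 100
    println!("first batch: {}", rl.admit(60));  // prints "first batch: 60"
    println!("second batch: {}", rl.admit(60)); // prints "second batch: 40"
    rl.refill();
    println!("after refill: {}", rl.admit(60)); // prints "after refill: 60"
}
```

This also shows why the setting requires workload knowledge: a budget that is too low starves the backfill, while one that is too high fails to protect the upstream.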
Describe the solution you'd like
There are two ways to implement sink decoupling for sink into table:
Describe alternatives you've considered
No response
Additional context
No response