feat(backfill): do not require at least 1 record to be read per epoch, if rate limit enabled #16744
Conversation
I didn't get it. The motivation of requiring at least 1 record to be read per epoch is to prevent livelock when scanning from storage (#12780).
That problem exists regardless of whether the rate limit is enabled or not; they are unrelated, so I don't understand why this PR associates them.
Let me guess: suppose you met a case where even [...]. Fixing a new problem while creating a known problem sounds bad to me.
This PR does not lead to any regression. Consider the case where we don't read at least 1 record: that only happens when the rate limiter tells us we have hit the snapshot read rate limit and should not read a new record. That's totally fine, because when rate limit capacity frees up again, we can read the record then. On the other hand, the requirement of 1 record per epoch is unacceptable in cases where the latency of processing a single record is high; applying a rate limit in that scenario still can't change the distribution of the stream.
The provided test case will take an extremely long time without the fix in this PR. For sure we can have this:
Does 1.9.0 need to wait for this, or is including it in a minor version also ok?
Minor version is ok.
LGTM. IIUC, the rate limiter is always checked before the "at least 1 row per barrier" read, so only `rate_limit = 0` will stop the "at least 1 row per barrier" read. I think that is acceptable, because `rate_limit = 0` means we want to throttle the streaming DAG entirely.
I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.
What's changed and what's your intention?
For backfill, we have a requirement to read at least 1 record per barrier from the snapshot.
This means we are forced to have the following distribution in our stream:
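Roughly, that forced interleaving looks like the following (an illustration only, standing in for the diagram referenced above):

```
barrier -> snapshot record -> barrier -> snapshot record -> barrier -> ...
```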
Which means the barrier latency is tied to the processing time of the records.
In cases where the processing time is long, e.g. a UDF with high latency, the barrier latency will spike as well.
In the associated test case, we show that when UDF calls have 5s latency, the test takes a very long time to complete without this fix.
This occurs even when a rate limit is set, undermining its usefulness.
The solution is to check whether the rate limiter currently allows reading a record before applying the "read at least 1 record per epoch" step, as sketched below.
If it can't, we simply skip the step; eventually capacity frees up and we get to read a record.
If it can, we continue to apply the step as before.
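For illustration, here is a minimal, self-contained sketch of that per-barrier decision. It is not the actual backfill executor code; the `RateLimiter` type and the `must_read_before_barrier` helper are hypothetical stand-ins for the real rate-limiting machinery.

```rust
/// Hypothetical rate limiter: `check()` returns true if one more read is allowed now.
struct RateLimiter {
    permits_left: u64,
}

impl RateLimiter {
    fn check(&self) -> bool {
        self.permits_left > 0
    }

    fn consume(&mut self) {
        self.permits_left -= 1;
    }
}

/// Decide whether we must read a snapshot record before yielding the barrier.
/// Before this PR: always true (at least 1 record per epoch).
/// After this PR: skipped while the rate limiter reports no capacity, so the
/// barrier can pass through immediately and the read happens later.
fn must_read_before_barrier(rate_limit_enabled: bool, limiter: &RateLimiter) -> bool {
    if rate_limit_enabled && !limiter.check() {
        // No capacity right now: skip the forced read; we read once capacity frees up.
        false
    } else {
        // Capacity available (or no rate limit configured): keep the old behaviour
        // and read at least one record to avoid livelocking on snapshot re-scans.
        true
    }
}

fn main() {
    let mut limiter = RateLimiter { permits_left: 1 };

    // Capacity available: we still force one snapshot read per barrier.
    assert!(must_read_before_barrier(true, &limiter));
    limiter.consume();

    // Capacity exhausted: the forced read is skipped, so barrier latency no longer
    // depends on how long a single (e.g. slow-UDF) record takes to process.
    assert!(!must_read_before_barrier(true, &limiter));

    // Rate limiting disabled: behaviour is unchanged from before the PR.
    assert!(must_read_before_barrier(false, &limiter));

    println!("sketch ok");
}
```

The key point is that the forced read is only skipped while the limiter reports no capacity, so the original anti-livelock behaviour is preserved whenever reads are actually allowed.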
Checklist
./risedev check (or alias, ./risedev c)
Documentation
Release note
If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.