
feat(sink): reimplement sink coordinator worker and support scale #18467

Merged
merged 2 commits into main from yiming/sink-coordinator-scale on Sep 11, 2024

Conversation

wenym1 (Contributor) commented on Sep 9, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

Our sink coordinator worker determines whether all sink metadata of an epoch has been collected by checking whether all vnodes have reported. The coordinator worker handles epochs one by one: when it receives a commit request from one parallelism, it blocks awaiting the commit requests from all other parallelisms, commits them all, and then waits for the next epoch. In the current implementation there are never concurrent epochs.

The current implementation works in the normal case, but not under scale-out: when scale-out happens, a newly created parallelism may send a commit request for a later epoch before the existing parallelisms have finished a previous epoch, so more than one epoch can be in flight at once. The current implementation reports an error when this happens, and scale-out is in fact achieved by using this error to trigger a log store rewind.

In this PR, we reimplement the sink coordinator worker and support concurrent epochs by introducing an epoch queue. We also add `update_vnode_bitmap` and `stop` variants to the sink coordination request so that each sink parallelism can send the relevant information to the coordinator worker. Illustrative sketches of both ideas follow.
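For illustration, the request shape might look roughly like this (hypothetical Rust for this description only; the actual message in the PR is defined elsewhere and may differ):

```rust
/// Hypothetical sketch of a sink coordination request, for illustration only.
enum SinkCoordinationRequest {
    /// Commit the sink metadata of one epoch from one parallelism.
    Commit { epoch: u64, metadata: Vec<u8> },
    /// The parallelism's vnode assignment has changed (e.g. on scale-out).
    UpdateVnodeBitmap { vnodes: Vec<u32> },
    /// The parallelism is stopping and will send no further requests.
    Stop,
}
```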

We also change the behavior to force a commit on vnode bitmap updates for sinks that decouple commits from checkpoint barriers.
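The epoch-queue idea can be sketched as follows (hypothetical names such as `EpochQueue` and `collect`, not the PR's actual code; the real implementation must also update the expected vnode set when bitmaps change):

```rust
use std::collections::{BTreeMap, HashSet};

/// Buffers per-epoch metadata and commits epochs strictly in order, once
/// every expected vnode of the frontier epoch has reported.
struct EpochQueue {
    expected_vnodes: usize,
    /// epoch -> vnodes whose metadata has been collected so far
    pending: BTreeMap<u64, HashSet<u32>>,
}

impl EpochQueue {
    fn new(expected_vnodes: usize) -> Self {
        Self { expected_vnodes, pending: BTreeMap::new() }
    }

    /// Record metadata from one parallelism and return the epochs that are
    /// now ready to commit, in epoch order. A later epoch never commits
    /// before an earlier one, even if it completes first.
    fn collect(&mut self, epoch: u64, vnodes: impl IntoIterator<Item = u32>) -> Vec<u64> {
        self.pending.entry(epoch).or_default().extend(vnodes);
        let mut ready = Vec::new();
        // Drain only the frontier of fully collected epochs.
        while let Some(entry) = self.pending.first_entry() {
            if entry.get().len() == self.expected_vnodes {
                ready.push(*entry.key());
                entry.remove();
            } else {
                break;
            }
        }
        ready
    }
}
```

The key difference from the old worker: instead of blocking on exactly one in-flight epoch, the coordinator can accept requests for several epochs at once, which is precisely what a newly scaled-out parallelism produces.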

Checklist

  • I have written necessary rustdoc comments
  • I have added necessary unit tests and integration tests
  • I have added test labels as necessary. See details.
  • I have added fuzzing tests or opened an issue to track them. (Optional, recommended for new SQL features. Sqlsmith: SQL feature generation #7934)
  • My PR contains breaking changes. (If it deprecates some features, please create a tracking issue to remove them in the future).
  • All checks passed in ./risedev check (or alias, ./risedev c)
  • My PR changes performance-critical code. (Please run macro/micro-benchmarks and show the results.)
  • My PR contains critical fixes that are necessary to be merged into the latest release. (Please check out the details)

Documentation

  • My PR needs documentation updates. (Please use the Release note section below to summarize the impact on users)

Release note

If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.

hzxa21 (Collaborator) left a comment:

LGTM

Two inline diff excerpts were attached to the review. The first is a macro arm that sends a message on a channel and logs on failure (closing braces reconstructed here; the surrounding `macro_rules!` name is not visible in the excerpt):

```rust
($tx:expr, $msg:expr) => {
    if $tx.send($msg).await.is_err() {
        error!("unable to send msg");
    }
};
```

The second is the signature of the helper that the following comment refers to:

```rust
async fn run_future_with_periodic_fn<F: Future>(
```
A collaborator commented:
Not related to this PR: this method is very useful for debugging stuckness issues. Random thought: can we use it in the sink writer as well, given that we have occasionally seen sinks get stuck?

wenym1 (Contributor, author) replied:

On the sink writer side, the future can be better instrumented with await-tree inside actors.
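For reference, a minimal sketch of what a periodic-poll helper like `run_future_with_periodic_fn` might look like (assuming tokio; everything beyond the function name and the `F: Future` bound is illustrative, not the PR's actual code):

```rust
use std::future::Future;
use std::time::Duration;

/// Drive `future` to completion, calling `periodic_fn` each time `interval`
/// elapses while the future is still pending (e.g. to log that a sink
/// commit is taking suspiciously long). Sketch only; the real helper in the
/// PR may take different parameters.
async fn run_future_with_periodic_fn<F: Future>(
    future: F,
    interval: Duration,
    mut periodic_fn: impl FnMut(),
) -> F::Output {
    tokio::pin!(future);
    loop {
        // `timeout` resolves with the future's output, or with `Err` when
        // the interval elapses first; in that case we run the callback and
        // keep polling the same pinned future.
        match tokio::time::timeout(interval, &mut future).await {
            Ok(output) => return output,
            Err(_elapsed) => periodic_fn(),
        }
    }
}
```

A caller could, for example, wrap a coordinator commit future in this helper and log a warning every few seconds until the commit resolves.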

@wenym1 wenym1 added this pull request to the merge queue Sep 11, 2024
Merged via the queue into main with commit 7833509 on Sep 11, 2024
38 of 39 checks passed
@wenym1 wenym1 deleted the yiming/sink-coordinator-scale branch September 11, 2024 03:46