
feat(sink): add metrics to monitor sink back pressure #16593

Merged

5 commits merged into main from yiming/sink-backpressure-metrics on May 7, 2024

Conversation

@wenym1 wenym1 (Contributor) commented May 6, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

Resolve #15473

Sink back pressure is measured as total_wait_new_future_time / total_time, where wait_new_future_time is the time between a next_item future returning ready and the next next_item future being created.
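To make the definition concrete, here is a minimal, self-contained sketch of the idea (the struct, field, and function names are illustrative placeholders, not the actual log reader or sink code in this PR): everything between an item becoming ready and the request for the next item is charged to sink back pressure, and the ratio of that accumulated time to wall-clock time gives the back-pressure percentage.

```rust
use std::time::{Duration, Instant};

/// Hypothetical per-sink metrics; only stands in for the real metrics struct.
struct SinkBackpressureMetrics {
    /// Total time spent between one item becoming ready and the request for the
    /// next item, i.e. time the sink keeps the log reader idle.
    total_wait_new_future: Duration,
}

/// Synchronous analogue of the consume loop: `fetch_next_item` stands in for the
/// `next_item` future, `consume` for handing the item to the sink.
fn consume_log<I>(
    mut fetch_next_item: impl FnMut() -> Option<I>,
    mut consume: impl FnMut(I),
    metrics: &mut SinkBackpressureMetrics,
) {
    while let Some(item) = fetch_next_item() {
        // The item is ready; everything until the next fetch is attributed to the sink.
        let wait_start = Instant::now();
        consume(item);
        metrics.total_wait_new_future += wait_start.elapsed();
    }
}

fn main() {
    let mut metrics = SinkBackpressureMetrics {
        total_wait_new_future: Duration::ZERO,
    };
    let total_start = Instant::now();

    let mut remaining = 20u32;
    consume_log(
        || {
            if remaining == 0 {
                return None;
            }
            remaining -= 1;
            std::thread::sleep(Duration::from_millis(30)); // upstream produces an item every 30ms
            Some(remaining)
        },
        |_item| std::thread::sleep(Duration::from_millis(20)), // sink takes 20ms per item
        &mut metrics,
    );

    let total = total_start.elapsed();
    // Expect roughly 20 / (30 + 20) = 40% back pressure.
    println!(
        "sink back pressure ~ {:.0}%",
        100.0 * metrics.total_wait_new_future.as_secs_f64() / total.as_secs_f64()
    );
}
```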

Metrics on Grafana. The barrier interval is 1s. We manually inject a sleep of 200ms, 500ms, and 2000ms into the blackhole sink and observe sink back pressure of 20%, 50%, and 100% respectively, as expected.

[Grafana screenshot: sink back-pressure panel]

The truncate method does not need to be async, so to make the back-pressure measurement easier, we also make truncate non-async in this PR.

Checklist

  • I have written necessary rustdoc comments
  • I have added necessary unit tests and integration tests
  • I have added test labels as necessary. See details.
  • I have added fuzzing tests or opened an issue to track them. (Optional, recommended for new SQL features Sqlsmith: Sql feature generation #7934).
  • My PR contains breaking changes. (If it deprecates some features, please create a tracking issue to remove them in the future).
  • All checks passed in ./risedev check (or alias, ./risedev c)
  • My PR changes performance-critical code. (Please run macro/micro-benchmarks and show the results.)
  • My PR contains critical fixes that are necessary to be merged into the latest release. (Please check out the details)

Documentation

  • My PR needs documentation updates. (Please use the Release note section below to summarize the impact on users)

Release note

If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.


gitguardian bot commented May 6, 2024

⚠️ GitGuardian has uncovered 1 secret following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secret in your pull request

| GitGuardian id | Status | Secret | Commit | Filename |
| --- | --- | --- | --- | --- |
| 9425213 | Triggered | Generic Password | 8a36e84 | e2e_test/source/cdc/cdc.validate.postgres.slt |
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secret safely. Learn the best practices here.
  3. Revoke and rotate this secret.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.


@BugenZhao BugenZhao (Member) left a comment

There's similar stuff for monitoring the back-pressure rate of dispatchers, where we simply keep a counter to track the total blocking time accumulated, then visualize the rate of it over time as the back-pressure rate.

Can we also follow that style, which looks simpler and more consistent?

```rust
let start_time = Instant::now();
dispatcher.dispatch_barrier(barrier.clone()).await?;
dispatcher
    .actor_output_buffer_blocking_duration_ns
    .inc_by(start_time.elapsed().as_nanos() as u64);
StreamResult::Ok(())
```

@wenym1 wenym1 (Contributor, Author) commented May 7, 2024

> There's similar stuff for monitoring the back-pressure rate of dispatchers, where we simply keep a counter to track the total blocking time accumulated, then visualize the rate of it over time as the back-pressure rate.
>
> Can we also follow that style, which looks simpler and more consistent?

It's different. The dispatcher is like monitoring backpressure from the sender side of a channel, but here we only have LogReader, which is similar to a channel receiver rather than a sender.

@BugenZhao BugenZhao (Member) commented

> It's different. The dispatcher is like monitoring backpressure from the sender side of a channel, but here we only have LogReader, which is similar to a channel receiver rather than a sender.

You're right. I meant directly replace self.total_wait_new_future_micro_secs with a counter metric. Will this work?
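For context, a rough sketch of what that replacement could look like, using the prometheus crate directly (the metric name, help text, and surrounding code are made up for illustration and are not the actual wrappers used in the codebase): the accumulated wait time goes into a monotonic counter instead of a local field, and the dashboard derives the back-pressure ratio from the counter's rate over time, just like the dispatcher panel.

```rust
use std::time::Instant;

use prometheus::{IntCounter, Registry};

fn main() -> Result<(), prometheus::Error> {
    let registry = Registry::new();

    // Monotonic counter of nanoseconds the log reader spent blocked on the sink,
    // analogous to the dispatcher's actor_output_buffer_blocking_duration_ns.
    let wait_ns = IntCounter::new(
        "sink_log_reader_wait_new_future_duration_ns", // placeholder metric name
        "Total time spent before creating the next next_item future",
    )?;
    registry.register(Box::new(wait_ns.clone()))?;

    // In the consume loop: time the section attributed to the sink and bump the
    // counter, instead of summing into self.total_wait_new_future_micro_secs.
    let start = Instant::now();
    hand_item_to_sink();
    wait_ns.inc_by(start.elapsed().as_nanos() as u64);

    // A Grafana panel can then plot the rate of this counter over time, divided
    // by 1e9, to show the fraction of wall-clock time spent blocked.
    Ok(())
}

// Stand-in for the real sink write; just burns some time here.
fn hand_item_to_sink() {
    std::thread::sleep(std::time::Duration::from_millis(5));
}
```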

@wenym1 wenym1 (Contributor, Author) commented May 7, 2024

> > It's different. The dispatcher is like monitoring backpressure from the sender side of a channel, but here we only have LogReader, which is similar to a channel receiver rather than a sender.
>
> You're right. I meant directly replace self.total_wait_new_future_micro_secs with a counter metric. Will this work?

Do you mean to measure and report the two times separately, and reflect

Oh, I get what you mean. Let me have a try.

@wenym1 wenym1 (Contributor, Author) commented May 7, 2024

> Oh, I get what you mean. Let me have a try.

It works! Though the curve is not as sharp as before, it looks good to me, given that it's much easier.

[Grafana screenshot: sink back-pressure panel with the counter-based metric]

@wenym1 wenym1 requested a review from BugenZhao May 7, 2024 05:47
@BugenZhao BugenZhao (Member) left a comment

LGTM

@wenym1 wenym1 enabled auto-merge May 7, 2024 06:03
@wenym1 wenym1 added this pull request to the merge queue May 7, 2024
Merged via the queue into main with commit 4030ddc May 7, 2024
20 of 32 checks passed
@wenym1 wenym1 deleted the yiming/sink-backpressure-metrics branch May 7, 2024 06:42
github-merge-queue bot pushed a commit that referenced this pull request May 7, 2024
Development

Successfully merging this pull request may close these issues.

Add backpressure metrics in "Sink Metrics" section
2 participants