
Bootstrap tuning #4806

Merged: 4 commits into nanocurrency:develop on Dec 10, 2024

Conversation

pwojcikdev (Contributor):

This fixes a few more problems with bootstrap that I noticed when working on traffic shaping.

gr0vity-dev-bot commented Dec 6, 2024

Test Results for Commit 6924d54

Pull Request 4806: Results
Overall Status:

Test Case Results

  • 5n4pr_conf_10k_bintree: PASS (Duration: 115s)
  • 5n4pr_conf_10k_change: PASS (Duration: 204s)
  • 5n4pr_conf_change_dependant: PASS (Duration: 133s)
  • 5n4pr_conf_change_independant: PASS (Duration: 139s)
  • 5n4pr_conf_send_dependant: PASS (Duration: 124s)
  • 5n4pr_conf_send_independant: PASS (Duration: 129s)
  • 5n4pr_rocks_10k_bintree: PASS (Duration: 110s)
  • 5n4pr_rocks_10k_change: PASS (Duration: 218s)

Last updated: 2024-12-10 13:42:21 UTC

Comment on lines 579 to 585:

// Do not throttle accounts that are probably fully synced
if (sent && fails == 0)
{
	nano::lock_guard<nano::mutex> lock{ mutex };
	accounts.timestamp_set (account);
}
Contributor:

By setting the timestamp to the current value (accounts.timestamp_set (account);), a cooldown period in next_priority () is applied to this account. Is that the desired behavior?

pwojcikdev (Contributor Author):

This should work as intended. It might seem counterintuitive, but we want to throttle only accounts that potentially have more blocks; the cooldown avoids requesting blocks from the same frontier over and over before the current batch is processed by the block processor.

The flow is something like:

if (likely to have more blocks)
{
	set cooldown timestamp
}

// ... later, in the block processing logic

if (last received block processed)
{
	reset cooldown timestamp
}
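
For illustration, a minimal C++ sketch of that cooldown flow follows. The type and member names here (cooldown_table, timestamp_set, timestamp_reset, eligible) and the structure are assumptions made for this example, not the actual nano::bootstrap_service API:

// Illustrative sketch only: a simplified cooldown table modelling the flow
// described above. Names and structure are assumptions, not the actual
// node implementation.
#include <chrono>
#include <string>
#include <unordered_map>

class cooldown_table
{
public:
	// Called after sending a request for an account that likely has more
	// blocks: start the cooldown so the same frontier is not requested
	// again before the current batch is processed.
	void timestamp_set (std::string const & account)
	{
		timestamps[account] = std::chrono::steady_clock::now ();
	}

	// Called once the last received block for the account has been
	// processed: clear the cooldown so the account becomes eligible again
	// immediately.
	void timestamp_reset (std::string const & account)
	{
		timestamps.erase (account);
	}

	// next_priority-style check: the account is eligible only if it is not
	// inside its cooldown window.
	bool eligible (std::string const & account, std::chrono::seconds cooldown) const
	{
		auto it = timestamps.find (account);
		return it == timestamps.end () || std::chrono::steady_clock::now () - it->second >= cooldown;
	}

private:
	std::unordered_map<std::string, std::chrono::steady_clock::time_point> timestamps;
};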

Contributor:

Okay, that makes sense. I got confused by the comment:

// Do not throttle accounts that are probably fully synced

So if I understand correctly, the goal is to rapidly deprioritise fully synced accounts by requesting them again without a throttle.

pwojcikdev (Contributor Author):

Yes, I updated the comment to make the intention clear.

@@ -204,8 +206,24 @@ bool nano::bootstrap_service::send (std::shared_ptr<nano::transport::channel> co

// TODO: There is no feedback mechanism if bandwidth limiter starts dropping our requests
Contributor:

Is this now handled by the added callback? Or is the TODO about something else?

pwojcikdev (Contributor Author):

This TODO is now out of date.
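
For illustration only, here is a hedged C++ sketch of the general idea of such a feedback callback; the names (send_request, send_result) and the limiter check are hypothetical and not the API introduced by this PR:

// Hypothetical sketch: report back when a request is dropped by the
// bandwidth limiter so the caller can retry or reprioritise instead of
// losing the request silently. All names here are illustrative only.
#include <functional>
#include <iostream>

enum class send_result
{
	sent,
	dropped_by_limiter
};

void send_request (bool limiter_allows, std::function<void (send_result)> const & callback)
{
	if (!limiter_allows)
	{
		// Without a callback there is no feedback at this point and the
		// request is silently lost; invoking the callback closes that gap.
		callback (send_result::dropped_by_limiter);
		return;
	}
	// ... serialize and transmit the request ...
	callback (send_result::sent);
}

int main ()
{
	send_request (false, [] (send_result result) {
		if (result == send_result::dropped_by_limiter)
		{
			std::cout << "request dropped by limiter, retry later\n";
		}
	});
}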

pwojcikdev merged commit ded11cf into nanocurrency:develop on Dec 10, 2024
28 checks passed
qwahzi added this to the V28 milestone on Dec 12, 2024