feat: cache smt every 1000 blocks #6732

Merged
merged 3 commits into tari-project:development from sw_cache_smt
Jan 9, 2025

Conversation

SWvheerden
Collaborator

Description

Caches the SMT every 1000 blocks by default, or at half the prune interval.
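A minimal sketch of the interval rule described above, assuming a helper along these lines; the constant and function names are illustrative, not the PR's actual identifiers:

```rust
/// Illustrative default only; the PR caches every 1000 blocks by default.
const DEFAULT_SMT_CACHE_INTERVAL: u64 = 1000;

/// Blocks between on-disk SMT snapshots: 1000 by default, or half the prune
/// interval when one is configured.
fn smt_cache_interval(prune_interval: Option<u64>) -> u64 {
    match prune_interval {
        Some(interval) => (interval / 2).max(1),
        None => DEFAULT_SMT_CACHE_INTERVAL,
    }
}

/// The cache is written whenever the tip height hits a multiple of the interval.
fn should_cache_smt(height: u64, prune_interval: Option<u64>) -> bool {
    height > 0 && height % smt_cache_interval(prune_interval) == 0
}

fn main() {
    assert!(should_cache_smt(2000, None));
    assert!(!should_cache_smt(1500, None));
    assert!(should_cache_smt(1500, Some(3000))); // pruned node: snapshot every 1500 blocks
}
```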

Motivation and Context

Loading the SMT can take almost 2 minutes on nextnet at the moment. This change brings the loading time down to 3 seconds.

How Has This Been Tested?

Manual tests on nextnet

@SWvheerden SWvheerden requested a review from a team as a code owner January 7, 2025 12:49
@SWvheerden SWvheerden changed the title feat: cache smt every x blocks feat: cache smt every 1000 blocks Jan 7, 2025

github-actions bot commented Jan 7, 2025

Test Results (CI)

3 files, 129 suites, 36m 33s ⏱️
1 349 tests: 1 349 ✅, 0 💤, 0 ❌
4 045 runs: 4 045 ✅, 0 💤, 0 ❌

Results for commit 2e64414.

♻️ This comment has been updated with latest results.


github-actions bot commented Jan 7, 2025

Test Results (Integration tests)

36 tests (+36), 11 suites (+11), 2 files (+2), 17m 58s ⏱️ (+17m 58s)
36 ✅ (+36), 0 💤 (±0), 0 ❌ (±0)

Results for commit 2e64414. ± Comparison against base commit 2e3f039.

♻️ This comment has been updated with latest results.

Resolved review threads (outdated): base_layer/core/src/chain_storage/blockchain_database.rs (×2)
@@ -1341,6 +1384,10 @@ impl LMDBDatabase {
header.height,
&BlockAccumulatedData::new(kernel_mmr.get_pruned_hash_set()?, total_kernel_sum),
)?;
allow_smt_change.store(false, Ordering::SeqCst);
Contributor


When does this get reset?

Collaborator Author


It does not. This is to ensure the SMT is only ever changed once per transaction.
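A hedged sketch of the one-shot guard discussed here (the function name and the use of `swap` are assumptions for illustration; the PR itself checks the flag with `load` and clears it with `store`):

```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};

// Sketch only: a flag created per write transaction guards the SMT so the
// in-memory tree is mutated at most once for that transaction.
fn apply_smt_change(allow_smt_change: &Arc<AtomicBool>, smt: &mut Vec<[u8; 32]>) {
    // `swap` returns the previous value while atomically clearing the flag,
    // so only the first call within the transaction touches the SMT.
    if allow_smt_change.swap(false, Ordering::SeqCst) {
        smt.push([0u8; 32]); // stand-in for the real SMT update
    }
}

fn main() {
    let allow = Arc::new(AtomicBool::new(true));
    let mut smt = Vec::new();
    apply_smt_change(&allow, &mut smt); // mutates the SMT
    apply_smt_change(&allow, &mut smt); // no-op: the flag was already cleared
    assert_eq!(smt.len(), 1);
}
```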

Comment on lines +1215 to +1217
allow_smt_change: Arc<AtomicBool>,
) -> Result<(), ChainStorageError> {
let can_we_change_smt = allow_smt_change.load(Ordering::SeqCst);
Contributor


There is a lot happening in this function beyond whether we can change the SMT or not, and I don't think it matters for most of the function, since we don't end up updating the SMT until the end.

Could we avoid doing this here and instead check at the caller, after this returns, whether we also need to call insert_smt?

Collaborator Author


This is a weird design detail, but I think it's the least bad option available.
The main issue is that the DB is the source of truth; we keep an in-memory cache of the SMT because loading it takes too long, and now we want a file copy of that cache because calculating it also takes too long.

So the DB needs to stay the source of truth and must always be correct and in sync.

All other patterns require multiple RwLocks, which can lead to an inconsistent DB state, and/or require developers to manually save the SMT.
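A rough sketch of the layering described in this reply, under stated assumptions: the types and method names below are invented for illustration and are not the Tari API; the point is only that the DB commit, the SMT cache update, and the periodic file snapshot happen on one write path, so no second lock is needed and the layers cannot drift apart.

```rust
use std::sync::{Arc, RwLock};

// Toy stand-ins for the three layers described above.
struct Db;           // source of truth (LMDB in the real code)
struct Smt(Vec<u8>); // in-memory cache, expensive to rebuild from the DB
struct SmtFile;      // on-disk snapshot of the cache, cheap to load at startup

impl Db {
    fn commit_block(&self, _height: u64) { /* write the block data */ }
}

impl SmtFile {
    fn save(&self, _smt: &Smt) { /* serialize the cache to disk */ }
}

fn write_block(db: &Db, smt: &Arc<RwLock<Smt>>, file: &SmtFile, height: u64, interval: u64) {
    // One write lock covers the whole update, keeping the DB and cache in sync.
    let mut smt = smt.write().expect("smt lock poisoned");
    db.commit_block(height);
    smt.0.push(height as u8); // stand-in for applying the block's output changes
    if height % interval == 0 {
        file.save(&smt); // snapshot the cache every `interval` blocks
    }
}

fn main() {
    let db = Db;
    let smt = Arc::new(RwLock::new(Smt(Vec::new())));
    let file = SmtFile;
    for height in 1..=2000u64 {
        write_block(&db, &smt, &file, height, 1000);
    }
}
```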

brianp
brianp previously approved these changes Jan 9, 2025
@SWvheerden SWvheerden merged commit d3f3502 into tari-project:development Jan 9, 2025
17 checks passed
@SWvheerden SWvheerden deleted the sw_cache_smt branch January 9, 2025 11:07