Enable parallel prefetching of nodes within the same storage trie #425
Conversation
I haven't had a chance to look at the code yet, but it is worth noting that the
#370 added a worker group with two implementations, one of which doesn't create any goroutines. And I'd forgotten again, but actually we want to avoid concurrency when running in native mode as well, because the PreimageOracle communication isn't thread-safe, so #373 switched to using an explicit flag for single-threaded mode instead of enabling it automatically when there is one CPU.
@ajsutton while this PR adds more concurrency capabilities to the prefetcher, it already strictly depends on goroutines (https://github.com/ethereum-optimism/op-geth/blob/optimism/core/state/trie_prefetcher.go#L270). I don't expect any of the prefetcher code to actually be invoked by
Awesome, thanks. I just wanted to jump in early and flag the risk, since I won't have time to look at this for a while and likely (hopefully) someone else will get to it first.
Results from this benchmarking test:
Description
This extends the statedb prefetcher to support parallel fetches within the context of a single trie. The purpose of the statedb prefetcher is only to prewarm database caches, ensuring that all trie nodes are quickly accessible when computing the MPT state root.
Without this change, the statedb's prefetcher performs concurrent fetching jobs across unique tries, but performs all fetches within each trie sequentially. In certain cases, such as blocks that contain a large number of storage updates in a single large trie, this sequential prefetching behavior can result in significant performance degradation. In extreme cases, blocks may contain updates to thousands of storage slots in a single trie, all of which are fetched sequentially.
This implementation utilizes a fixed worker pool per prefetcher, and allows the per-trie subfetcher to clone this trie up to N times for a given max concurrency limit of N goroutines. This approach was chosen because the trie itself is not safe for concurrent use, and the cost of copying the trie is negligible when compared to the round-trip latency of fetching the associated trie nodes from the local database.
This selects a somewhat arbitrary 16 goroutines as the default concurrency limit for the prefetcher, based on locally-run benchmarking results on an M1 Pro with 16GB of memory, using a matrix of `trieSize` (1k, 100k, 10M), `keyCount` (10, 100, 1k, 10k), and `maxConcurrency` (1, 4, 16, 64) values. We see a minor increase in overhead from concurrent access when accessing tries containing fewer than 1000 nodes, but around a 10x reduction in latency when accessing at least 10 keys from very large tries (10M keys). This is likely a worthwhile tradeoff for high-throughput EVM chains, as the improvement directly targets a subset of the worst-performing blocks.
Tests
TODO: include links to benchmarking code.
Additional context
See ethereum/go-ethereum#28266 for additional context and prior discussions.