feat(torii): limit number of blocks processed in one go #2505
Conversation
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

```
@@           Coverage Diff           @@
##             main    #2505   +/-   ##
=======================================
  Coverage   67.73%   67.73%
=======================================
  Files         388      388
  Lines       50421    50427       +6
=======================================
+ Hits        34153    34158       +5
- Misses      16268    16269       +1
```

☔ View full report in Codecov by Sentry.
Walkthrough

Ohayo, sensei! This pull request introduces a new command-line argument, `blocks_chunk_size`, to limit the number of blocks processed in one go.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (3)
bin/torii/src/main.rs (2)

117-119: Ohayo, sensei! New argument looks good, but let's add some spice to the docs!

The new `blocks_chunk_size` argument is a great addition! It allows users to fine-tune the indexing process. However, to make it even more user-friendly, consider adding a brief explanation of its impact on performance and memory usage in the argument's help text.

Here's a suggestion to enhance the documentation:

```diff
 /// Number of blocks to process before commiting to DB
-#[arg(long, default_value = "10240")]
+#[arg(long, default_value = "10240", help = "Number of blocks to process before committing to DB. Higher values may improve performance but increase memory usage.")]
 blocks_chunk_size: u64,
```
246-246: Ohayo again, sensei! The engine config update looks solid!

The `blocks_chunk_size` is correctly added to the `EngineConfig` initialization. This ensures that the engine respects the user's preference (or the default value) for block processing.

To improve code readability, consider aligning the new field with the others:

```diff
 EngineConfig {
     max_concurrent_tasks: args.max_concurrent_tasks,
     start_block: 0,
-    blocks_chunk_size: args.blocks_chunk_size,
+    blocks_chunk_size: args.blocks_chunk_size,
     events_chunk_size: args.events_chunk_size,
     index_pending: args.index_pending,
     polling_interval: Duration::from_millis(args.polling_interval),
     flags,
 },
```

crates/torii/core/src/engine.rs (1)
291-291: Ohayo, sensei! Let's tackle the TODO for parallelizing data fetching.

Now that blocks are processed in chunks, implementing parallel data fetching could significantly improve performance.
Would you like assistance in implementing parallel fetching? I can help with code suggestions or open a GitHub issue to track this enhancement.
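For reference, the sequential chunked loop that the TODO would parallelize can be sketched roughly like this. This is a hypothetical, self-contained sketch; `process_in_chunks` and its shape are illustrative, not torii's actual API.

```rust
// Hypothetical sketch of sequential chunked block processing.
// Names here are illustrative, not torii's actual API.
fn process_in_chunks(head: u64, latest: u64, chunk_size: u64) -> u64 {
    let mut cursor = head;
    while cursor < latest {
        // Cap the chunk at the latest known block.
        let to = (cursor + chunk_size).min(latest);
        for _block in (cursor + 1)..=to {
            // Fetching each chunk's blocks concurrently (instead of in
            // this sequential loop) is what the TODO refers to.
        }
        // Commit to the DB once per chunk, then advance the cursor.
        cursor = to;
    }
    cursor
}

fn main() {
    // head = 0, latest = 25, chunk = 10 → chunks of 10, 10, and 5 blocks.
    assert_eq!(process_in_chunks(0, 25, 10), 25);
    // Already caught up: the loop body never runs.
    assert_eq!(process_in_chunks(25, 25, 10), 25);
    println!("ok");
}
```

Because each chunk commits independently, the blocks *within* a chunk could be fetched concurrently without changing the per-chunk commit semantics.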
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (2)
- bin/torii/src/main.rs (2 hunks)
- crates/torii/core/src/engine.rs (3 hunks)
🧰 Additional context used
🔇 Additional comments (4)
bin/torii/src/main.rs (1)

246-246: Ohayo once more, sensei! Let's make sure our engine is tuned up!

The `blocks_chunk_size` configuration looks good, but we should verify that the `Engine` implementation correctly utilizes this new parameter. This ensures that the chunked processing works as intended, maintaining data consistency and optimizing performance.

Could you please run the following script to check the `Engine` implementation?

This will help us confirm that the `Engine` is properly implementing the chunked processing based on the new `blocks_chunk_size` parameter.

✅ Verification successful
Ohayo once more, sensei! It looks like `engine.rs` wasn't found at the expected location. Let's broaden our search to locate the correct implementation.

Ohayo once more, sensei! The `blocks_chunk_size` parameter is correctly utilized within the `Engine` implementation. Everything looks good!

🏁 Scripts executed
The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Description: Verify that the Engine implementation correctly uses the blocks_chunk_size parameter

# Search for blocks_chunk_size usage in the Engine implementation
rg --type rust 'blocks_chunk_size' crates/torii-core/src/engine.rs

# Look for any methods that might be using this parameter for chunked processing
rg --type rust 'fn\s+process_blocks|fn\s+index_blocks' crates/torii-core/src/engine.rs
```

Length of output: 299

Script:

```bash
#!/bin/bash
# Description: Locate the Engine implementation and verify usage of blocks_chunk_size

# Search for blocks_chunk_size usage across the entire codebase
rg 'blocks_chunk_size'

# Search for engine-related files in case the path has changed
rg --files | rg 'engine\.rs'
```

Length of output: 469
crates/torii/core/src/engine.rs (3)
134-134: Ohayo, sensei! Great addition of `blocks_chunk_size` to `EngineConfig`.

Introducing `blocks_chunk_size` provides better control over block processing, enhancing the engine's flexibility and scalability.
146-146: Setting default `blocks_chunk_size` to 10240.

The default value appears reasonable for standard workloads, ensuring efficient processing without overwhelming system resources.
303-303: ⚠️ Potential issue: Potential off-by-one error when incrementing `from`.

Incrementing `from` when it's not zero might skip processing the block at `from`. Please verify if this behavior is intentional.

Run the following script to inspect the block range calculations:
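The off-by-one concern hinges on whether the `head` cursor records the last block already indexed or the next block to index. A small sketch of the two interpretations (illustrative only, not torii's code):

```rust
// If `head` is the last block already processed, the next range must
// start one past it; if `head` is the next block to process, the range
// must include it. Incrementing `from` is only correct in the first case.
fn next_range_head_processed(head: u64, chunk: u64) -> (u64, u64) {
    // `head` was already indexed: start one past it.
    (head + 1, head + chunk)
}

fn next_range_head_pending(head: u64, chunk: u64) -> (u64, u64) {
    // `head` has not been indexed yet: the range must include it.
    (head, head + chunk - 1)
}

fn main() {
    assert_eq!(next_range_head_processed(100, 10), (101, 110));
    assert_eq!(next_range_head_pending(100, 10), (100, 109));
    println!("ok");
}
```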
```rust
let from = cursors.head.unwrap_or(0);
let total_remaining_blocks = latest_block_number - from;
let blocks_to_process = total_remaining_blocks.min(self.config.blocks_chunk_size);
```
Possible underflow risk when calculating `total_remaining_blocks`.

If `from` is greater than `latest_block_number`, subtracting may cause an underflow panic. Consider adding a check to handle this scenario gracefully.

Apply this change to prevent potential underflow:
```diff
+if from > latest_block_number {
+    return Ok(FetchDataResult::None);
+}
 let total_remaining_blocks = latest_block_number - from;
```

Note that the guard must run before the subtraction, otherwise the underflow it is meant to prevent has already happened.
Committable suggestion was skipped due to low confidence.
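An alternative to the explicit guard is `u64::saturating_sub`, which clamps the result at zero instead of underflowing. A minimal sketch, not the committed fix:

```rust
// Sketch only: saturating_sub yields 0 when `from` exceeds the latest
// block, instead of panicking (debug) or wrapping (release) on underflow.
fn remaining_blocks(latest_block_number: u64, from: u64) -> u64 {
    latest_block_number.saturating_sub(from)
}

fn main() {
    assert_eq!(remaining_blocks(100, 40), 60);
    // from > latest: clamps to 0 instead of underflowing.
    assert_eq!(remaining_blocks(40, 100), 0);
    println!("ok");
}
```

The explicit guard is still preferable if the caller should short-circuit with `FetchDataResult::None` rather than proceed with a zero-length range.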
Summary by CodeRabbit

- New command-line argument `blocks_chunk_size` for configuring the number of blocks to process before committing to the database, enhancing user control over data processing.