refactor(torii-core): cleanups #2702
Conversation
Walkthrough
Ohayo, sensei! This pull request introduces modifications primarily to the handling of indexing flags within the torii core crates.
Changes
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
crates/torii/core/src/engine.rs (2)
Line range hint 326-334: Consider extracting the flag check into a method for better maintainability.

The flag check could be moved into a descriptive method like `should_process_pending_blocks()` to improve code readability and maintainability:

```diff
+impl<P: Provider + Send + Sync + std::fmt::Debug + 'static> Engine<P> {
+    fn should_process_pending_blocks(&self) -> bool {
+        self.config.flags.contains(IndexingFlags::PENDING_BLOCKS)
+    }
+}

 pub async fn fetch_data(&mut self, cursors: &Cursors) -> Result<FetchDataResult> {
     // ...
-    } else if self.config.flags.contains(IndexingFlags::PENDING_BLOCKS) {
+    } else if self.should_process_pending_blocks() {
         let data =
             self.fetch_pending(latest_block.clone(), cursors.last_pending_block_tx).await?;
```
Line range hint 326-334: Enhance observability with structured logging.

The engine handles complex async operations and error cases. Consider adding structured logging with additional context to help with debugging and monitoring:

- In `fetch_data`: Add metrics for the number of blocks processed
- In `process_tasks`: Add timing information for task processing

```diff
 } else if self.config.flags.contains(IndexingFlags::PENDING_BLOCKS) {
     let data =
         self.fetch_pending(latest_block.clone(), cursors.last_pending_block_tx).await?;
-    debug!(target: LOG_TARGET, duration = ?instant.elapsed(), latest_block_number = %latest_block.block_number, "Fetched pending data.");
+    debug!(
+        target: LOG_TARGET,
+        duration = ?instant.elapsed(),
+        latest_block_number = %latest_block.block_number,
+        pending_blocks_enabled = true,
+        "Fetched pending data"
+    );

 async fn process_tasks(&mut self) -> Result<()> {
+    let start = Instant::now();
+    let task_count = self.tasks.len();
     let semaphore = Arc::new(Semaphore::new(self.config.max_concurrent_tasks));
+    debug!(
+        target: LOG_TARGET,
+        task_count = task_count,
+        max_concurrent = self.config.max_concurrent_tasks,
+        "Starting task processing"
+    );
```

Also applies to: 584-589
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (4)
- bin/torii/src/main.rs (1 hunks)
- crates/torii/core/src/engine.rs (2 hunks)
- crates/torii/core/src/sql/mod.rs (1 hunks)
- crates/torii/core/src/sql/query_queue.rs (0 hunks)
💤 Files with no reviewable changes (1)
- crates/torii/core/src/sql/query_queue.rs
🔇 Additional comments (3)
bin/torii/src/main.rs (1)
140-142
: Ohayo! Clean implementation of the PENDING_BLOCKS flag, sensei! 🎋
The addition of the PENDING_BLOCKS flag aligns well with the existing flag handling pattern and provides a cleaner approach compared to the previous boolean field.
Let's verify the flag's usage across the codebase:
✅ Verification successful
Ohayo! The PENDING_BLOCKS flag implementation looks solid, sensei! 🎋
The verification shows:
- Flag is properly defined in engine.rs
- Correctly used in the engine's logic for pending block handling
- Clean integration in the main binary
Note: There are two pending TODOs in the codebase related to pending blocks:
- In engine.rs: Update unreachable code once Katana is updated
- In block.rs: Integrate pending block with provider
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify PENDING_BLOCKS flag usage and removed index_pending references
# Check for PENDING_BLOCKS flag usage
rg "PENDING_BLOCKS" -A 3
# Ensure no lingering index_pending references
rg "index_pending"
# Check for any TODO/FIXME comments related to pending blocks
rg -i "todo.*pending|fixme.*pending"
Length of output: 1207
crates/torii/core/src/engine.rs (1)
139-139
: Ohayo, sensei! Nice addition of the PENDING_BLOCKS flag! 🎉
The new flag aligns well with the existing bitflags pattern and provides a more consistent way to control indexing behavior.
crates/torii/core/src/sql/mod.rs (1)
832-836
: Ohayo sensei! Clean refactoring of the enum options check.
The restructuring of the condition improves readability while maintaining the same logic. The code now more clearly expresses its intent to check if all enum options are empty tuples.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (4)
crates/torii/core/src/processors/event_message.rs (1)
73-73: Consider addressing the TODO comment about historical checks

The TODO comment mentions checking historical events and maintaining an internal counter. This might be important for the overall functionality of the event processing system.
Would you like me to help implement this feature or create a GitHub issue to track this task, sensei?
crates/torii/core/src/processors/mod.rs (1)
37-45: Ohayo! Clean and efficient implementation, sensei! 🎋

The new methods provide a clear and idiomatic way to handle event processing decisions:

- `is_historical`: simple and efficient HashSet lookup
- `should_index`: smart implementation where an empty namespaces set acts as a wildcard to index everything

The implementation follows Rust best practices and provides good encapsulation of the decision logic.
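The two methods can be illustrated with a standalone sketch. This is a hypothetical mirror of the config described above, not torii's actual `EventProcessorConfig` (which lives in crates/torii/core/src/processors/mod.rs and uses Dojo-specific types); plain strings stand in for the real tags and namespaces.

```rust
use std::collections::HashSet;

// Hypothetical mirror of the processor config discussed in this review.
struct EventProcessorConfig {
    namespaces: HashSet<String>,
    historical_events: HashSet<String>,
}

impl EventProcessorConfig {
    /// Returns true if the namespace should be indexed.
    /// An empty namespaces set means all namespaces are indexed.
    fn should_index(&self, namespace: &str) -> bool {
        self.namespaces.is_empty() || self.namespaces.contains(namespace)
    }

    /// Returns true if events with this tag are tracked historically.
    fn is_historical(&self, tag: &str) -> bool {
        self.historical_events.contains(tag)
    }
}

fn main() {
    let open = EventProcessorConfig {
        namespaces: HashSet::new(),
        historical_events: HashSet::new(),
    };
    // Empty set acts as a wildcard: everything is indexed.
    assert!(open.should_index("dojo_examples"));

    let restricted = EventProcessorConfig {
        namespaces: ["dojo_examples".to_string()].into_iter().collect(),
        historical_events: ["ns-Moved".to_string()].into_iter().collect(),
    };
    assert!(restricted.should_index("dojo_examples"));
    assert!(!restricted.should_index("other_ns"));
    assert!(restricted.is_historical("ns-Moved"));
    println!("ok");
}
```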
Consider documenting the "empty means index all" behavior in a doc comment, as it's a significant implementation detail that other developers should be aware of:

```diff
+    /// Returns true if the namespace should be indexed.
+    /// An empty namespaces set means all namespaces should be indexed.
     pub fn should_index(&self, namespace: &str) -> bool {
         self.namespaces.is_empty() || self.namespaces.contains(namespace)
     }
```

crates/torii/core/src/processors/register_model.rs (1)
63-65: Ohayo! Clean encapsulation of indexing logic, sensei! 🎋

The delegation of namespace indexing logic to `config.should_index()` improves code organization by:
- Centralizing indexing decisions in one place
- Reducing code duplication across processors
- Making the code more maintainable
This refactoring aligns well with the Single Responsibility Principle, as the processor no longer needs to know the details of how indexing decisions are made.
crates/torii/core/src/processors/register_event.rs (1)
63-63: Ohayo! Clean refactoring of namespace indexing logic, sensei! ✨

The encapsulation of namespace indexing logic into `config.should_index(&namespace)` improves code maintainability and follows the DRY principle. This change aligns well with similar refactoring in other processors.

Consider adding a brief comment explaining the indexing logic, like:

```diff
+    // Skip events from namespaces that aren't configured for indexing
     if !config.should_index(&namespace) {
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (4)
- crates/torii/core/src/processors/event_message.rs (1 hunks)
- crates/torii/core/src/processors/mod.rs (1 hunks)
- crates/torii/core/src/processors/register_event.rs (1 hunks)
- crates/torii/core/src/processors/register_model.rs (1 hunks)
🔇 Additional comments (2)
crates/torii/core/src/processors/event_message.rs (1)
76-77
: Ohayo! Clean refactoring of historical flag determination, sensei!
Moving the historical check logic to config.is_historical()
improves code organization and maintainability while preserving the existing functionality.
crates/torii/core/src/processors/mod.rs (1)
37-45
: Verify the integration with processor implementations
Let's ensure these new methods are being used consistently across all processors.
✅ Verification successful
Integration verified: methods are used correctly across processors
The verification shows proper integration:
- `is_historical` is correctly used in EventMessageProcessor for historical event tracking
- `should_index` is properly implemented in both RegisterEventProcessor and RegisterModelProcessor for namespace filtering
- No direct field access to `historical_events` outside the implementation
- Direct `namespaces` access is only for initialization and implementation
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how these new methods are being used across processors
# Check for is_historical usage
echo "Checking is_historical usage:"
rg "is_historical" --type rust
# Check for should_index usage
echo "Checking should_index usage:"
rg "should_index" --type rust
# Look for any remaining direct access to these fields
echo "Checking for direct field access:"
rg "historical_events\." --type rust
rg "namespaces\." --type rust
Length of output: 3527
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
crates/torii/core/src/engine.rs (3)
Line range hint 316-324: Consider using a more descriptive variable name.

The condition is correct, but consider renaming `data` to `pending_data` or `pending_block_data` to better reflect its content and improve code readability:

```diff
-        let data =
+        let pending_block_data =
             self.fetch_pending(latest_block.clone(), cursors.last_pending_block_tx).await?;
-        if let Some(data) = data {
-            FetchDataResult::Pending(data)
+        if let Some(pending_block_data) = pending_block_data {
+            FetchDataResult::Pending(pending_block_data)
```
Line range hint 590-607: Enhance error context in task processing.

When processing parallelized events, consider adding more context to the error message to help with debugging. Include the event ID and contract type in the error context:

```diff
 if let Err(e) = processor
     .process(&world, &mut local_db, block_number, block_timestamp, &event_id, &event, &event_processor_config)
     .await
 {
-    error!(target: LOG_TARGET, event_name = processor.event_key(), error = %e, task_id = %task_id, "Processing parallelized event.");
+    error!(
+        target: LOG_TARGET,
+        event_name = processor.event_key(),
+        error = %e,
+        task_id = %task_id,
+        event_id = %event_id,
+        contract_type = ?contract_type,
+        "Failed to process parallelized event"
+    );
 }
```
Line range hint 577-614: Consider implementing a retry mechanism for failed tasks.

Ohayo, sensei! The concurrent task processing is well-implemented, but consider adding a retry mechanism for failed tasks to improve resilience. This could help handle temporary failures in the processing pipeline.
You could implement this by:
- Collecting failed tasks in a separate queue
- Implementing an exponential backoff retry strategy
- Limiting the number of retries per task
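The retry-queue idea above can be sketched in a few lines. This is a minimal illustration, not torii's actual code: failed tasks are requeued with a per-task attempt counter and dropped once the (hypothetical) `max_retries` limit is exhausted; a backoff delay would be computed from the attempt count before re-processing.

```rust
use std::collections::VecDeque;

// Minimal sketch of a retry queue for failed tasks; names are illustrative.
struct RetryQueue<T> {
    queue: VecDeque<(T, u32)>, // (task, attempts so far)
    max_retries: u32,
}

impl<T> RetryQueue<T> {
    fn new(max_retries: u32) -> Self {
        Self { queue: VecDeque::new(), max_retries }
    }

    /// Requeue a failed task; returns false once its retries are exhausted.
    fn push_failed(&mut self, task: T, attempts: u32) -> bool {
        if attempts < self.max_retries {
            self.queue.push_back((task, attempts + 1));
            true
        } else {
            false // give up on this task
        }
    }

    fn pop(&mut self) -> Option<(T, u32)> {
        self.queue.pop_front()
    }
}

fn main() {
    let mut q = RetryQueue::new(2);
    assert!(q.push_failed("task-1", 0)); // first failure: requeued
    let (task, attempts) = q.pop().unwrap();
    assert_eq!(attempts, 1);
    assert!(q.push_failed(task, attempts)); // second failure: requeued again
    assert!(!q.push_failed("task-1", 2)); // retries exhausted: dropped
    println!("ok");
}
```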
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (2)
- bin/torii/src/main.rs (1 hunks)
- crates/torii/core/src/engine.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- bin/torii/src/main.rs
🔇 Additional comments (1)
crates/torii/core/src/engine.rs (1)
139-139
: LGTM! Clean flag addition.
The new PENDING_BLOCKS
flag is well-integrated into the existing bitflags system with a non-conflicting value.
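The "non-conflicting value" point can be shown with a hand-rolled sketch of the bitflags pattern. This is illustrative only: the real `IndexingFlags` in crates/torii/core/src/engine.rs is defined with the `bitflags` crate, and the bit values and other variants here are assumptions.

```rust
// Hand-rolled sketch of the bitflags pattern, without the `bitflags` crate.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct IndexingFlags(u32);

impl IndexingFlags {
    const TRANSACTIONS: IndexingFlags = IndexingFlags(0b01); // assumed value
    const PENDING_BLOCKS: IndexingFlags = IndexingFlags(0b10); // assumed value

    fn contains(self, other: IndexingFlags) -> bool {
        self.0 & other.0 == other.0
    }

    fn insert(&mut self, other: IndexingFlags) {
        self.0 |= other.0;
    }
}

fn main() {
    // Non-conflicting bit values let flags combine without clobbering each other.
    let mut flags = IndexingFlags(0);
    flags.insert(IndexingFlags::PENDING_BLOCKS);
    assert!(flags.contains(IndexingFlags::PENDING_BLOCKS));
    assert!(!flags.contains(IndexingFlags::TRANSACTIONS));
    flags.insert(IndexingFlags::TRANSACTIONS);
    assert!(flags.contains(IndexingFlags::TRANSACTIONS));
    println!("ok");
}
```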
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (4)
crates/torii/core/src/utils.rs (1)
44-47: Consider enhancing retry mechanism, sensei!

While the error handling works, consider these improvements:

- Add logging for each retry attempt
- Implement exponential backoff instead of fixed 3-second delays

Here's a suggested implementation:

```diff
 while retries > 0 {
     let response = client.cat(cid).map_ok(|chunk| chunk.to_vec()).try_concat().await;
     match response {
         Ok(stream) => return Ok(Bytes::from(stream)),
         Err(e) => {
             retries -= 1;
+            let backoff = Duration::from_secs(2_u64.pow((IPFS_CLIENT_MAX_RETRY - retries) as u32));
             if retries > 0 {
                 info!(
                     error = %e,
+                    remaining_retries = retries,
+                    next_attempt_in_secs = ?backoff.as_secs(),
                     "IPFS fetch retry"
                 );
-                tokio::time::sleep(Duration::from_secs(3)).await;
+                tokio::time::sleep(backoff).await;
             }
         }
     }
 }
```

crates/torii/core/src/sql/erc.rs (1)
Line range hint 65-69: Consider optimizing cache flush threshold, sensei!

The current cache size threshold of 100,000 entries before flushing might:
- Consume significant memory
- Lead to large, blocking database operations
- Cause increased latency for some operations
Consider:
- Making the threshold configurable
- Using a time-based flush strategy alongside size-based
- Implementing incremental flushing to reduce operation size
Would you like me to propose a more detailed optimization strategy?
Also applies to: 132-136
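The combined size- and time-based strategy suggested above can be sketched as a small policy struct. The struct, field names, and threshold values here are illustrative assumptions, not torii's actual code.

```rust
use std::time::{Duration, Instant};

// Sketch of a flush policy combining a size limit with a time limit.
struct FlushPolicy {
    max_entries: usize,
    max_age: Duration,
    last_flush: Instant,
}

impl FlushPolicy {
    /// Flush on whichever limit is hit first: cache size or elapsed time.
    fn should_flush(&self, cache_entries: usize) -> bool {
        cache_entries >= self.max_entries || self.last_flush.elapsed() >= self.max_age
    }
}

fn main() {
    let policy = FlushPolicy {
        max_entries: 10_000, // configurable instead of a hardcoded 100,000
        max_age: Duration::from_secs(30),
        last_flush: Instant::now(),
    };
    assert!(!policy.should_flush(5_000)); // under both limits
    assert!(policy.should_flush(10_000)); // size limit reached
    println!("ok");
}
```

A time-based trigger alongside the size trigger bounds how stale the cache can get during quiet periods, while the lower size threshold keeps individual flushes small.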
crates/torii/core/src/executor/erc.rs (2)
Line range hint 52-86: Heads up, sensei! Potential issue in ERC20 token_id assignment! (。•́︿•̀。)

In the ERC20 branch, there seems to be a copy-paste error where `token_id` is assigned the same value as `contract_address`:

```rust
let contract_address = id[1];
let token_id = id[1]; // This looks incorrect
```

This might cause issues with token identification. Consider if this should be `id[2]`, since you're asserting `id.len() == 3`.
Line range hint 259-263: Technical debt alert, sensei! (╯°□°)╯︵ ┻━┻

There's a HACK comment indicating a workaround for handling control characters in Loot Survivor NFT metadata. This should be tracked and properly addressed.
Would you like me to create a GitHub issue to track this technical debt and propose a more robust solution?
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (14)
- crates/torii/core/src/constants.rs (1 hunks)
- crates/torii/core/src/engine.rs (3 hunks)
- crates/torii/core/src/executor/erc.rs (3 hunks)
- crates/torii/core/src/processors/metadata_update.rs (3 hunks)
- crates/torii/core/src/sql/erc.rs (3 hunks)
- crates/torii/core/src/sql/mod.rs (2 hunks)
- crates/torii/core/src/sql/utils.rs (2 hunks)
- crates/torii/core/src/utils.rs (3 hunks)
- crates/torii/graphql/src/object/event.rs (2 hunks)
- crates/torii/graphql/src/query/data.rs (1 hunks)
- crates/torii/graphql/src/query/mod.rs (2 hunks)
- crates/torii/grpc/src/server/subscriptions/entity.rs (2 hunks)
- crates/torii/grpc/src/server/subscriptions/event.rs (2 hunks)
- crates/torii/grpc/src/server/subscriptions/event_message.rs (2 hunks)
✅ Files skipped from review due to trivial changes (2)
- crates/torii/core/src/constants.rs
- crates/torii/graphql/src/query/data.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- crates/torii/core/src/engine.rs
- crates/torii/core/src/sql/mod.rs
🔇 Additional comments (19)
crates/torii/core/src/utils.rs (1)
10-12
: Ohayo! Nice work on centralizing the constants, sensei!
Moving the IPFS configuration constants to a dedicated module improves maintainability and follows the DRY principle.
crates/torii/core/src/processors/metadata_update.rs (2)
15-15
: LGTM! Clean import addition.
Ohayo! The import of IPFS_CLIENT_MAX_RETRY
is properly placed and aligns with the codebase's organization.
110-110
: LGTM! Consistent usage of the new retry constant.
Ohayo sensei! The replacement of the retry constant is consistent across both IPFS fetch operations. The change maintains the existing retry logic while improving constant naming specificity.
Also applies to: 121-121
crates/torii/graphql/src/object/event.rs (2)
105-105
: Clean implementation of the delimiter change, sensei!
The update to use SQL_FELT_DELIMITER maintains the existing functionality while aligning with the new constant usage.
6-6
: Ohayo! The constant relocation looks good, sensei!
The move to use SQL_FELT_DELIMITER from the constants module improves code organization.
Let's verify the consistency of this change across the codebase:
✅ Verification successful
Ohayo! The SQL_FELT_DELIMITER migration is complete and consistent, sensei!
The codebase scan shows that:
SQL_FELT_DELIMITER
is properly defined incrates/torii/core/src/constants.rs
- All usages of the delimiter across the codebase consistently use
SQL_FELT_DELIMITER
- No legacy
FELT_DELIMITER
references remain
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that FELT_DELIMITER is completely replaced with SQL_FELT_DELIMITER
# Check for any remaining FELT_DELIMITER usage
rg "FELT_DELIMITER" --type rust
# Check for consistent SQL_FELT_DELIMITER usage
rg "SQL_FELT_DELIMITER" --type rust
Length of output: 5549
crates/torii/grpc/src/server/subscriptions/event.rs (2)
16-16
: Ohayo! Import change looks good, sensei!
The replacement of FELT_DELIMITER
with SQL_FELT_DELIMITER
is consistent with the broader changes across the codebase.
98-99
: Verify delimiter consistency across the event processing pipeline, sensei!
While the changes look correct, we should ensure the new SQL_FELT_DELIMITER
is compatible with:
- How events are generated upstream
- How events are consumed downstream
- Any existing stored events in the database
Let's check for consistency across the codebase:
Also applies to: 105-106
✅ Verification successful
The SQL_FELT_DELIMITER is consistently used across the event processing pipeline, sensei!
The verification shows that:
- The delimiter is defined as a constant (`/`) in torii_core/src/constants.rs
- It's consistently used across all event processing components:
- Event keys and data processing in gRPC subscriptions
- GraphQL event handling
- SQL utilities for felt conversions
- Entity and event message processing
- ERC token balance tracking
The changes in lines 98-99 and 105-106 align perfectly with the codebase-wide usage pattern, maintaining consistency in both upstream event generation and downstream consumption.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of SQL_FELT_DELIMITER across the event processing pipeline
# Check for any remaining FELT_DELIMITER usage
echo "Checking for any remaining FELT_DELIMITER usage..."
rg "FELT_DELIMITER" --type rust
# Check for SQL_FELT_DELIMITER usage in related files
echo "Checking SQL_FELT_DELIMITER usage in event processing..."
rg "SQL_FELT_DELIMITER" --type rust -A 3 -B 3
# Check for potential delimiter usage in SQL queries
echo "Checking delimiter usage in SQL queries..."
fd -e sql . | xargs rg "DELIMITER|split|join"
Length of output: 15664
crates/torii/grpc/src/server/subscriptions/event_message.rs (2)
16-16
: Ohayo sensei! Import change looks good!
The switch to SQL_FELT_DELIMITER
aligns well with the codebase-wide standardization effort.
131-132
: Verify delimiter consistency across event processing chain
The change to SQL_FELT_DELIMITER
looks correct, but let's ensure this matches the delimiter used when creating these event keys.
✅ Verification successful
All event key processing consistently uses SQL_FELT_DELIMITER
The verification shows that all key processing across the codebase consistently uses SQL_FELT_DELIMITER
which is defined as "/" in core/src/constants.rs
. The delimiter is used uniformly for:
- Key splitting in GraphQL (
graphql/src/object/event.rs
) - Key joining in SQL utils (
core/src/sql/utils.rs
) - Key processing in gRPC subscriptions (
grpc/src/server/subscriptions/*
)
No inconsistencies or old delimiters were found.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent delimiter usage across event processing chain
# Look for other occurrences of key generation/processing
rg -A 3 "keys.*split|keys.*join" --type rust
# Check if there are any remaining uses of the old delimiter
rg "FELT_DELIMITER" --type rust
Length of output: 5929
crates/torii/grpc/src/server/subscriptions/entity.rs (2)
16-16
: Ohayo! Import change looks good, sensei!
The switch to SQL_FELT_DELIMITER
from the constants module aligns well with the codebase standardization effort.
119-120
: Verify delimiter consistency across key processing, sensei!
While the changes look good, let's ensure the new SQL_FELT_DELIMITER
matches the delimiter used in key generation and other processing points.
✅ Verification successful
Delimiter usage is consistent across key processing, sensei!
The verification shows that SQL_FELT_DELIMITER
is consistently defined as "/" in crates/torii/core/src/constants.rs
and used uniformly across:
- Key processing in subscriptions:
crates/torii/grpc/src/server/subscriptions/entity.rs
crates/torii/grpc/src/server/subscriptions/event.rs
crates/torii/grpc/src/server/subscriptions/event_message.rs
- Key generation and storage:
crates/torii/core/src/sql/utils.rs
crates/torii/core/src/sql/erc.rs
- Key querying:
crates/torii/graphql/src/object/event.rs
crates/torii/graphql/src/query/mod.rs
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent delimiter usage across the codebase
# Search for key processing and delimiter usage
echo "Checking key processing patterns..."
rg -A 2 "split|join.*SQL_FELT_DELIMITER" --type rust
echo "Checking key generation patterns..."
rg -A 2 "to_string.*SQL_FELT_DELIMITER" --type rust
# Check if old delimiter is still used somewhere
echo "Checking for old delimiter usage..."
rg "FELT_DELIMITER" --type rust
Length of output: 34077
crates/torii/graphql/src/query/mod.rs (2)
11-11
: Ohayo! Nice move centralizing the constants, sensei! 🎯
Moving SQL_FELT_DELIMITER
to the constants module improves code organization and maintainability.
188-188
: The constant usage looks perfect, sensei! Let's verify consistency across the codebase.
The felt array splitting logic is maintained while using the new centralized constant.
✅ Verification successful
Perfect consistency across the codebase, sensei! The constant migration is complete.
The verification shows that:
- All references now use
SQL_FELT_DELIMITER
fromtorii_core::constants
- No lingering references to the old
FELT_DELIMITER
constant - The constant is consistently used for felt array operations across all relevant modules
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify consistent usage of SQL_FELT_DELIMITER across the codebase
# and ensure no lingering references to the old FELT_DELIMITER
echo "Checking for any remaining references to old FELT_DELIMITER..."
rg "FELT_DELIMITER" --type rust
echo "Verifying consistent usage of new SQL_FELT_DELIMITER..."
rg "SQL_FELT_DELIMITER" --type rust
Length of output: 5805
crates/torii/core/src/sql/erc.rs (2)
11-11
: Ohayo! The import change looks good, sensei!
The update to use SQL_FELT_DELIMITER
aligns with the codebase-wide constant renaming effort.
111-112
: Verify delimiter consistency and migration strategy, sensei!
The balance ID formatting changes look correct, but we should ensure:
- The new
SQL_FELT_DELIMITER
value matches the old delimiter to maintain compatibility with existing balance records - If the delimiter value has changed, a migration strategy is needed
Also applies to: 122-123
✅ Verification successful
All good, sensei! The delimiter is consistent across the codebase
The verification shows:
- Only
SQL_FELT_DELIMITER
is used throughout the codebase - no oldFELT_DELIMITER
constant exists - The delimiter is consistently defined as "/" in
crates/torii/core/src/constants.rs
- All balance ID formatting and parsing operations use this same constant
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check delimiter consistency across the codebase
# Check if the old FELT_DELIMITER is still used anywhere
rg "FELT_DELIMITER" --type rust
# Check all occurrences of SQL_FELT_DELIMITER to verify consistent usage
rg "SQL_FELT_DELIMITER" --type rust -A 2
Length of output: 8828
crates/torii/core/src/sql/utils.rs (2)
Line range hint 37-297
: Ohayo sensei! The I256 implementation is rock solid! 🎯
The unchanged I256 implementation demonstrates excellent practices:
- Comprehensive test coverage including edge cases
- Clear and correct handling of signed arithmetic
- Well-documented test cases
8-8
: Ohayo sensei! The delimiter changes look good, but let's verify the trailing delimiter.
The standardization of using SQL_FELT_DELIMITER
improves code clarity. However, I notice we're adding an extra delimiter at the end in felts_to_sql_string
. Let's verify if this trailing delimiter is expected by consumers of this function.
Also applies to: 11-12, 33-33
✅ Verification successful
Ohayo sensei! The trailing delimiter is intentionally used for string splitting.
Looking at the codebase, I found that sql_string_to_felts
uses split(SQL_FELT_DELIMITER)
to parse the string back into felts. The trailing delimiter in felts_to_sql_string
is correct and necessary to maintain consistency with how the values are split and processed throughout the codebase, particularly in:
sql_string_to_felts
implementation that splits on the delimiter- GraphQL query processing that relies on this format
- Event message and entity processing in the gRPC server
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if any code relies on the trailing delimiter
# Search for direct usage of felts_to_sql_string
rg -A 3 "felts_to_sql_string"
# Search for SQL queries that might process the output
rg -A 3 "SELECT.*FROM.*WHERE.*IN.*\("
Length of output: 6031
Script:
#!/bin/bash
# Let's check the implementation and usage of sql_string_to_felts to understand the delimiter handling
ast-grep --pattern 'fn sql_string_to_felts'
# Also search for tests that might verify the delimiter behavior
rg -A 10 "sql_string_to_felts|felts_to_sql_string.*test"
# Check if there are any SQL queries that use string splitting on felt values
rg -A 3 "split|SPLIT.*SQL_FELT_DELIMITER"
Length of output: 43890
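The trailing-delimiter convention verified above can be sketched as a round trip. The real helpers in crates/torii/core/src/sql/utils.rs operate on Felt values; plain strings stand in for them here, and the empty-piece filtering is an assumption about how the split side handles the trailing delimiter.

```rust
// Sketch of the SQL_FELT_DELIMITER encode/decode round trip.
const SQL_FELT_DELIMITER: &str = "/";

fn felts_to_sql_string(felts: &[&str]) -> String {
    // Each felt is followed by the delimiter, so the result ends with one.
    felts.iter().map(|f| format!("{f}{SQL_FELT_DELIMITER}")).collect()
}

fn sql_string_to_felts(s: &str) -> Vec<String> {
    // Splitting on the delimiter yields a trailing empty piece; filter it out.
    s.split(SQL_FELT_DELIMITER)
        .filter(|p| !p.is_empty())
        .map(str::to_string)
        .collect()
}

fn main() {
    let encoded = felts_to_sql_string(&["0x1", "0x2"]);
    assert_eq!(encoded, "0x1/0x2/"); // note the trailing delimiter
    assert_eq!(sql_string_to_felts(&encoded), vec!["0x1", "0x2"]);
    println!("ok");
}
```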
crates/torii/core/src/executor/erc.rs (2)
16-16
: Ohayo! Clean constant renaming, sensei! ٩(◕‿◕。)۶
The renamed constants SQL_FELT_DELIMITER
and IPFS_CLIENT_MAX_RETRY
better reflect their specific purposes, improving code clarity.
Also applies to: 19-19
Line range hint 230-234
: Verify IPFS retry configuration, sensei! (⌐■_■)
The code now uses IPFS_CLIENT_MAX_RETRY
for IPFS content fetching. Let's verify the constant's value to ensure optimal retry behavior.
✅ Verification successful
Ohayo! The IPFS retry configuration looks solid, sensei! (⌐■_■)
The constant IPFS_CLIENT_MAX_RETRY
is set to 3 retries, which is a reasonable default value for IPFS content fetching. It's consistently used across the codebase in all IPFS-related operations:
- Metadata fetching in ERC executor
- General IPFS content fetching utility
- Metadata update processor
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check the IPFS_CLIENT_MAX_RETRY constant value and its usage
# Search for the constant definition
rg -p "const IPFS_CLIENT_MAX_RETRY\s*=\s*\d+" --type rust
# Check for other usages of the constant
rg "IPFS_CLIENT_MAX_RETRY" --type rust
Length of output: 974
```rust
let client = IpfsClient::from_str(IPFS_CLIENT_URL)?
    .with_credentials(IPFS_CLIENT_USERNAME, IPFS_CLIENT_PASSWORD);
```
💡 Codebase verification
Hardcoded credentials detected in source code
Ohayo sensei! I found a security concern that needs attention:
- The IPFS client credentials are hardcoded in `crates/torii/core/src/constants.rs`:

```rust
pub const IPFS_CLIENT_USERNAME: &str = "2EBrzr7ZASQZKH32sl2xWauXPSA";
pub const IPFS_CLIENT_PASSWORD: &str = "12290b883db9138a8ae3363b6739d220";
```
These credentials should be moved to environment variables or a secure configuration system instead of being hardcoded in the source code. This is important for:
- Security: Preventing credential exposure in version control
- Flexibility: Allowing different credentials in different environments
- Best practices: Following secure credential management principles
🔗 Analysis chain
Verify credentials handling in production environment
While the code structure is good, let's ensure proper credential management.
The previous search had an issue with file type recognition. Let's broaden our search to find configuration files and credential patterns across the codebase to verify proper credential management.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if credentials are properly configured in environment files
rg -l "IPFS_CLIENT_(USERNAME|PASSWORD)" --type yaml --type env
Length of output: 92
Script:
#!/bin/bash
# Let's check for environment-related files and configuration patterns
fd -e yaml -e yml -e env -e example
# Check for IPFS credential patterns in all files
rg "IPFS_CLIENT_(USERNAME|PASSWORD)" -A 2 -B 2
# Look for environment variable documentation
fd "README|ENV" -i
Length of output: 2299
```rust
pub const IPFS_URL: &str = "https://ipfs.io/ipfs/";
pub const IPFS_CLIENT_MAX_RETRY: u8 = 3;

pub const IPFS_CLIENT_URL: &str = "https://ipfs.infura.io:5001";
pub const IPFS_CLIENT_USERNAME: &str = "2EBrzr7ZASQZKH32sl2xWauXPSA";
pub const IPFS_CLIENT_PASSWORD: &str = "12290b883db9138a8ae3363b6739d220";
```
In sozo we moved this to be read from env variables, and default to cartridge's if nothing is found. Maybe something we want here in a future, to enable usage of custom IPFS nodes.
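The env-variable fallback described here could look roughly like the sketch below. The variable name and helper are hypothetical illustrations, not sozo's or torii's actual code; only the default URL comes from the constants shown above.

```rust
use std::env;

// Hypothetical fallback: read the IPFS endpoint from the environment,
// defaulting to the built-in constant when nothing is set.
const DEFAULT_IPFS_CLIENT_URL: &str = "https://ipfs.infura.io:5001";

fn ipfs_client_url() -> String {
    env::var("TORII_IPFS_CLIENT_URL").unwrap_or_else(|_| DEFAULT_IPFS_CLIENT_URL.to_string())
}

fn main() {
    env::remove_var("TORII_IPFS_CLIENT_URL");
    assert_eq!(ipfs_client_url(), DEFAULT_IPFS_CLIENT_URL);

    env::set_var("TORII_IPFS_CLIENT_URL", "http://localhost:5001");
    assert_eq!(ipfs_client_url(), "http://localhost:5001");
    println!("ok");
}
```

The same pattern would apply to the username and password constants, which also removes the hardcoded credentials flagged above from source control.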
Clippy and tests pass; should be ready. 👍
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
crates/torii/server/src/artifacts.rs (2)
Line range hint 207-220: Ohayo sensei! Let's enhance the IPFS handling! 🚀

While the IPFS fetching works correctly, we could improve the code structure and error handling:

- Consider extracting the IPFS-specific image processing into a separate function for better maintainability
- Add more specific error context for IPFS failures, including gateway information

Here's a suggested refactor:

```diff
-                let response = fetch_content_from_ipfs(cid, IPFS_CLIENT_MAX_RETRY)
-                    .await
-                    .context("Failed to read image bytes from IPFS response")?;
-
-                if response.starts_with(b"<svg") || response.starts_with(b"<?xml") {
-                    ErcImageType::Svg(response.to_vec())
-                } else {
-                    let format = image::guess_format(&response).with_context(|| {
-                        format!(
-                            "Unknown file format for token_id: {}, cid: {}, data: {:?}",
-                            token_id, cid, &response
-                        )
-                    })?;
-                    ErcImageType::DynamicImage((
-                        image::load_from_memory(&response)
-                            .context("Failed to load image from bytes")?,
-                        format,
-                    ))
-                }
+                process_ipfs_image(cid, token_id).await?

+async fn process_ipfs_image(cid: &str, token_id: &str) -> Result<ErcImageType> {
+    let response = fetch_content_from_ipfs(cid, IPFS_CLIENT_MAX_RETRY)
+        .await
+        .with_context(|| format!("Failed to fetch image from IPFS gateway for CID: {}", cid))?;
+
+    if response.starts_with(b"<svg") || response.starts_with(b"<?xml") {
+        Ok(ErcImageType::Svg(response.to_vec()))
+    } else {
+        let format = image::guess_format(&response)
+            .with_context(|| format!("Invalid image format for token_id: {} (CID: {})", token_id, cid))?;
+
+        let image = image::load_from_memory(&response)
+            .with_context(|| format!("Failed to decode image for token_id: {} (CID: {})", token_id, cid))?;
+
+        Ok(ErcImageType::DynamicImage((image, format)))
+    }
+}
```
Line range hint `150-316`: Ohayo sensei! Let's discuss the architecture! 🏗️

The `fetch_and_process_image` function is handling multiple responsibilities:

- Metadata fetching
- Image source determination
- Image processing
- File system operations

Consider splitting this into smaller, more focused functions using the Command pattern:
```rust
trait ImageProcessor {
    async fn process(&self, token_id: &str) -> Result<String>;
}

struct HttpImageProcessor;
struct IpfsImageProcessor;
struct DataUriImageProcessor;

struct ImageProcessorFactory {
    fn create_processor(uri: &str) -> Box<dyn ImageProcessor>;
}
```

This would make the code more maintainable and testable.
Would you like me to help create a detailed implementation plan for this refactoring?
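As a rough, dependency-free sketch of how that dispatch could look, here is a synchronous, std-only version. The processor and factory names come from the reviewer's suggestion above, but the bodies are hypothetical stand-ins; the real implementation would be async and perform actual fetching.

```rust
// Minimal synchronous sketch of the suggested Command/Strategy split.
// The real code would be async and call actual fetchers; this only shows
// the trait-object dispatch and the URI-scheme-based factory.
trait ImageProcessor {
    fn process(&self, token_id: &str) -> Result<String, String>;
}

struct HttpImageProcessor;
struct IpfsImageProcessor;
struct DataUriImageProcessor;

impl ImageProcessor for HttpImageProcessor {
    fn process(&self, token_id: &str) -> Result<String, String> {
        // Placeholder: a real impl would fetch over HTTP.
        Ok(format!("http:{token_id}"))
    }
}

impl ImageProcessor for IpfsImageProcessor {
    fn process(&self, token_id: &str) -> Result<String, String> {
        // Placeholder: a real impl would go through the IPFS gateway.
        Ok(format!("ipfs:{token_id}"))
    }
}

impl ImageProcessor for DataUriImageProcessor {
    fn process(&self, token_id: &str) -> Result<String, String> {
        // Placeholder: a real impl would decode the inline data URI.
        Ok(format!("data:{token_id}"))
    }
}

// Factory picks a processor from the URI scheme.
fn create_processor(uri: &str) -> Box<dyn ImageProcessor> {
    if uri.starts_with("ipfs://") {
        Box::new(IpfsImageProcessor)
    } else if uri.starts_with("data:") {
        Box::new(DataUriImageProcessor)
    } else {
        Box::new(HttpImageProcessor)
    }
}

fn main() {
    let p = create_processor("ipfs://QmFoo");
    assert_eq!(p.process("42").unwrap(), "ipfs:42");
    println!("ok");
}
```

Each source type can then be unit-tested in isolation, which is the maintainability gain the reviewer is pointing at.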
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
crates/torii/server/src/artifacts.rs (2 hunks)
🔇 Additional comments (1)
crates/torii/server/src/artifacts.rs (1)
`18-18`: Ohayo sensei! LGTM on the constants refactor! 🎯

The change to use `IPFS_CLIENT_MAX_RETRY` from the centralized constants improves code organization and makes the retry limit's purpose more explicit.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
crates/torii/cli/src/args.rs (1)
Line range hint `294-314`: Ohayo sensei! Consider improving test readability with helper functions

While the assertions are correct and comprehensive, we could make the test more maintainable by extracting the expected values into helper functions or constants.
Here's a suggested improvement:
```diff
+ const EXPECTED_CONTRACTS: Vec<Contract> = vec![
+     Contract {
+         address: Felt::from_str("0x1234").unwrap(),
+         r#type: ContractType::ERC20
+     },
+     Contract {
+         address: Felt::from_str("0x5678").unwrap(),
+         r#type: ContractType::ERC721
+     }
+ ];

  // In the test function
- assert_eq!(
-     torii_args.indexing.contracts,
-     vec![
-         Contract {
-             address: Felt::from_str("0x1234").unwrap(),
-             r#type: ContractType::ERC20
-         },
-         Contract {
-             address: Felt::from_str("0x5678").unwrap(),
-             r#type: ContractType::ERC721
-         }
-     ]
- );
+ assert_eq!(torii_args.indexing.contracts, EXPECTED_CONTRACTS);
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (2)
crates/torii/cli/src/args.rs (2 hunks)
crates/torii/cli/src/options.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/torii/cli/src/options.rs
🔇 Additional comments (1)
crates/torii/cli/src/args.rs (1)
`260-260`: Ohayo sensei! LGTM - Raw events configuration update looks good!

The change to set `raw = true` in the test configuration aligns well with testing the raw events flag behavior.
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main    #2702      +/-   ##
==========================================
+ Coverage   56.17%   56.25%   +0.08%
==========================================
  Files         411      410       -1
  Lines       52958    52846     -112
==========================================
- Hits        29749    29731      -18
+ Misses      23209    23115      -94
```

☔ View full report in Codecov by Sentry.
A few other comments; all good to merge after that. 👍
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
crates/torii/graphql/src/object/entity.rs (1)
Line range hint `183-184`: Consider standardizing all identifier formats, sensei!

While we're standardizing model identifiers with `get_tag`, I notice that table names in this function still use `$` as a separator. Consider standardizing this format as well for consistency.

crates/torii/graphql/src/object/model_data.rs (1)
`110-121`: Clean implementation, but let's optimize those queries, sensei!

The switch to `get_tag` for table name generation is good. However, there's an opportunity to optimize the database queries. Consider batching the count and fetch queries to reduce database roundtrips. Here's a suggested approach:
```diff
- let total_count = count_rows(&mut conn, &table_name, &None, &filters).await?;
- let (data, page_info) = fetch_multiple_rows(
+ let (total_count, (data, page_info)) = tokio::try_join!(
+     count_rows(&mut conn, &table_name, &None, &filters),
+     fetch_multiple_rows(
      &mut conn,
      &table_name,
      EVENT_ID_COLUMN,
      &None,
      &order,
      &filters,
      &connection,
-     total_count,
+     0, // Will be updated after count
  )
- .await?;
+ )?;
```

This change would allow both queries to execute concurrently, potentially reducing the response time.
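The concurrency win behind that suggestion can be illustrated without tokio using scoped threads; the two functions below are hypothetical stand-ins for the count and fetch queries, not the real torii APIs:

```rust
use std::thread;

// Hypothetical stand-ins for count_rows and fetch_multiple_rows:
// each returns a Result, as the real async queries do.
fn count_rows() -> Result<u64, String> {
    Ok(42)
}

fn fetch_rows() -> Result<Vec<&'static str>, String> {
    Ok(vec!["a", "b"])
}

fn main() {
    // Run both "queries" at the same time, then join both results.
    // This mirrors what tokio::try_join! does for two async futures:
    // total latency ~= max(query_a, query_b) instead of their sum.
    let (count, rows) = thread::scope(|s| {
        let c = s.spawn(count_rows);
        let r = s.spawn(fetch_rows);
        (c.join().unwrap(), r.join().unwrap())
    });

    let total_count = count.unwrap();
    let data = rows.unwrap();
    assert_eq!(total_count, 42);
    assert_eq!(data.len(), 2);
    println!("{} rows of {}", data.len(), total_count);
}
```

In the actual async code, `tokio::try_join!` additionally short-circuits on the first `Err`, which the thread version does not model.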
crates/torii/graphql/src/tests/subscription_test.rs (1)
`85-85`: Consider using `get_tag` consistently across the test

While the struct name now uses `get_tag`, the test still uses `utils::type_name_from_names` for the `type_name` variable. Consider using `get_tag` consistently for both cases to maintain uniformity in naming conventions.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (8)
crates/torii/core/src/model.rs (2 hunks)
crates/torii/core/src/processors/event_message.rs (2 hunks)
crates/torii/core/src/sql/mod.rs (4 hunks)
crates/torii/graphql/src/object/entity.rs (2 hunks)
crates/torii/graphql/src/object/event_message.rs (2 hunks)
crates/torii/graphql/src/object/model_data.rs (3 hunks)
crates/torii/graphql/src/tests/subscription_test.rs (3 hunks)
crates/torii/graphql/src/utils.rs (0 hunks)
💤 Files with no reviewable changes (1)
- crates/torii/graphql/src/utils.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- crates/torii/core/src/processors/event_message.rs
- crates/torii/core/src/sql/mod.rs
🔇 Additional comments (11)
crates/torii/graphql/src/object/event_message.rs (2)
`6-6`: Ohayo! Clean import addition for standardized naming.

The addition of `get_tag` aligns with the codebase-wide standardization of model identifier generation.

`154-154`: Consistent model name generation, sensei!

The switch from string formatting to `get_tag` standardizes model name generation across the codebase, making it more maintainable and less prone to inconsistencies.
Let's verify this standardization across the codebase:
✅ Verification successful
Let me gather more information to verify if there are any remaining string concatenations for model names.
Ohayo! The model name generation is indeed standardized, sensei!
Based on the search results, `get_tag` is consistently used for model tag generation in the core functionality. The remaining `format!` usages serve different purposes:

- Computing selector IDs
- Generating camel/pascal case names for GraphQL
- Creating SQL-specific identifiers

The switch to `get_tag` in the reviewed code aligns perfectly with the standardized approach used throughout the codebase for model name generation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify consistent usage of get_tag for model name generation

# Check for any remaining direct string concatenation of namespace-name
rg -g '*.rs' 'format!\("{.*}-{.*}"\)'

# Check for adoption of get_tag
rg -g '*.rs' 'get_tag\(&.*,\s*&.*\)'
```
Length of output: 1090
Script:
```shell
#!/bin/bash
# Search for potential string formatting patterns for model names
rg -g '*.rs' 'format!.*namespace.*name'

# Search for direct string concatenation patterns
rg -g '*.rs' '["\'].*-.*["\']'

# Look for potential model name construction patterns
ast-grep --pattern 'format!($$$)'
```
Length of output: 922
crates/torii/graphql/src/object/entity.rs (2)
`7-7`: Ohayo! Clean import addition.

The import of `get_tag` from `dojo_types::naming` is properly placed and aligns with the codebase's standardization efforts.

`148-148`: Verify backward compatibility with existing model identifiers, sensei!

The switch from string formatting to `get_tag` for model identifier generation is a good standardization move. However, we should ensure this change maintains compatibility with existing data.
✅ Verification successful
The previous search might have been too specific. Let's broaden our search to understand the context better and verify backward compatibility.
Ohayo sensei! The `get_tag` function maintains backward compatibility

The `get_tag` function is a well-established standardized way to generate model identifiers across the codebase. The evidence shows:

- It's used consistently throughout the codebase for model identification
- The format is stable and well-tested (`namespace-name`)
- The function is part of core types (`dojo_types::naming`) and has extensive test coverage
- Existing code relies on this format for model lookups, storage, and events

The switch to using `get_tag` is safe and maintains backward compatibility with existing model identifiers.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for any existing model identifiers that might be affected by the new format

# Search for existing model identifier patterns in SQL queries
rg -A 2 "FROM \[.*-.*\]"

# Look for other instances where we might need to update the tag format
ast-grep --pattern 'format!("$_-$_")'
```
Length of output: 66
Script:
```shell
#!/bin/bash
# Let's check the implementation of get_tag and related usages
ast-grep --pattern 'fn get_tag($$$)'

# Search for any existing tag-related functions or patterns
rg -g '*.rs' -A 3 'tag|Tag' --ignore-case

# Look for any model identifier related tests
fd test -e rs -x rg -l 'tag|identifier|namespace' {}
```
Length of output: 236366
crates/torii/graphql/src/object/model_data.rs (2)
`4-4`: Ohayo! Clean import addition, sensei!

The addition of `get_tag` from `dojo_types::naming` aligns with the refactoring effort to standardize naming conventions across the codebase.

`73-73`: Clean replacement of naming function, sensei!

The switch to `get_tag` standardizes the type name generation. Let's verify consistent usage across the codebase.
✅ Verification successful
Ohayo! The switch to get_tag is consistently applied across the codebase, sensei!
The verification shows that:
- No instances of `struct_name_from_names` remain in the codebase
- `get_tag` is consistently used across core and graphql modules for namespace and model name handling
- The usage pattern is uniform in all occurrences
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify consistent usage of get_tag vs struct_name_from_names
# Expected: No remaining instances of struct_name_from_names

# Check for any remaining usage of struct_name_from_names
rg "struct_name_from_names"

# Check for consistent usage of get_tag
rg "get_tag.*namespace.*model"
```
Length of output: 997
crates/torii/graphql/src/tests/subscription_test.rs (3)
`8-8`: Ohayo sensei! Import change looks good!

The addition of `get_tag` from `dojo_types::naming` aligns with the refactoring to use a centralized naming convention.

`237-237`: Same naming utility inconsistency as in the previous test

Similar to the previous test, consider using `get_tag` consistently for both struct name and type name generation.
Line range hint `1-577`: Verify complete migration to `get_tag`

Let's verify that all relevant occurrences of the old naming utility have been replaced with `get_tag`.
crates/torii/core/src/model.rs (2)
`6-6`: Ohayo sensei! LGTM on the new import!

The `get_tag` import is well-placed and aligns with the codebase's import organization.

`214-217`: Ohayo sensei! The `get_tag` refactor looks good, but let's verify the impact!

The switch from string formatting to `get_tag` standardizes name generation, which is a good practice. However, we should verify that this change maintains backward compatibility with existing model names.
✅ Verification successful
Let me gather more specific information about the `get_tag` function usage.

Ohayo sensei! The `get_tag` refactor is safe and consistent!

The `get_tag` function is consistently implemented across the codebase to format model names with the pattern `{namespace}{TAG_SEPARATOR}{name}`. The refactor in the review:

- Replaces string formatting with the standardized `get_tag` function
- Maintains the same namespace-name format (`namespace-name`)
- Is used consistently for model name generation across different components (GraphQL, Core, World)

The function definition in `naming.rs` confirms it produces identical output to the previous string formatting:
```rust
pub fn get_tag(namespace: &str, name: &str) -> String {
    format!("{namespace}{TAG_SEPARATOR}{name}")
}
```
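The equivalence claimed above can be checked with a standalone sketch. This re-implements the same formatting outside `dojo_types`, assuming `TAG_SEPARATOR` is `'-'` (consistent with the `namespace-name` examples in this review):

```rust
// Standalone sketch of the tag format shown above, outside dojo_types.
// TAG_SEPARATOR is assumed to be '-' based on the `namespace-name` format
// used throughout this review.
const TAG_SEPARATOR: char = '-';

fn get_tag(namespace: &str, name: &str) -> String {
    format!("{namespace}{TAG_SEPARATOR}{name}")
}

fn main() {
    // Old call sites built the string inline; both forms produce the same tag,
    // which is why the refactor is backward compatible.
    let inline = format!("{}-{}", "dojo_examples", "Position");
    assert_eq!(get_tag("dojo_examples", "Position"), inline);
    println!("{}", get_tag("dojo_examples", "Position")); // dojo_examples-Position
}
```

Since the output strings are byte-for-byte identical, existing rows and lookups keyed on the old format keep working.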
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify the impact of get_tag on existing model names
# Test: Search for existing model name patterns to ensure compatibility

# Look for existing model name patterns
echo "Checking existing model name patterns..."
rg -g '*.{rs,sql}' '[a-zA-Z0-9_]+-[a-zA-Z0-9_]+' --type rust --type sql

# Look for potential model name generation patterns
echo "Checking model name generation patterns..."
ast-grep --pattern 'format!("{}-{}", $_, $_)'
```
Length of output: 66036
Script:
#!/bin/bash
# Search for get_tag function definition and usage
echo "Searching for get_tag function definition..."
ast-grep --pattern 'fn get_tag($_, $_) { $$$_ }'
echo "Searching for get_tag function calls..."
rg "get_tag\(" -A 2 -B 2
Length of output: 6604
Summary by CodeRabbit

Release Notes

New Features

- `PENDING_BLOCKS`, enhancing the engine's indexing capabilities.
- `EventProcessorConfig` struct for evaluating historical events and indexing criteria.

Bug Fixes

Chores

- `QueryQueue` struct and related methods for managing SQL queries, streamlining the codebase.
- `raw` field in `EventsOptions` from `true` to `false`, altering default event indexing behavior.
- `SQL_FELT_DELIMITER` for consistency in processing.
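The `config.flags.contains(IndexingFlags::PENDING_BLOCKS)` check discussed in this PR follows the usual bitflags pattern. A dependency-free sketch of that pattern is below; the real torii type is presumably generated by the `bitflags` crate, and the `TRANSACTIONS` and `RAW_EVENTS` variants here are illustrative guesses — only `PENDING_BLOCKS` appears in this PR.

```rust
// Illustrative stand-in for a bitflags-style IndexingFlags type.
// Only PENDING_BLOCKS is confirmed by this PR; the other variants
// and the manual bit arithmetic are assumptions for the sketch.
#[derive(Clone, Copy, PartialEq, Debug)]
struct IndexingFlags(u32);

impl IndexingFlags {
    const TRANSACTIONS: IndexingFlags = IndexingFlags(1 << 0);
    const RAW_EVENTS: IndexingFlags = IndexingFlags(1 << 1);
    const PENDING_BLOCKS: IndexingFlags = IndexingFlags(1 << 2);

    // True if every bit of `other` is set in `self`.
    fn contains(self, other: IndexingFlags) -> bool {
        self.0 & other.0 == other.0
    }

    // Combine two flag sets.
    fn union(self, other: IndexingFlags) -> IndexingFlags {
        IndexingFlags(self.0 | other.0)
    }
}

fn main() {
    let flags = IndexingFlags::PENDING_BLOCKS.union(IndexingFlags::RAW_EVENTS);
    // Mirrors the engine's `config.flags.contains(IndexingFlags::PENDING_BLOCKS)` gate.
    assert!(flags.contains(IndexingFlags::PENDING_BLOCKS));
    assert!(!flags.contains(IndexingFlags::TRANSACTIONS));
    println!("pending blocks enabled: {}", flags.contains(IndexingFlags::PENDING_BLOCKS));
}
```

Gating pending-block fetching on such a flag keeps the default indexing path cheap while letting operators opt in per deployment.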