feat(torii): index whitelisted erc20/erc721 #2442
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough
Ohayo, sensei! This pull request enhances the application by introducing new command-line arguments for specifying ERC contract addresses and their configurations. It modifies the processing logic for ERC20 and ERC721 token transfers, adds a new configuration file for ERC contracts, and expands the database schema to manage token balances and transfers, thus improving the overall functionality related to ERC tokens.
Actionable comments posted: 10
Outside diff range and nitpick comments (5)
scripts/deploy_erc20_katana.sh (1)
3-3: Enhance script readability with comments
Sensei, adding comments to explain the purpose of each parameter in the `starkli deploy` command will improve the script's maintainability and help others understand its functionality.
crates/torii/core/src/processors/erc20_transfer.rs (1)
14-58: Consider adding unit tests for `Erc20TransferProcessor`
Sensei, to ensure the robustness of the `Erc20TransferProcessor`, consider adding unit tests that cover various scenarios, including valid events, invalid events, and error handling. This will help maintain code quality and catch potential issues early.
crates/torii/core/src/sql/mod.rs (3)
35-35: Updating test module path to `test.rs`
The test module path has been updated from `sql_test.rs` to `test.rs`. Ensure that this change is reflected in any documentation or build scripts that reference the test module. Confirm that all tests are still properly located and executed with the new module path.
39-39: Review the necessity of publicly re-exporting the `utils` module
The line `pub use utils::*;` publicly exposes all items from the `utils` module. If these utilities are intended for internal use only within the `sql` module, consider removing the `pub` keyword to encapsulate the implementation details. This change can help maintain a clean public API and prevent unintended usage of internal utilities.
742-746: Simplify conditional expression in `build_set_entity_queries_recursive`
The conditional expression can be simplified for readability. Consider rewriting the condition as:

```rust
if let Ty::Tuple(t) = &o.ty { t.is_empty() } else { false }
```

This makes the intent clearer and improves code readability.
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
Files selected for processing (15)
- bin/torii/src/main.rs (7 hunks)
- bin/torii/torii.toml (1 hunks)
- crates/torii/core/Cargo.toml (1 hunks)
- crates/torii/core/src/engine.rs (12 hunks)
- crates/torii/core/src/processors/erc20_legacy_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc20_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc721_transfer.rs (1 hunks)
- crates/torii/core/src/processors/mod.rs (2 hunks)
- crates/torii/core/src/sql/erc.rs (1 hunks)
- crates/torii/core/src/sql/mod.rs (6 hunks)
- crates/torii/core/src/sql/utils.rs (1 hunks)
- crates/torii/core/src/types.rs (2 hunks)
- crates/torii/migrations/20240913104418_add_erc.sql (1 hunks)
- scripts/deploy_erc20_katana.sh (1 hunks)
- scripts/send_erc20_transfer.sh (1 hunks)
Files skipped from review due to trivial changes (1)
- bin/torii/torii.toml
Additional comments not posted (20)
scripts/deploy_erc20_katana.sh (1)
3-3: Consider securing your keystore with a password
Sensei, the script uses an empty password for the keystore (`--keystore-password ""`), which may pose a security risk if the keystore contains sensitive information. It's recommended to secure the keystore with a strong password to protect your private keys. [security]
scripts/send_erc20_transfer.sh (1)
12-12: Consider securing your keystore with a password
Sensei, the script uses an empty password for the keystore (`--keystore-password ""`), which may pose a security risk. It's advisable to secure the keystore with a strong password to protect your private keys. [security]
crates/torii/migrations/20240913104418_add_erc.sql (1)
28-29: Consider adding indexes on `from_address` and `to_address`
Sensei, to improve query performance when filtering or joining on `from_address` and `to_address`, consider adding indexes on these columns in the `erc_transfers` table. [performance]
Apply this SQL to add the indexes:

```sql
CREATE INDEX erc_transfers_from_address ON erc_transfers (from_address);
CREATE INDEX erc_transfers_to_address ON erc_transfers (to_address);
```

crates/torii/core/Cargo.toml (1)
41-41: Dependency addition looks good
Sensei, adding the `toml` crate to the workspace dependencies is appropriate and will allow for consistent use across the project.
crates/torii/core/src/processors/erc20_legacy_transfer.rs (2)
26-34: Ohayo, sensei! Validation Logic is Solid
The `validate` method appropriately checks that the event has exactly one key and four data elements, aligning with the expected structure of a legacy ERC20 transfer event. This ensures that only correctly formatted events are processed.
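As a concrete starting point for the unit tests suggested earlier, here is a minimal, self-contained sketch of the check this comment describes. The `Event` struct and `u64` fields are illustrative stand-ins, not the crate's actual event or field-element types:

```rust
// Stand-in event type: the real processors operate on Starknet's emitted
// events, whose `keys` and `data` are vectors of field elements.
struct Event {
    keys: Vec<u64>,
    data: Vec<u64>,
}

// A legacy ERC20 Transfer carries one key (the event selector) and four
// data elements (from, to, value.low, value.high), per the comment above.
fn validate_legacy_erc20_transfer(event: &Event) -> bool {
    event.keys.len() == 1 && event.data.len() == 4
}

fn main() {
    let valid = Event { keys: vec![1], data: vec![10, 20, 5, 0] };
    let invalid = Event { keys: vec![1, 2], data: vec![10, 20, 5] };
    assert!(validate_legacy_erc20_transfer(&valid));
    assert!(!validate_legacy_erc20_transfer(&invalid));
    println!("validation sketch passed");
}
```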
36-57: Efficient Handling of Legacy ERC20 Transfer Events
The `process` method correctly extracts the `from`, `to`, and `value` fields from the event data. Good job on deserializing the value properly and logging the transfer details. This will help in accurate tracking of token transfers.
crates/torii/core/src/processors/erc721_transfer.rs (2)
26-35: Ohayo, sensei! Proper Validation for ERC721 Events
The `validate` method correctly checks that the event has five keys and no data, which matches the expected structure of an ERC721 transfer event. This validation ensures that only relevant events are processed.
46-52: Ensure Correct Extraction of `from`, `to`, and `token_id` Fields
Please verify that the indices used to extract `from`, `to`, and `token_id` from `event.keys` match the ERC721 event structure. Specifically, confirm that:
- `event.keys[1]` corresponds to the `from` address.
- `event.keys[2]` corresponds to the `to` address.
- `event.keys[3..5]` are correctly used to deserialize the `token_id` (see the sketch below).
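For illustration, a self-contained sketch of that key layout. The `u64` values stand in for field elements, and the limb recombination only shows which indices hold which field; it is not the crate's actual deserialization:

```rust
// Modern ERC721 Transfer events put everything in `keys`:
// [selector, from, to, token_id.low, token_id.high].
struct Event {
    keys: Vec<u64>,
}

fn extract_erc721_transfer(event: &Event) -> (u64, u64, u128) {
    let from = event.keys[1];
    let to = event.keys[2];
    // keys[3..5] are the two limbs of the u256 token id; with u64 stand-ins
    // we simply recombine the low and high words.
    let token_id = (event.keys[3] as u128) | ((event.keys[4] as u128) << 64);
    (from, to, token_id)
}

fn main() {
    let event = Event { keys: vec![0xde, 0xa11ce, 0xb0b, 42, 0] };
    assert_eq!(extract_erc721_transfer(&event), (0xa11ce, 0xb0b, 42));
}
```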
crates/torii/core/src/processors/mod.rs (2)
15-17: Ohayo, sensei! Modules Exported Successfully
Uncommenting the module declarations for `erc20_legacy_transfer`, `erc20_transfer`, and `erc721_transfer` correctly includes these processors in the processing logic. This expansion allows the system to handle a wider range of events related to ERC token transfers.
84-90: Correct Modification to Support Multiple Processors per Event Key
Changing the `generate_event_processors_map` function to use `Vec<Box<dyn EventProcessor<P>>>` allows multiple processors to be associated with the same event key. This modification enhances flexibility in event handling. Please verify that all parts of the codebase that interact with this function can accommodate the updated return type and structure.
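A compact sketch of the resulting one-key-to-many-processors shape, with a plain trait standing in for the crate's `EventProcessor<P>`:

```rust
use std::collections::HashMap;

// Stand-in for the crate's EventProcessor<P> trait.
trait EventProcessor {
    fn name(&self) -> &'static str;
}

struct TransferProcessor;
impl EventProcessor for TransferProcessor {
    fn name(&self) -> &'static str { "erc20_transfer" }
}

struct LegacyTransferProcessor;
impl EventProcessor for LegacyTransferProcessor {
    fn name(&self) -> &'static str { "erc20_legacy_transfer" }
}

fn main() {
    // Both processors share the Transfer selector, so they register under
    // the same key; the vector is created lazily on first insertion,
    // mirroring entry(key).or_insert(vec![]).push(processor).
    let mut map: HashMap<u64, Vec<Box<dyn EventProcessor>>> = HashMap::new();
    let transfer_selector: u64 = 0x99; // placeholder event key

    map.entry(transfer_selector).or_insert_with(Vec::new).push(Box::new(TransferProcessor));
    map.entry(transfer_selector).or_insert_with(Vec::new).push(Box::new(LegacyTransferProcessor));

    assert_eq!(map[&transfer_selector].len(), 2);
    assert_eq!(map[&transfer_selector][0].name(), "erc20_transfer");
}
```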
bin/torii/src/main.rs (1)
Line range hint 122-145: Ohayo, sensei! Conflicting Arguments Managed Properly
The introduction of the `erc_contracts` parameter with a conflict against the `config` parameter is handled correctly. This prevents ambiguous configurations and ensures that only one source of ERC contract configuration is used at a time.
crates/torii/core/src/engine.rs (5)
9-9: Ohayo, sensei! Preparing for concurrent event fetching with `join_all`
The addition of `use futures_util::future::join_all;` allows for concurrent execution of futures, which can significantly improve the performance of fetching events from multiple sources.
521-530: Ohayo, sensei! Ensure thread safety when merging local databases in parallel tasks
In the `process_tasks` method, each task processes events using a `local_db`, which is then merged into `self.db`. Ensure that the `merge` function is thread-safe and can handle concurrent modifications without causing data races or deadlocks. Please confirm that `self.db.merge(local_db)?;` is safe in a concurrent context. If necessary, consider adding synchronization mechanisms or using thread-safe data structures to protect shared resources during the merge.
604-606: Filtering events to process only relevant ones
The added condition ensures that only events from the world contract or the specified tokens are processed. This helps in skipping unrelated events and improves processing efficiency.
Line range hint 710-747: Verify task identification logic for parallelized event processing
The `task_identifier` is determined based on the event data for specific event keys. This mechanism is used to group events for parallel processing. Please confirm that this approach correctly identifies tasks and that events are grouped appropriately. Also, consider documenting the rationale behind using specific event keys for task identification for clarity.
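For background, one common way to derive such a task identifier is to hash the event fields that key the entity. This self-contained sketch uses `std`'s `DefaultHasher` with `u64` stand-ins; the choice of the first two data elements is illustrative, not a claim about this codebase:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash the fields that identify the target entity, so all events touching
// the same entity map to the same task (and stay ordered relative to each
// other), while unrelated entities can be processed in parallel.
fn task_identifier(data: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    data[0].hash(&mut hasher);
    data[1].hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Same (model, entity) pair => same task id, regardless of payload.
    assert_eq!(task_identifier(&[7, 42, 1]), task_identifier(&[7, 42, 99]));
    // Different entity => (almost certainly) a different task id.
    assert_ne!(task_identifier(&[7, 42, 1]), task_identifier(&[7, 43, 1]));
}
```

The trade-off is between parallelism (more distinct task ids) and ordering guarantees (events sharing a task id are serialized), which is why the hashed fields must uniquely identify the entity being written.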
756-780: Addition of `get_all_events` function for comprehensive event retrieval
The new `get_all_events` function properly handles pagination by looping until all events are retrieved. This ensures that all events are collected, which is essential for processing large numbers of events without missing any.
crates/torii/core/src/sql/mod.rs (4)
22-23: Ohayo, sensei! Importing necessary types for ERC contract handling
The addition of imports for `ErcContract`, `Event`, `EventMessage`, and `Model` ensures that the types needed for the new functionality are available.
33-33: Introducing the `erc` module for ERC contract support
The inclusion of `mod erc;` indicates the introduction of a new module to handle ERC contracts. This modular approach keeps the codebase organized and maintainable.
61-65: Ohayo, sensei! Extending `Sql::new` to initialize with ERC contracts
The `Sql::new` function now accepts `erc_contracts` to insert ERC contract data into the database during initialization. This change allows the system to recognize and manage ERC contracts effectively from the start.
79-91: Inserting ERC contracts into the `contracts` table
The code correctly iterates through `erc_contracts` and inserts each contract into the `contracts` table if it doesn't already exist. This ensures that all relevant ERC contracts are accounted for in the database.
@lambda-0x please consider indexing ERC-1155 as well.
@rsodre thanks for the feedback. We have designed the tables in such a way that it should be straightforward to add, so we will try to add it in this PR itself. Although I am curious how games use ERC-1155: do they mainly need one ERC20 and multiple ERC721s, or just multiple ERC20s?
Force-pushed from d2b7c23 to d53a2e4.
Actionable comments posted: 2
Outside diff range and nitpick comments (1)
crates/torii/core/src/engine.rs (1)
Line range hint 672-749: Ensure Catch-All Event Processor Invocation
Ohayo, sensei! The current implementation might skip processing events if no specific processors are found, even when the `catch_all_event` processor is meant to handle them. Adjust the logic to ensure the `catch_all_event` processor is invoked appropriately:

```diff
 let Some(processors) = self.processors.event.get(&event_key) else {
     // If we don't have a processor for this event key
+    if self.processors.catch_all_event.validate(event) {
+        if let Err(e) = self
+            .processors
+            .catch_all_event
+            .process(
+                &self.world,
+                &mut self.db,
+                block_number,
+                block_timestamp,
+                event_id,
+                event,
+            )
+            .await
+        {
+            error!(target: LOG_TARGET, error = %e, "Processing catch-all event.");
+        }
+    } else {
+        // Log unprocessed events if necessary
+    }
     return Ok(());
 };
```

This ensures that events without specific processors are still handled by the `catch_all_event` processor.
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
Files selected for processing (18)
- bin/torii/src/main.rs (7 hunks)
- bin/torii/torii.toml (1 hunks)
- crates/torii/core/Cargo.toml (1 hunks)
- crates/torii/core/src/engine.rs (13 hunks)
- crates/torii/core/src/lib.rs (0 hunks)
- crates/torii/core/src/processors/erc20_legacy_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc20_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc721_transfer.rs (1 hunks)
- crates/torii/core/src/processors/mod.rs (2 hunks)
- crates/torii/core/src/processors/store_set_record.rs (1 hunks)
- crates/torii/core/src/sql/erc.rs (1 hunks)
- crates/torii/core/src/sql/mod.rs (5 hunks)
- crates/torii/core/src/sql/utils.rs (1 hunks)
- crates/torii/core/src/types.rs (2 hunks)
- crates/torii/libp2p/src/server/mod.rs (1 hunks)
- crates/torii/migrations/20240913104418_add_erc.sql (1 hunks)
- scripts/deploy_erc20_katana.sh (1 hunks)
- scripts/send_erc20_transfer.sh (1 hunks)
Files not reviewed due to no reviewable changes (1)
- crates/torii/core/src/lib.rs
Files skipped from review due to trivial changes (4)
- bin/torii/torii.toml
- crates/torii/core/Cargo.toml
- crates/torii/core/src/processors/store_set_record.rs
- crates/torii/libp2p/src/server/mod.rs
Files skipped from review as they are similar to previous changes (10)
- bin/torii/src/main.rs
- crates/torii/core/src/processors/erc20_legacy_transfer.rs
- crates/torii/core/src/processors/erc20_transfer.rs
- crates/torii/core/src/processors/erc721_transfer.rs
- crates/torii/core/src/processors/mod.rs
- crates/torii/core/src/sql/utils.rs
- crates/torii/core/src/types.rs
- crates/torii/migrations/20240913104418_add_erc.sql
- scripts/deploy_erc20_katana.sh
- scripts/send_erc20_transfer.sh
Additional comments not posted (2)
crates/torii/core/src/engine.rs (1)
518-527: Potential Task Execution Issue with Semaphore Acquisition
Ohayo, sensei! In the `process_tasks` method, acquiring a semaphore permit inside the spawned task may not effectively limit the number of concurrent tasks, leading to potential resource exhaustion. Consider acquiring the semaphore permit before spawning the task to ensure proper concurrency control:

```diff
 for (task_id, events) in self.tasks.drain() {
     let db = self.db.clone();
     let world = self.world.clone();
     let processors = self.processors.clone();
     let semaphore = semaphore.clone();
+    let permit = semaphore.acquire().await.unwrap();
     set.spawn(async move {
-        let _permit = semaphore.acquire().await.unwrap();
+        let _permit = permit; // Keep the permit alive for the task duration
         // Task code...
     });
 }
```

This modification ensures that the semaphore controls the number of tasks being spawned, preventing too many concurrent tasks from executing.
Likely invalid or redundant comment.
crates/torii/core/src/sql/mod.rs (1)
742-746: Correct Enum Handling in Recursive Query Building
Ohayo, sensei! In the `build_set_entity_queries_recursive` method, the condition for skipping empty enum options may incorrectly return early, causing code that should execute to be skipped. Modify the condition to prevent unintentional early returns:

```diff
 if e.options.iter().all(|o| match &o.ty {
-    Ty::Tuple(t) => t.is_empty(),
+    Ty::Tuple(t) => t.is_empty(),
+    _ => false,
 }) {
     return;
 }
```

This adjustment ensures that only enums with all empty tuple options are skipped, allowing proper processing of enums with meaningful variants.
Likely invalid or redundant comment.
Force-pushed from fc56768 to 83ac6d1.
Actionable comments posted: 3
Outside diff range and nitpick comments (5)
crates/torii/core/src/sql/test.rs (1)
48-52: Ohayo sensei! The code changes look good to me.
The shift from `Arc::new` to `Box::new` for wrapping the processors is a valid change in ownership semantics. However, please consider the performance implications of using `Box::new` instead of `Arc::new`. While `Box::new` allocates on the heap, `Arc::new` provides reference counting, which may be more suitable in certain scenarios. Ensure that the chosen approach aligns with the performance requirements of the codebase.
crates/torii/core/src/engine.rs (3)
Line range hint 232-349: Ohayo sensei! The changes to `fetch_range` are looking great.
Fetching events concurrently for the world contract and ERC tokens using `join_all` is a significant performance improvement. The processing of fetched events to extract transactions and blocks is handled correctly. One suggestion for improvement: consider collecting the results of each task and handling errors individually. This would allow the function to continue processing events from other contracts even if one task fails, making the engine more resilient to transient errors.
Line range hint 351-372: Ohayo sensei! The `fetch_pending` function looks good.
It correctly fetches the pending block data and handles the case when the pending block is not available. Don't forget to address the TODO comment once the provider is updated to always return a pending block. Let me know if you need any help with that!
Line range hint 552-576: Ohayo sensei! The `process_transaction_with_events` function looks good.
It correctly processes each event associated with the transaction by calling the `process_event` function with the necessary parameters. I noticed that the transaction processor is commented out due to performance reasons. If it is no longer needed, consider removing the commented-out code to keep the codebase clean.
crates/torii/core/src/sql/mod.rs (1)
Line range hint 68-98: Ohayo sensei! Offer assistance with unit tests for `erc_contracts` handling
The updated `new` method now accepts `erc_contracts` and processes them to insert contracts into the database. To ensure this new functionality is reliable, it's important to have appropriate unit tests. I can help by generating unit tests to cover different scenarios, such as inserting multiple contracts and handling duplicates. Do you want me to generate the unit testing code or open a GitHub issue to track this task?
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
Files selected for processing (26)
- bin/torii/src/main.rs (7 hunks)
- bin/torii/torii.toml (1 hunks)
- crates/torii/core/Cargo.toml (1 hunks)
- crates/torii/core/src/engine.rs (24 hunks)
- crates/torii/core/src/lib.rs (0 hunks)
- crates/torii/core/src/processors/erc20_legacy_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc20_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc721_transfer.rs (1 hunks)
- crates/torii/core/src/processors/mod.rs (2 hunks)
- crates/torii/core/src/processors/store_set_record.rs (1 hunks)
- crates/torii/core/src/sql/erc.rs (1 hunks)
- crates/torii/core/src/sql/mod.rs (9 hunks)
- crates/torii/core/src/sql/query_queue.rs (1 hunks)
- crates/torii/core/src/sql/test.rs (5 hunks)
- crates/torii/core/src/sql/utils.rs (1 hunks)
- crates/torii/core/src/types.rs (2 hunks)
- crates/torii/graphql/src/tests/metadata_test.rs (3 hunks)
- crates/torii/graphql/src/tests/mod.rs (3 hunks)
- crates/torii/graphql/src/tests/subscription_test.rs (6 hunks)
- crates/torii/grpc/src/server/tests/entities_test.rs (3 hunks)
- crates/torii/libp2p/src/server/mod.rs (1 hunks)
- crates/torii/libp2p/src/tests.rs (2 hunks)
- crates/torii/migrations/20240913104418_add_erc.sql (1 hunks)
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql (1 hunks)
- scripts/deploy_erc20_katana.sh (1 hunks)
- scripts/send_erc20_transfer.sh (1 hunks)
Files not reviewed due to no reviewable changes (1)
- crates/torii/core/src/lib.rs
Files skipped from review due to trivial changes (3)
- crates/torii/core/src/processors/store_set_record.rs
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql
- scripts/deploy_erc20_katana.sh
Files skipped from review as they are similar to previous changes (11)
- bin/torii/src/main.rs
- bin/torii/torii.toml
- crates/torii/core/Cargo.toml
- crates/torii/core/src/processors/erc20_transfer.rs
- crates/torii/core/src/processors/erc721_transfer.rs
- crates/torii/core/src/sql/erc.rs
- crates/torii/core/src/sql/utils.rs
- crates/torii/core/src/types.rs
- crates/torii/libp2p/src/server/mod.rs
- crates/torii/migrations/20240913104418_add_erc.sql
- scripts/send_erc20_transfer.sh
Additional comments not posted (36)
crates/torii/core/src/processors/erc20_legacy_transfer.rs (5)
1-10: Ohayo sensei! The imports look good to me.
The imported dependencies are relevant and necessary for the functionality implemented in this file. No unused imports detected.
12-15: Ohayo sensei! The constant and struct definitions are on point.
The `LOG_TARGET` constant follows the naming convention and will be useful for logging. The `Erc20LegacyTransferProcessor` struct is derived with appropriate traits.
17-34: Ohayo sensei! The `EventProcessor` trait implementation looks solid.
The `event_key` method returns the correct key for ERC20 transfer events. The `validate` method properly checks the event structure to ensure it matches the expected format for ERC20 transfers.
36-57: Ohayo sensei! The `process` method implementation is top-notch.
The method correctly extracts relevant data from the event, handling the Cairo serialization format for the value. The interaction with the database and logging of transaction details are implemented properly.
58-58: Ohayo sensei! The closing brace is in the right spot.
The closing brace matches the opening brace of the `Erc20LegacyTransferProcessor` struct.
crates/torii/core/src/processors/mod.rs (4)
12-17: Ohayo sensei! The uncommented module declarations look good.
Making the `erc20_legacy_transfer`, `erc20_transfer`, and `erc721_transfer` modules publicly accessible will allow them to be utilized in the processing logic. This change aligns with the objective of expanding functionality related to ERC20 and ERC721 token transfers.
82-82: Ohayo sensei! The `EventProcessors<P>` type alias definition looks good.
Defining `EventProcessors<P>` as `Vec<Box<dyn EventProcessor<P>>>` aligns with the changes made to the `generate_event_processors_map` function. Using `Box` instead of `Arc` indicates a shift in ownership semantics, allowing the function to own the event processors rather than using shared references.
86-87: Ohayo sensei! The modifications to the `generate_event_processors_map` function look good.
Changing the parameter type to `EventProcessors<P>` and the return type to `Result<HashMap<Felt, EventProcessors<P>>>` allows multiple processors to be associated with a single event key. This enhances the flexibility of event handling and enables more complex event processing scenarios.
92-92: Ohayo sensei! The processor insertion logic looks good.
Using `event_processors.entry(key).or_insert(vec![]).push(processor)` ensures that a new vector is created for each event key if it doesn't already exist, and the processor is added to the vector. This allows multiple processors to be associated with a single event key, enhancing the flexibility of event handling.
crates/torii/graphql/src/tests/metadata_test.rs (3)
3-4: Ohayo sensei! The import statement looks good to me.
The `HashMap` import is valid and necessary for the code changes. Nice work!
106-106: Ohayo sensei! The modified `Sql::new` instantiation looks good here too.
The changes are consistent with the updated constructor signature. Please refer to the previous verification script to ensure all occurrences of `Sql::new` in the codebase match the new signature.
53-53: Ohayo sensei! The modified `Sql::new` instantiation looks good.
The changes are consistent with the updated constructor signature. Just make sure to update all other occurrences of `Sql::new` in the codebase to match the new signature. Run the following script to verify the constructor usage:
Verification successful
Ohayo sensei! I've thoroughly examined the usage of `Sql::new` across the codebase, and I'm pleased to report some good news!
All instances of `Sql::new` are consistently using the new signature.
Here's a quick summary of my findings:
- 13 occurrences of `Sql::new` were found across different files.
- All instances use the new three-parameter signature: `Sql::new(pool.clone(), <some_value>, &HashMap::new())`.
- The third parameter is consistently `&HashMap::new()`, which aligns perfectly with the change we're verifying.
No further action is required. The codebase appears to be in harmony with the new `Sql::new` signature. Excellent work, sensei!
Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Verify all occurrences of `Sql::new` match the new signature.
# Test: Search for the constructor usage. Expect: Only occurrences of the new signature.
rg --type rust -A 5 $'Sql::new'
```

Length of output: 7515
crates/torii/grpc/src/server/tests/entities_test.rs (4)
1-1: Ohayo sensei! The import statement looks good.
The `HashMap` import is necessary for the code changes in this file.
108-109: Ohayo sensei! The addition of new event processors looks good.
The `RegisterModelProcessor` and `StoreSetRecordProcessor` are added correctly to the list of event processors.
121-121: Ohayo sensei! The `fetch_range` method call looks good.
The additional `HashMap` parameter is consistent with the method signature change. Run the following script to verify the method usage:

```bash
#!/bin/bash
# Description: Verify all method calls to `fetch_range` match the new signature.
# Test: Search for the method usage. Expect: Only occurrences of the new signature.
rg --type rust -A 5 $'fetch_range'
```
99-99: Ohayo sensei! The `Sql::new` function call looks good.
The additional `HashMap` parameter is consistent with the function signature change. Run the following script to verify the function usage:
Verification successful
Ohayo sensei! It seems our previous attempt didn't quite hit the mark. Let's try again with a corrected script to find what we're looking for!
Ohayo sensei! I've got some good news for you about our `Sql::new` function!
The `Sql::new` function is being used correctly with the new signature throughout the codebase.
After a thorough investigation, I can confirm that:
- All instances of `Sql::new` are called with three parameters: pool, world_address (or Felt::ZERO), and a HashMap.
- The new HashMap parameter is consistently passed as `&HashMap::new()`, creating an empty HashMap in all cases.
- There are no occurrences of the old signature (without the HashMap parameter) being used.
This consistency across the codebase indicates that the change has been properly implemented and there are no lingering issues with the function usage.
Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Verify all function calls to `Sql::new` match the new signature.
# Test: Search for the function usage. Expect: Only occurrences of the new signature.
rg --type rust -A 5 $'Sql::new'
```

Length of output: 2280

Script:

```bash
#!/bin/bash
# Description: Verify all function calls to `Sql::new` match the new signature.
# Test: Search for the function usage. Expect: Only occurrences of the new signature.
rg --type rust -A 5 'Sql::new'
```

Length of output: 7515
crates/torii/core/src/sql/query_queue.rs (1)
8-8: Ohayo sensei! The new import looks good.
The `felt_to_sql_string` function from the `utils` module will likely be used to convert Felt values, such as token IDs or addresses, into SQL-friendly string representations. This is a crucial step in indexing ERC20 and ERC721 tokens, as it allows for efficient storage and retrieval of token-related data in the database. By converting Felt values to SQL strings, you can streamline the process of constructing SQL queries and managing data interactions with the database when dealing with token indexing. Keep up the great work, sensei!
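For intuition, a sketch of what such a helper typically does. The zero-padded hex format and the `u128` stand-in for `Felt` are assumptions for illustration, not the actual implementation of `felt_to_sql_string`:

```rust
// Render a field element as fixed-width hex so that lexicographic string
// comparison in SQLite agrees with numeric ordering of the values.
fn felt_to_sql_string(value: u128) -> String {
    format!("{value:#034x}") // "0x" prefix + 32 zero-padded hex digits
}

fn main() {
    assert_eq!(felt_to_sql_string(255), "0x000000000000000000000000000000ff");
    // Padding keeps string order consistent with numeric order.
    assert!(felt_to_sql_string(9) < felt_to_sql_string(10));
}
```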
crates/torii/graphql/src/tests/mod.rs (3)
1-1: Ohayo sensei! The import looks good to me.
The `HashMap` import from `std::collections` is valid and necessary for later usage.
354-354: Ohayo sensei! The `Sql` instantiation changes look good.
The additional parameter, a reference to a new `HashMap`, suggests improved configurability for the `Sql` instance. This could enable more flexible state management or data handling. The changes are valid and do not introduce any apparent issues.
363-365: Ohayo sensei! The processor and `fetch_range` changes look valid, but please verify the performance impact.
The shift from using `Arc::new` to `Box::new` for processor instances indicates a change in ownership semantics and memory management. While this is a valid approach, it's worth verifying if this change has any unintended performance implications, considering the trade-offs between reference counting and heap allocation. Additionally, passing a new `HashMap` reference to `engine.fetch_range` aligns with the earlier modifications to `Sql::new`, maintaining consistency. Overall, the changes appear correct, but it's prudent to assess the performance impact of the processor memory management change.
Also applies to: 373-373, 377-377
crates/torii/core/src/sql/test.rs (2)
131-131: Ohayo sensei! The code changes look good to me.
The update to the `Sql::new` function call, including the new parameter of a reference to an empty `HashMap`, is a valid change that ensures a consistent empty state is passed to the function during initialization.
291-291: Ohayo sensei! The code changes look good to me.
The updates to the `Sql::new` function calls in both `test_load_from_remote_del` and `test_update_with_set_record` functions, including the new parameter of a reference to an empty `HashMap`, are valid changes that ensure a consistent empty state is passed to the function during initialization.
Also applies to: 381-381
crates/torii/graphql/src/tests/subscription_test.rs (2)
3-3: Ohayo sensei! The `HashMap` import looks good to me.
The `HashMap` type is a useful data structure for storing key-value pairs and is commonly used in Rust projects. The import statement follows Rust's naming conventions and is valid.
25-25: Ohayo sensei! The `Sql` instantiation with the new `HashMap` parameter looks good.
The additional `&HashMap::new()` parameter allows for more flexibility in configuring the `Sql` object. This change suggests that the `Sql` object now supports additional configuration or metadata through the `HashMap` parameter. To ensure consistency, please run the following script to verify that all instantiations of the `Sql` object have been updated with the new parameter:
Verification successful
Ohayo sensei! The `Sql::new` update is consistently applied across the codebase. Sugoi!
All instantiations of `Sql::new` now include the new `HashMap` parameter, either as `&HashMap::new()` in test files or as `&erc_contracts` in the main application. This consistency ensures that the change is properly implemented throughout the project.
Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Verify all instantiations of the `Sql` object include the new `HashMap` parameter.
# Test: Search for the `Sql::new` method usage. Expect: All occurrences to include the `HashMap` parameter.
rg --type rust $'Sql::new\(' -A 1
```

Length of output: 2776
crates/torii/core/src/engine.rs (9)
Line range hint 204-226: Ohayo sensei! The changes to `fetch_data` look good to me.
The function correctly handles fetching data for different scenarios based on the cursor head and latest block number. The logic is sound and the implementation is clean.
Line range hint 374-388: Ohayo sensei! The `process` function looks good to me.
It correctly dispatches the processing based on the type of fetched data. The logic is straightforward and easy to understand.
Line range hint 390-452: Ohayo sensei! The `process_pending` function looks solid.
It correctly handles the processing of pending transactions, skipping the ones that have already been processed based on the cursor. The error handling for processing pending transactions is handled appropriately, with specific handling for the "TransactionHashNotFound" error. The function also processes parallelized events and updates the database with the processed data.
Line range hint 454-493: Ohayo sensei! The `process_range` function looks great.
It correctly processes transactions and blocks for a given range, handling the associated events and updating the database accordingly. Processing parallelized events and updating the database with the processed data is also handled correctly. The database is executed in batches, which is a good performance optimization.
Line range hint 495-534: Ohayo sensei! The `process_tasks` function is looking good.
Using a semaphore to limit the number of concurrent tasks based on the configuration is a great way to control the parallelization. Spawning a task for each batch of parallelized events and processing them concurrently significantly improves performance. The error handling for processing parallelized events is handled correctly, with errors being logged. Merging the results from each task into the main database ensures data consistency.
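A self-contained sketch of this semaphore-plus-`JoinSet` pattern, using Tokio (which the engine already builds on); the task body is a stand-in for processing one batch of events:

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;
use tokio::task::JoinSet;

#[tokio::main]
async fn main() {
    // At most four batches are processed concurrently.
    let semaphore = Arc::new(Semaphore::new(4));
    let mut set = JoinSet::new();

    for task_id in 0u64..16 {
        let semaphore = semaphore.clone();
        set.spawn(async move {
            // Holding the permit for the task's lifetime enforces the cap.
            let _permit = semaphore.acquire().await.unwrap();
            task_id * 2 // stand-in for "process events, return local result"
        });
    }

    // Collect results as tasks finish, akin to merging each local db.
    let mut total = 0;
    while let Some(result) = set.join_next().await {
        total += result.unwrap();
    }
    assert_eq!(total, (0u64..16).map(|i| i * 2).sum::<u64>());
}
```

Note that the cap only holds if the permit is acquired inside the task and held for its full duration (or acquired before spawning, as the earlier review comment suggests).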
Line range hint 585-621: Ohayo sensei! The `process_transaction_with_receipt` function looks solid.
It correctly processes each event associated with the transaction if it belongs to the world contract or ERC tokens. Updating the cursor map for each processed event ensures proper tracking of the event processing progress. Calling the `process_event` function to process each event with the necessary parameters is handled correctly.
Line range hint 667-749: Ohayo sensei! The `process_event` function is looking great.
It correctly retrieves the appropriate event processors based on the event key and falls back to the catch-all event processor if no specific processor is found. This ensures that all events are handled, even if they don't have a dedicated processor. Parallelizing the processing of certain events based on a task identifier is a smart optimization for improving performance for specific event types. Processing the event using the retrieved event processors and handling errors with appropriate logging is implemented correctly.
752-777: Ohayo sensei! The `get_all_events` function looks great.
It correctly fetches events in chunks using pagination until all events are retrieved. Using a loop to continuously fetch event pages until the continuation token is `None` ensures that all events are fetched. Returning the contract address and the list of fetched event pages provides the necessary data for further processing.
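A runnable sketch of that pagination loop, with a local stub standing in for the provider's paged event query (the real code calls the Starknet JSON-RPC `get_events` method):

```rust
// Stub of a paged events response: a chunk of events plus an optional
// continuation token pointing at the next page.
struct EventsPage {
    events: Vec<u64>,
    continuation_token: Option<String>,
}

// Stub provider: serves 25 events in fixed-size chunks.
fn get_events(token: Option<&str>, chunk_size: usize) -> EventsPage {
    let start: usize = token.map_or(0, |t| t.parse().unwrap());
    let end = (start + chunk_size).min(25);
    EventsPage {
        events: (start as u64..end as u64).collect(),
        continuation_token: if end < 25 { Some(end.to_string()) } else { None },
    }
}

fn main() {
    let mut all_events = Vec::new();
    let mut token: Option<String> = None;
    loop {
        let page = get_events(token.as_deref(), 10);
        all_events.extend(page.events);
        match page.continuation_token {
            Some(next) => token = Some(next), // more pages remain
            None => break,                    // all events retrieved
        }
    }
    assert_eq!(all_events.len(), 25);
}
```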
112-113: Ohayo sensei! The changes to the `Engine` struct look good.
Adding the `tokens` field to store the ERC tokens to index is a useful addition. It allows the engine to focus on specific tokens of interest. The struct contains all the necessary fields and provides methods for the engine to function properly, including starting the engine, fetching data, processing data, and handling various types of events and transactions.
crates/torii/libp2p/src/tests.rs (1)
Line range hint 527-563: Ohayo sensei! The changes to the `test_client_messaging` function look great!
The test function thoroughly verifies the client messaging functionality by setting up the necessary components, registering the `Message` model, publishing a signed message, and waiting for it to be received within a timeout. A few additional insights:
- The use of an in-memory SQLite database ensures a clean and isolated testing environment.
- The Katana sequencer and JSON-RPC client provide a realistic setup for testing the messaging system.
- The test covers the essential aspects of message publishing, subscription, and reception.
- The timeout mechanism ensures that the test fails if the message is not received within the expected time frame.
Overall, this test function enhances the reliability and correctness of the client messaging system. Keep up the great work, sensei!
crates/torii/core/src/sql/mod.rs (2)
37-37: Ohayo sensei! Test module path update is acceptable
Changing the test module's path to `test.rs` improves consistency. Ensure that all tests are still recognized and executed by the test runner.
125-132: Ohayo sensei! Update all calls to `head()` to match the new signature
The `head` method now requires a `contract` parameter. Ensure that all calls to this method throughout the codebase have been updated to include the new parameter to prevent potential runtime errors. To ensure that all calls are updated, you can run the following script:
Verification successful
Ohayo sensei! Let's dig a bit deeper to make sure we're not missing anything. I'll run a couple more searches to verify the `head()` method calls.
Ohayo sensei! I've found some interesting information, but let's double-check to make sure we're not missing anything important.
Ohayo sensei! After our thorough investigation, I have some good news for you!
The `head()` method call has been correctly updated
We found one occurrence of the `head()` method being called in `crates/torii/core/src/engine.rs`, and it's already using the correct signature with a parameter:

```rust
let (head, _, _) = self.db.head(self.world.address).await?;
```

This call matches the new signature we were looking for. We didn't find any other calls to the `head()` method in the codebase that would need updating. However, I couldn't locate the actual method definition for `head()`. While this is a bit unusual, it doesn't affect our conclusion about the method calls being up-to-date. To be extra thorough, you might want to manually verify the `head()` method definition and check if there are any other calls to this method that our search might have missed. But based on our findings, it appears that the necessary updates have been made.
Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Find all calls to `head()` that need to be updated.
rg --type rust $'\.head\(\)'
```

Length of output: 29

Script:

```bash
#!/bin/bash
# Search for any occurrences of 'head' as a method call
rg --type rust '\w+\.head\b'
# Search for the 'head' method definition
ast-grep --lang rust --pattern 'fn head($$$) $$$'
```

Length of output: 605

Script:

```bash
#!/bin/bash
# Check the context of the head() call we found
rg --type rust -C 5 'self\.db\.head\(self\.world\.address\)'
# Search for the head method definition with a more specific pattern
ast-grep --lang rust --pattern 'pub async fn head(&self, contract: Felt) -> Result<$$$>'
```

Length of output: 966
Force-pushed from 83ac6d1 to 3414843.
Actionable comments posted: 1
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
Files selected for processing (26)
- bin/torii/src/main.rs (7 hunks)
- bin/torii/torii.toml (1 hunks)
- crates/torii/core/Cargo.toml (1 hunks)
- crates/torii/core/src/engine.rs (24 hunks)
- crates/torii/core/src/lib.rs (0 hunks)
- crates/torii/core/src/processors/erc20_legacy_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc20_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc721_transfer.rs (1 hunks)
- crates/torii/core/src/processors/mod.rs (2 hunks)
- crates/torii/core/src/processors/store_set_record.rs (1 hunks)
- crates/torii/core/src/sql/erc.rs (1 hunks)
- crates/torii/core/src/sql/mod.rs (9 hunks)
- crates/torii/core/src/sql/query_queue.rs (1 hunks)
- crates/torii/core/src/sql/test.rs (5 hunks)
- crates/torii/core/src/sql/utils.rs (1 hunks)
- crates/torii/core/src/types.rs (2 hunks)
- crates/torii/graphql/src/tests/metadata_test.rs (3 hunks)
- crates/torii/graphql/src/tests/mod.rs (3 hunks)
- crates/torii/graphql/src/tests/subscription_test.rs (6 hunks)
- crates/torii/grpc/src/server/tests/entities_test.rs (3 hunks)
- crates/torii/libp2p/src/server/mod.rs (1 hunks)
- crates/torii/libp2p/src/tests.rs (2 hunks)
- crates/torii/migrations/20240913104418_add_erc.sql (1 hunks)
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql (1 hunks)
- scripts/deploy_erc20_katana.sh (1 hunks)
- scripts/send_erc20_transfer.sh (1 hunks)
Files not reviewed due to no reviewable changes (1)
- crates/torii/core/src/lib.rs
Files skipped from review due to trivial changes (1)
- bin/torii/torii.toml
Files skipped from review as they are similar to previous changes (22)
- crates/torii/core/Cargo.toml
- crates/torii/core/src/processors/erc20_legacy_transfer.rs
- crates/torii/core/src/processors/erc20_transfer.rs
- crates/torii/core/src/processors/erc721_transfer.rs
- crates/torii/core/src/processors/mod.rs
- crates/torii/core/src/processors/store_set_record.rs
- crates/torii/core/src/sql/erc.rs
- crates/torii/core/src/sql/mod.rs
- crates/torii/core/src/sql/query_queue.rs
- crates/torii/core/src/sql/test.rs
- crates/torii/core/src/sql/utils.rs
- crates/torii/core/src/types.rs
- crates/torii/graphql/src/tests/metadata_test.rs
- crates/torii/graphql/src/tests/mod.rs
- crates/torii/graphql/src/tests/subscription_test.rs
- crates/torii/grpc/src/server/tests/entities_test.rs
- crates/torii/libp2p/src/server/mod.rs
- crates/torii/libp2p/src/tests.rs
- crates/torii/migrations/20240913104418_add_erc.sql
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql
- scripts/deploy_erc20_katana.sh
- scripts/send_erc20_transfer.sh
Additional comments not posted (12)
crates/torii/core/src/engine.rs (12)
9-9: Ohayo sensei! The `join_all` import looks good to me.
This import is necessary for the changes made in the `fetch_range` function to concurrently fetch events from multiple contracts.
12-14: Ohayo sensei! The additional imports from `starknet::core::types` are looking good.
These imports are necessary for the changes made to the `Engine` struct and related functions to support indexing ERC tokens.
26-27: Ohayo sensei! The imports from `crate::sql` and `crate::types` are looking good.
These imports are necessary for the changes made to the `Engine` struct and related functions to support indexing ERC tokens and managing database cursors.
33-33: Ohayo sensei! The change to the `event` field in the `Processors` struct is a great enhancement.
Allowing multiple event processors to be associated with a single event key improves the flexibility of event handling. This enables more complex processing scenarios where different processors can handle the same event type in different ways.
82-82: Ohayo sensei! The comment about the `LinkedList` order is a helpful addition.
The comment clarifies that the blocks in the `LinkedList` might not be in a specific order. This information is valuable for developers working with the `FetchRangeResult` struct to set the right expectations about the block order.
112-113: Ohayo sensei! The addition of the `tokens` field to the `Engine` struct is a key change.
The `tokens` field allows the engine to store and manage the ERC tokens that need to be indexed. This is a fundamental requirement for the new functionality of indexing specific ERC tokens.
Line range hint 123-132: Ohayo sensei! The modification to the `Engine` constructor to accept the `tokens` parameter is a necessary change.
By accepting the `tokens` parameter, the constructor allows the caller to provide the ERC tokens that need to be indexed. This ensures that the engine is initialized with the correct set of tokens to index.
Line range hint 204-298: Ohayo sensei! The changes in the `fetch_range` function to fetch events concurrently are a significant performance improvement.
By creating separate tasks for fetching events from each contract and using `join_all` to execute them concurrently, the function can efficiently retrieve events from multiple contracts in parallel. This can greatly reduce the overall time required to fetch events, especially when dealing with a large number of contracts. The code segment also handles the processing of the fetched events, ensuring that already processed events are filtered out based on the last processed transaction for each contract. This prevents duplicate processing of events. Overall, these changes enhance the performance and correctness of the event fetching process.
Line range hint 397-448: Ohayo sensei! The modifications to the `process_pending` function enhance the processing of pending transactions and their events.
The function now iterates over the transactions in the pending block and processes each transaction using the `process_transaction_with_receipt` function. This ensures that each transaction is properly handled and its events are processed individually. The `last_pending_block_tx` and `cursor_map` variables are updated based on the processed transactions, allowing the function to keep track of the last processed transaction and the cursor for each contract. This is important for maintaining the correct state and avoiding duplicate processing. Error handling is also implemented to handle scenarios where a transaction receipt is not found. In such cases, the function sets the appropriate cursors and returns early, preventing the processing from being halted. These changes improve the robustness and reliability of the pending transaction processing logic.
Line range hint 457-478: Ohayo sensei! The updates to the `process_range` function enhance the processing of blocks and their transactions.
The function now iterates over the transactions in the `data.transactions` map and processes each transaction using the `process_transaction_with_events` function. This ensures that each transaction and its associated events are properly handled. Additionally, the function processes each block using the `process_block` function. To avoid processing the same block multiple times, it maintains a `processed_blocks` set. This optimization prevents unnecessary duplicate processing of blocks. If the `block_tx` channel is provided, the function sends the block number through the channel. This allows other parts of the system to be notified about the processed blocks, enabling further actions or synchronization based on the processed blocks. These changes improve the efficiency and correctness of the block and transaction processing logic.
489-489: Ohayo sensei! The modification to reset the cursors in the `process_range` function is a necessary update.
After processing a range of blocks, it is important to reset the cursors to reflect the latest processed block. By calling the `reset_cursors` method of the `db` object with the `latest_block_number`, the function ensures that the cursors are properly updated. This change maintains the consistency between the actual progress of block processing and the cursors stored in the database. It prevents any discrepancies and keeps the system in a valid state.
512-521: Ohayo sensei! The updates to the `process_tasks` function for handling parallelized events are a great addition.
Processing parallelized events allows for efficient utilization of system resources by leveraging multiple event processors concurrently. This can significantly improve the throughput and performance of event processing. The function iterates over the `processors.event` map to retrieve the event processors associated with each event key. This ensures that each event is processed by its corresponding event processors, enabling customized processing logic for different event types. By calling the `process` method on each event processor, the function triggers the specific processing logic defined for each event type. This flexibility allows for tailored handling of events based on their characteristics. Furthermore, the function logs any errors that occur during event processing using the `error` macro. This aids in identifying and debugging issues, making it easier to diagnose and fix problems in the event processing pipeline. Overall, these changes enhance the efficiency, customization, and observability of parallelized event processing.
bin/torii/src/main.rs (Outdated)

```rust
let contract_address = Felt::from_str(address).unwrap();
let start_block = start_block.parse::<u64>()?;
```
Ohayo sensei! Let's handle those parsing errors gracefully.
Using `unwrap` when parsing the contract address can cause the application to panic if an invalid address is provided. It's better to propagate the error using the `?` operator for a smoother experience.
Here's a suggestion:
```diff
- let contract_address = Felt::from_str(address).unwrap();
+ let contract_address = Felt::from_str(address)?;
```
Apply the same change to line 365 for consistency.
Also applies to: 365-366
Usually, ERC-1155 tokens are used as consumables. A single token can have multiple copies owned by multiple accounts.
For now, due to the legacy format of the tokens, we don't know whether a transfer is ERC721 or ERC20, as only the selector of the event is used. We should see how complex it is to leverage the token type (`world`/`erc20`/`erc721`) to actually scope the map of processors.
and don't forget that
@rsodre thanks for pointing this out, we already handle this.
Force-pushed from 3414843 to 49e97bd.
Actionable comments posted: 5
Outside diff range and nitpick comments (6)
crates/torii/core/src/engine.rs (2)
Line range hint 706-783: Ohayo, sensei! Nice improvement, but let's make it more flexible!
The changes to handle multiple processors per event key and the introduction of task identifiers for parallelization are great improvements. However, the task identifier calculation seems to be hardcoded for specific event types. Consider implementing a more generic approach for task identification:

```diff
-let task_identifier = if processors.len() == 1 {
-    match processors[0].event_key().as_str() {
-        "StoreSetRecord" | "StoreUpdateRecord" | "StoreUpdateMember" | "StoreDelRecord" => {
-            let mut hasher = DefaultHasher::new();
-            event.data[0].hash(&mut hasher);
-            event.data[1].hash(&mut hasher);
-            hasher.finish()
-        }
-        _ => 0,
-    }
-} else {
-    0
-};
+let task_identifier = if let Some(processor) = processors.first() {
+    processor.calculate_task_identifier(event)
+} else {
+    0
+};
```

This approach would require adding a `calculate_task_identifier` method to the `EventProcessor` trait, allowing each processor to define its own logic for task identification.
Line range hint 1-816: Ohayo, sensei! Overall, these changes are sugoi!
The implementation of indexing for whitelisted ERC20 and ERC721 tokens, along with the improvements in event processing and concurrency, align perfectly with the PR objectives. These changes should significantly enhance the Torii framework's capabilities and performance. To further improve the architecture, consider implementing a configuration system for ERC token indexing. This could allow for easier addition or removal of supported token standards in the future, including ERC-1155 as suggested in the PR comments. For example, you could create an `ErcIndexingConfig` struct:

```rust
pub struct ErcIndexingConfig {
    pub erc20_enabled: bool,
    pub erc721_enabled: bool,
    pub erc1155_enabled: bool,
    // Add more fields for future token standards
}
```

This configuration could be passed to the `Engine` constructor, allowing for more flexible and extensible token indexing in the future.
crates/torii/core/src/sql/mod.rs (4)
49-55: Ohayo sensei! New `Cursors` struct looks useful, but needs docs!
The addition of the `Cursors` struct is a good way to manage indexing state. It seems well-designed for handling multiple contracts or tokens. Consider adding documentation to explain the purpose and usage of the `Cursors` struct. For example:

```rust
/// Represents the current state of indexing operations.
///
/// `cursor_map`: Maps contract addresses to their last processed transaction.
/// `last_pending_block_tx`: The last pending transaction in the current block.
/// `head`: The current head of the chain being processed.
#[derive(Debug, Clone)]
pub struct Cursors {
    // ... (existing fields)
}
```
Line range hint 68-98: Ohayo sensei! Great addition for ERC contract handling!
The modification of the `new` function to accept and process `erc_contracts` is an excellent enhancement that supports the PR's objective. The logic for inserting ERC contracts into the database is well-implemented. Consider using an `insert_or_ignore_contract` helper function to reduce code duplication:

```rust
fn insert_or_ignore_contract(&mut self, contract_address: Felt, contract_type: &str) {
    self.query_queue.enqueue(
        "INSERT OR IGNORE INTO contracts (id, contract_address, contract_type) VALUES (?, ?, ?)",
        vec![
            Argument::FieldElement(contract_address),
            Argument::FieldElement(contract_address),
            Argument::String(contract_type.to_string()),
        ],
        QueryType::Other,
    );
}

// Usage:
self.insert_or_ignore_contract(world_address, WORLD_CONTRACT_TYPE);
for contract in erc_contracts.values() {
    self.insert_or_ignore_contract(contract.contract_address, &contract.r#type.to_string());
}
```

This refactoring would make the code more DRY and easier to maintain.
Line range hint 125-141: Ohayo sensei! Good update to `head` function, but let's improve error handling!
The modification of the `head` function to accept a `contract` parameter is a great improvement. It allows for more flexible querying of contract-specific information, which is crucial for handling multiple ERC contracts. However, the error handling could be improved. Consider replacing the `expect` calls with proper error handling:

```rust
pub async fn head(&self, contract: Felt) -> Result<(u64, Option<Felt>, Option<Felt>)> {
    // ... (existing code)
    Ok((
        indexer
            .0
            .map(|h| h.try_into())
            .transpose()
            .map_err(|_| anyhow!("Head value doesn't fit in u64"))?
            .unwrap_or(0),
        indexer.1.map(|f| Felt::from_str(&f)).transpose()?,
        indexer.2.map(|f| Felt::from_str(&f)).transpose()?,
    ))
}
```

This change will propagate errors instead of panicking, making the function more robust and easier to debug.
Line range hint 143-185: Ohayo sensei! New setter functions look great, but let's handle errors better!
The addition of `set_head`, `set_last_pending_block_contract_tx`, and `set_last_pending_block_tx` functions provides a clean way to update contract state in the database. Using the query queue for these operations is a good choice for performance. However, the use of `expect` in these functions could lead to panics. Consider using `Result` for better error handling:

```rust
pub fn set_head(&mut self, contract: Felt, head: u64) -> Result<()> {
    let head = Argument::Int(head.try_into().map_err(|_| anyhow!("Head value doesn't fit in i64"))?);
    let id = Argument::FieldElement(contract);
    self.query_queue.enqueue(
        "UPDATE contracts SET head = ? WHERE id = ?",
        vec![head, id],
        QueryType::Other,
    );
    Ok(())
}
```

Apply similar changes to the other new functions. This approach will make the functions more robust and easier to use in error-handling contexts.
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
Files selected for processing (26)
- bin/torii/src/main.rs (7 hunks)
- bin/torii/torii.toml (1 hunks)
- crates/torii/core/Cargo.toml (1 hunks)
- crates/torii/core/src/engine.rs (24 hunks)
- crates/torii/core/src/lib.rs (0 hunks)
- crates/torii/core/src/processors/erc20_legacy_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc20_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc721_transfer.rs (1 hunks)
- crates/torii/core/src/processors/mod.rs (2 hunks)
- crates/torii/core/src/processors/store_set_record.rs (1 hunks)
- crates/torii/core/src/sql/erc.rs (1 hunks)
- crates/torii/core/src/sql/mod.rs (9 hunks)
- crates/torii/core/src/sql/query_queue.rs (1 hunks)
- crates/torii/core/src/sql/test.rs (5 hunks)
- crates/torii/core/src/sql/utils.rs (1 hunks)
- crates/torii/core/src/types.rs (2 hunks)
- crates/torii/graphql/src/tests/metadata_test.rs (3 hunks)
- crates/torii/graphql/src/tests/mod.rs (3 hunks)
- crates/torii/graphql/src/tests/subscription_test.rs (6 hunks)
- crates/torii/grpc/src/server/tests/entities_test.rs (3 hunks)
- crates/torii/libp2p/src/server/mod.rs (1 hunks)
- crates/torii/libp2p/src/tests.rs (2 hunks)
- crates/torii/migrations/20240913104418_add_erc.sql (1 hunks)
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql (1 hunks)
- scripts/deploy_erc20_katana.sh (1 hunks)
- scripts/send_erc20_transfer.sh (1 hunks)
Files not reviewed due to no reviewable changes (1)
- crates/torii/core/src/lib.rs
Files skipped from review as they are similar to previous changes (23)
- bin/torii/src/main.rs
- bin/torii/torii.toml
- crates/torii/core/Cargo.toml
- crates/torii/core/src/processors/erc20_legacy_transfer.rs
- crates/torii/core/src/processors/erc20_transfer.rs
- crates/torii/core/src/processors/erc721_transfer.rs
- crates/torii/core/src/processors/mod.rs
- crates/torii/core/src/processors/store_set_record.rs
- crates/torii/core/src/sql/erc.rs
- crates/torii/core/src/sql/query_queue.rs
- crates/torii/core/src/sql/test.rs
- crates/torii/core/src/sql/utils.rs
- crates/torii/core/src/types.rs
- crates/torii/graphql/src/tests/metadata_test.rs
- crates/torii/graphql/src/tests/mod.rs
- crates/torii/graphql/src/tests/subscription_test.rs
- crates/torii/grpc/src/server/tests/entities_test.rs
- crates/torii/libp2p/src/server/mod.rs
- crates/torii/libp2p/src/tests.rs
- crates/torii/migrations/20240913104418_add_erc.sql
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql
- scripts/deploy_erc20_katana.sh
- scripts/send_erc20_transfer.sh
Additional comments not posted (5)
crates/torii/core/src/engine.rs (3)
`124-124`: Sugoi change, sensei! LGTM!

The addition of the `tokens` field to the `Engine` struct aligns perfectly with the PR objective of implementing indexing for whitelisted ERC20 and ERC721 tokens. This will enable efficient tracking of these tokens within the Torii framework.
Line range hint `134-143`: Ohayo, sensei! This change looks great!

The update to the `Engine::new` method signature to include the `tokens` parameter is consistent with the addition of the `tokens` field to the `Engine` struct. This ensures that the ERC token information is properly initialized when creating a new `Engine` instance.
`34-34`: Ohayo, sensei! LGTM, but let's keep an eye on performance.

The change to allow multiple event processors per event key is a good improvement in flexibility. However, we should monitor the performance impact of using `Vec<Box<dyn EventProcessor<P>>>` instead of `Arc<dyn EventProcessor<P>>`.

Verification successful

Ohayo, sensei! The previous search didn't find any performance-related comments. To ensure there are no hidden performance issues, let's investigate the usage patterns of `Box<dyn EventProcessor<P>>`.

Ohayo, sensei! The search didn't find any performance-related TODOs, and the use of `Box<dyn EventProcessor<P>>` is consistent across the codebase. This change enhances flexibility without introducing immediate performance issues. Let's keep an eye on performance as the system evolves.

Scripts executed
The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Check for any performance-related comments or TODOs
rg --type rust "TODO.*performance|PERF:" crates/torii/core/src/
```

Length of output: 65

```bash
#!/bin/bash
# Search for usage patterns that might impact performance
rg --type rust 'Box<dyn\s+EventProcessor<P>>' crates/torii/core/src/
```

Length of output: 473
crates/torii/core/src/sql/mod.rs (2)
Line range hint `1-24`: Ohayo sensei! New imports align with PR objective.

The addition of `HashMap` and `ErcContract` imports, along with the expanded `QueryQueue` import, nicely supports the PR's goal of implementing indexing for whitelisted ERC20 and ERC721 tokens. These changes set the stage for the new functionality.

`34-39`: Ohayo sensei! New `erc` module added, don't forget tests!

The addition of the `erc` module is great and aligns with the PR objective. The test module path update is also noted.

To ensure proper test coverage for the new `erc` module, please run the following script:
crates/torii/core/src/engine.rs (Outdated)
```rust
async fn get_all_events<P>(
    provider: &P,
    events_filter: EventFilter,
    events_chunk_size: u64,
) -> Result<(Option<Felt>, Vec<EventsPage>)>
where
    P: Provider + Sync,
{
    let mut events_pages = Vec::new();
    let mut continuation_token = None;

    loop {
        let events_page = provider
            .get_events(events_filter.clone(), continuation_token.clone(), events_chunk_size)
            .await?;

        continuation_token = events_page.continuation_token.clone();
        events_pages.push(events_page);

        if continuation_token.is_none() {
            break;
        }
    }

    Ok((events_filter.address, events_pages))
}
```
Sugoi pagination handling, sensei! Let's add some robustness!

The `get_all_events` function effectively handles event pagination, which is crucial for processing large numbers of events. However, we can improve it with some additional error handling and logging.
Consider adding error handling and logging:

```diff
 async fn get_all_events<P>(
     provider: &P,
     events_filter: EventFilter,
     events_chunk_size: u64,
 ) -> Result<(Option<Felt>, Vec<EventsPage>)>
 where
     P: Provider + Sync,
 {
     let mut events_pages = Vec::new();
     let mut continuation_token = None;
+    let mut attempt = 0;
+    const MAX_ATTEMPTS: u32 = 3;

     loop {
-        let events_page = provider
-            .get_events(events_filter.clone(), continuation_token.clone(), events_chunk_size)
-            .await?;
+        let events_page = match provider
+            .get_events(events_filter.clone(), continuation_token.clone(), events_chunk_size)
+            .await
+        {
+            Ok(page) => page,
+            Err(e) if attempt < MAX_ATTEMPTS => {
+                warn!(target: LOG_TARGET, error = %e, attempt = attempt, "Error fetching events, retrying");
+                attempt += 1;
+                continue;
+            }
+            Err(e) => return Err(e.into()),
+        };
+
+        attempt = 0;
         continuation_token = events_page.continuation_token.clone();
         events_pages.push(events_page);

         if continuation_token.is_none() {
+            info!(target: LOG_TARGET, "Fetched all events for filter: {:?}", events_filter);
             break;
         }
     }

     Ok((events_filter.address, events_pages))
 }
```

This modification adds retry logic for transient errors and improves logging, making the function more robust and easier to debug.
crates/torii/core/src/engine.rs (Outdated)
```rust
let mut fetch_all_events_tasks = vec![];

let world_events_pages =
    get_all_events(&self.provider, world_events_filter, self.config.events_chunk_size);

fetch_all_events_tasks.push(world_events_pages);

for token in self.tokens.iter() {
    let events_filter = EventFilter {
        from_block: Some(BlockId::Number(from)),
        to_block: Some(BlockId::Number(to)),
        address: Some(*token.0),
        keys: None,
    };
    let token_events_pages =
        get_all_events(&self.provider, events_filter, self.config.events_chunk_size);

    fetch_all_events_tasks.push(token_events_pages);
}

let task_result = join_all(fetch_all_events_tasks).await;

let mut events = vec![];

for result in task_result {
    let result = result?;
    let contract_address =
        result.0.expect("EventFilters that we use always have an address");
    let events_pages = result.1;
    let last_contract_tx = cursor_map.get(&contract_address).cloned();
    let mut last_contract_tx_tmp = last_contract_tx;
    debug!(target: LOG_TARGET, "Total events pages fetched for contract ({:#x}): {}", &contract_address, &events_pages.len());

    for events_page in events_pages {
        debug!("Processing events page with events: {}", &events_page.events.len());
        for event in events_page.events {
            // Then we skip all transactions until we reach the last pending processed
            // transaction (if any)
            if let Some(last_contract_tx) = last_contract_tx_tmp {
                if event.transaction_hash != last_contract_tx {
                    continue;
                }

                last_contract_tx_tmp = None;
            }

            // Skip the latest pending block transaction events
            // * as we might have multiple events for the same transaction
            if let Some(last_contract_tx) = last_contract_tx {
                if event.transaction_hash == last_contract_tx {
                    continue;
                }
            }

            events.push(event);
        }
    }
}
```
Sugoi optimization, sensei! But let's make it even better!
The concurrent fetching of events for both the world contract and ERC tokens is a great performance improvement. However, we should consider enhancing the error handling for individual tasks.
Consider refactoring the error handling to process results individually:
```diff
-let task_result = join_all(fetch_all_events_tasks).await;
-
-let mut events = vec![];
-
-for result in task_result {
-    let result = result?;
+let task_results = join_all(fetch_all_events_tasks).await;
+
+let mut events = vec![];
+
+for result in task_results {
+    let result = match result {
+        Ok(r) => r,
+        Err(e) => {
+            error!(target: LOG_TARGET, error = %e, "Error fetching events");
+            continue;
+        }
+    };
     // ... rest of the code
```
This way, if one task fails, it won't prevent processing events from other successful tasks.
49e97bd to ade93c2 (Compare)
Actionable comments posted: 4
Outside diff range and nitpick comments (7)
bin/torii/src/main.rs (2)
`161-178`: Ohayo sensei! The main function enhancements look solid!

The new logic for handling ERC contracts and configuration file loading adds great flexibility to the application. The determination of the start block based on the minimum start block of ERC contracts is a smart approach to ensure comprehensive indexing.

However, we could improve the error handling when loading the configuration file. Consider adding error context:

```diff
 let mut config = if let Some(path) = args.config {
-    ToriiConfig::load_from_path(&path)?
+    ToriiConfig::load_from_path(&path).context("Failed to load configuration file")?
 } else {
     ToriiConfig::default()
 };
```

This change will provide more informative error messages if the configuration file fails to load.
`219-225`: Ohayo sensei! The Engine initialization looks more powerful now!

The changes to include ERC contract support in the `Sql::new` function and the addition of ERC20 and ERC721 transfer processors are excellent enhancements. These modifications will allow the application to handle a wider range of token transfers effectively.

To improve code readability, consider extracting the processor initialization into a separate function (returning `Result` so the `?` inside still propagates errors):

```rust
fn create_processors() -> Result<Processors> {
    Ok(Processors {
        event: generate_event_processors_map(vec![
            Box::new(RegisterModelProcessor),
            Box::new(StoreSetRecordProcessor),
            Box::new(MetadataUpdateProcessor),
            Box::new(StoreDelRecordProcessor),
            Box::new(EventMessageProcessor),
            Box::new(StoreUpdateRecordProcessor),
            Box::new(StoreUpdateMemberProcessor),
            Box::new(Erc20LegacyTransferProcessor),
            Box::new(Erc20TransferProcessor),
            Box::new(Erc721TransferProcessor),
        ])?,
        transaction: vec![Box::new(StoreTransactionProcessor)],
        ..Processors::default()
    })
}
```

Then, in the main function:

```rust
let processors = create_processors()?;
```

This change will make the main function more concise and easier to read.
Also applies to: 229-238, 269-269
crates/torii/core/src/engine.rs (4)

Line range hint `215-309`: Ohayo, sensei! Impressive updates to `fetch_data` and `fetch_range`!

The changes in these methods significantly enhance the engine's capabilities:

- Using `cursors.head` in `fetch_data` is a smart move, making it more versatile for handling multiple contracts.
- The concurrent fetching of world and token events in `fetch_range` is a fantastic performance optimization!

However, there's room for improvement in error handling. Consider enhancing the concurrent event fetching: currently, if one task fails, it will cause the entire operation to fail. A more robust approach would be to collect successful results and log errors for failed tasks. Here's a suggested improvement:

```diff
-let task_result = join_all(fetch_all_events_tasks).await;
-
-let mut events = vec![];
-
-for result in task_result {
-    let result = result?;
+let task_results = join_all(fetch_all_events_tasks).await;
+
+let mut events = vec![];
+
+for result in task_results {
+    match result {
+        Ok(result) => {
             let contract_address =
                 result.0.expect("EventFilters that we use always have an address");
             let events_pages = result.1;
             // ... (rest of the processing logic)
+        },
+        Err(e) => {
+            error!(target: LOG_TARGET, error = %e, "Error fetching events for a contract");
+            // Optionally, you could add a counter here to track failed tasks
+        }
+    }
 }
```

This change will allow the engine to continue processing events from successful tasks even if some tasks fail, improving the overall robustness of the system.
Line range hint `408-459`: Ohayo, sensei! Nice updates to `process_pending`!

The changes in this method improve the engine's ability to handle multiple contracts:

- The introduction of `cursor_map` is a clever way to track the last processed transaction for each contract individually.
- The updated transaction processing logic aligns well with the new cursor system.

However, there's a TODO comment that needs attention: the TODO regarding the "TransactionHashNotFound" error suggests that this error handling might be obsolete. It's important to address this to ensure the code remains clean and efficient. Would you like me to create a GitHub issue to track the removal of this error handling code once it's confirmed to be no longer relevant?

Additionally, consider adding a comment explaining why the `TransactionHashNotFound` error is expected to no longer occur, to provide context for future maintainers.
Line range hint `611-655`: Ohayo, sensei! Sugoi updates to `process_transaction_with_receipt`!

The changes in this method effectively integrate the new multi-contract handling:

- The check for events from the world contract or tracked token contracts ensures that only relevant events are processed.
- Updating the `cursor_map` for each processed event maintains accurate tracking across different contracts.

These updates align perfectly with the engine's new capabilities. However, there's a small optimization opportunity: consider hoisting the contract check to the top of the event loop, so irrelevant events are skipped before any other work:

```diff
 if let Some(events) = events {
     for (event_idx, event) in events.iter().enumerate() {
+        let is_relevant_contract = event.from_address == self.world.address
+            || self.tokens.contains_key(&event.from_address);
+
+        if !is_relevant_contract {
+            continue;
+        }
+
+        cursor_map.insert(event.from_address, *transaction_hash);
         let event_id =
             format!("{:#064x}:{:#x}:{:#04x}", block_number, *transaction_hash, event_idx);
-        if event.from_address != self.world.address
-            && !self.tokens.contains_key(&event.from_address)
-        {
-            continue;
-        }
-        cursor_map.insert(event.from_address, *transaction_hash);
         // ... rest of the processing logic
     }
 }
```

This change reduces duplicated checks and map operations, potentially improving performance for transactions with multiple events.
Line range hint `706-783`: Ohayo, sensei! Impressive enhancements to `process_event`!

The updates to this method significantly improve the engine's event processing capabilities:

- Handling multiple processors for a single event type increases flexibility and extensibility.
- The introduction of task parallelization for specific event types is a clever performance optimization.

These changes greatly enhance the engine's ability to handle complex event processing scenarios. However, there's an opportunity for improvement: the task identifier logic seems quite specific and hard-coded. Consider making it more generic and configurable:

1. Create an enum for event types that require parallelization:

```rust
enum ParallelizableEvent {
    StoreSetRecord,
    StoreUpdateRecord,
    StoreUpdateMember,
    StoreDelRecord,
    // Add other parallelizable events here
}
```

2. Implement a method to get the task identifier based on the event type:

```rust
impl ParallelizableEvent {
    fn get_task_identifier(&self, event: &Event) -> u64 {
        let mut hasher = DefaultHasher::new();
        match self {
            ParallelizableEvent::StoreSetRecord
            | ParallelizableEvent::StoreUpdateRecord
            | ParallelizableEvent::StoreUpdateMember
            | ParallelizableEvent::StoreDelRecord => {
                event.data[0].hash(&mut hasher);
                event.data[1].hash(&mut hasher);
            } // Add logic for other parallelizable events
        }
        hasher.finish()
    }
}
```

3. Update the `process_event` method to use this new structure:

```diff
-let task_identifier = if processors.len() == 1 {
-    match processors[0].event_key().as_str() {
-        "StoreSetRecord" | "StoreUpdateRecord" | "StoreUpdateMember" | "StoreDelRecord" => {
-            let mut hasher = DefaultHasher::new();
-            event.data[0].hash(&mut hasher);
-            event.data[1].hash(&mut hasher);
-            hasher.finish()
-        }
-        _ => 0,
-    }
-} else {
-    0
-};
+let task_identifier = if let Some(parallelizable_event) =
+    ParallelizableEvent::from_str(processors[0].event_key().as_str())
+{
+    parallelizable_event.get_task_identifier(event)
+} else {
+    0
+};
```

This approach makes the code more maintainable and easier to extend for future parallelizable event types.
crates/torii/core/src/sql/mod.rs (1)

Line range hint `68-98`: Ohayo sensei! Great enhancement to the `Sql::new` method!

The addition of the `erc_contracts` parameter and the logic to insert ERC contracts into the database is a valuable improvement. This change will enable better handling and tracking of ERC contracts within the system.

However, I have a small suggestion: use the `?` operator instead of `unwrap()` when executing the queries, to propagate errors more gracefully. For example:

```rust
self.query_queue.execute_all().await?;
```

This change would make the error handling more robust and consistent with Rust best practices.
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files ignored due to path filters (1)
- Cargo.lock is excluded by `!**/*.lock`
Files selected for processing (26)
- bin/torii/src/main.rs (7 hunks)
- bin/torii/torii.toml (1 hunks)
- crates/torii/core/Cargo.toml (1 hunks)
- crates/torii/core/src/engine.rs (24 hunks)
- crates/torii/core/src/lib.rs (0 hunks)
- crates/torii/core/src/processors/erc20_legacy_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc20_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc721_transfer.rs (1 hunks)
- crates/torii/core/src/processors/mod.rs (2 hunks)
- crates/torii/core/src/processors/store_set_record.rs (1 hunks)
- crates/torii/core/src/sql/erc.rs (1 hunks)
- crates/torii/core/src/sql/mod.rs (9 hunks)
- crates/torii/core/src/sql/query_queue.rs (1 hunks)
- crates/torii/core/src/sql/test.rs (5 hunks)
- crates/torii/core/src/sql/utils.rs (1 hunks)
- crates/torii/core/src/types.rs (2 hunks)
- crates/torii/graphql/src/tests/metadata_test.rs (3 hunks)
- crates/torii/graphql/src/tests/mod.rs (3 hunks)
- crates/torii/graphql/src/tests/subscription_test.rs (6 hunks)
- crates/torii/grpc/src/server/tests/entities_test.rs (3 hunks)
- crates/torii/libp2p/src/server/mod.rs (1 hunks)
- crates/torii/libp2p/src/tests.rs (2 hunks)
- crates/torii/migrations/20240913104418_add_erc.sql (1 hunks)
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql (1 hunks)
- scripts/deploy_erc20_katana.sh (1 hunks)
- scripts/send_erc20_transfer.sh (1 hunks)
Files not reviewed due to no reviewable changes (1)
- crates/torii/core/src/lib.rs
Files skipped from review as they are similar to previous changes (22)
- bin/torii/torii.toml
- crates/torii/core/Cargo.toml
- crates/torii/core/src/processors/erc20_legacy_transfer.rs
- crates/torii/core/src/processors/erc20_transfer.rs
- crates/torii/core/src/processors/erc721_transfer.rs
- crates/torii/core/src/processors/mod.rs
- crates/torii/core/src/processors/store_set_record.rs
- crates/torii/core/src/sql/erc.rs
- crates/torii/core/src/sql/query_queue.rs
- crates/torii/core/src/sql/test.rs
- crates/torii/core/src/sql/utils.rs
- crates/torii/core/src/types.rs
- crates/torii/graphql/src/tests/metadata_test.rs
- crates/torii/graphql/src/tests/mod.rs
- crates/torii/graphql/src/tests/subscription_test.rs
- crates/torii/grpc/src/server/tests/entities_test.rs
- crates/torii/libp2p/src/server/mod.rs
- crates/torii/libp2p/src/tests.rs
- crates/torii/migrations/20240913104418_add_erc.sql
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql
- scripts/deploy_erc20_katana.sh
- scripts/send_erc20_transfer.sh
Additional comments not posted (11)
bin/torii/src/main.rs (2)
`35-37`: Ohayo sensei! These imports and struct modifications look great!

The new imports for ERC-related processors and the modifications to the `Args` struct are well-aligned with the goal of adding support for ERC contracts. The changes are structured logically and integrate smoothly with the existing codebase.

Also applies to: 49-49, 148-155
Line range hint `1-391`: Ohayo sensei! Overall, these changes are a fantastic addition to the project!

The modifications to add support for ERC contracts significantly enhance the application's capabilities. The new features include:

- Flexible configuration loading
- Support for ERC20 and ERC721 token transfers
- Dynamic start block determination based on ERC contracts
- A new parsing function for ERC contract information

These changes will greatly improve the indexing capabilities of the system, making it more versatile and powerful.

While there are some minor areas for improvement in error handling and code structure, the overall implementation is solid and well-integrated with the existing codebase.

Great work, sensei! These enhancements will undoubtedly provide significant value to the project.
crates/torii/core/src/engine.rs (5)

Line range hint `1-15`: Ohayo, sensei! Sugoi improvements in concurrency and token handling!

The changes to the import statements and struct definitions show a great improvement in the engine's capabilities:

- The addition of `join_all` from `futures_util` suggests enhanced concurrent processing.
- Modifying `Processors` to use a `Vec<Box<dyn EventProcessor<P>>>` allows for multiple event processors per event type, increasing flexibility.
- The new `tokens` field in the `Engine` struct enables handling of ERC tokens, expanding the engine's functionality.

These changes lay a solid foundation for the improvements in the rest of the file. Excellent work!

Also applies to: 27-28, 34-34, 123-124
Line range hint `134-153`: Ohayo, sensei! LGTM on the `Engine::new` method changes!

The updates to the `Engine::new` method are well-implemented:

- The addition of the `tokens` parameter aligns perfectly with the struct's new field.
- Proper initialization of the `tokens` field in the struct creation ensures consistency.

These changes seamlessly integrate the new ERC token functionality into the engine's initialization process. Great job!
Line range hint `468-505`: Ohayo, sensei! Excellent optimization in `process_range`!

The updates to this method show thoughtful improvements:

- The introduction of a `HashSet` to track processed blocks is a smart optimization. It efficiently prevents duplicate processing of blocks, which could happen in complex scenarios.
- The updated logic for processing blocks integrates seamlessly with the new tracking system.

These changes contribute to the overall efficiency and robustness of the engine. Great work on optimizing this critical part of the processing pipeline!
Line range hint `1-816`: Ohayo, sensei! Overall, sugoi improvements to the engine!

This review has covered significant changes to the `engine.rs` file, which have greatly enhanced the engine's capabilities:

- Support for handling multiple contracts (world and ERC tokens) has been added.
- Concurrent processing has been introduced in several areas, improving performance.
- The event processing system has been made more flexible with support for multiple processors per event type.
- New functionality for pagination and comprehensive event fetching has been implemented.

These changes represent a substantial improvement in the engine's functionality and efficiency. The code is generally well-structured and follows good practices.

There are a few areas where further improvements could be made:

- Enhancing error handling in concurrent operations.
- Generalizing some hard-coded logic for better maintainability.
- Adding more comprehensive logging for better debugging.

Overall, great job on these updates! The engine is now more powerful and flexible, setting a solid foundation for future enhancements.
`791-816`: Ohayo, sensei! Nice addition of the `get_all_events` function!

This new function is a great abstraction for fetching all events for a given filter:

- It effectively handles event pagination, ensuring all events are fetched.
- Returning both the address and event pages is a nice touch, providing useful context to the caller.

However, there's room for some enhancements. Consider adding error handling and logging to make the function more robust:

```diff
 async fn get_all_events<P>(
     provider: &P,
     events_filter: EventFilter,
     events_chunk_size: u64,
 ) -> Result<(Option<Felt>, Vec<EventsPage>)>
 where
     P: Provider + Sync,
 {
     let mut events_pages = Vec::new();
     let mut continuation_token = None;
+    let mut attempt = 0;
+    const MAX_ATTEMPTS: u32 = 3;

     loop {
-        let events_page = provider
-            .get_events(events_filter.clone(), continuation_token.clone(), events_chunk_size)
-            .await?;
+        let events_page = match provider
+            .get_events(events_filter.clone(), continuation_token.clone(), events_chunk_size)
+            .await
+        {
+            Ok(page) => page,
+            Err(e) if attempt < MAX_ATTEMPTS => {
+                warn!(target: LOG_TARGET, error = %e, attempt = attempt, "Error fetching events, retrying");
+                attempt += 1;
+                continue;
+            }
+            Err(e) => return Err(e.into()),
+        };
+
+        attempt = 0;
         continuation_token = events_page.continuation_token.clone();
         events_pages.push(events_page);

         if continuation_token.is_none() {
+            info!(target: LOG_TARGET, "Fetched all events for filter: {:?}", events_filter);
             break;
         }
     }

     Ok((events_filter.address, events_pages))
 }
```

These changes add retry logic for transient errors and improve logging, making the function more robust and easier to debug.

Likely invalid or redundant comment.
crates/torii/core/src/sql/mod.rs (4)
Line range hint `1-39`: Ohayo sensei! New imports and modules look good!

The addition of new imports like `HashMap` and `FromStr`, along with the new `erc` and `utils` modules, suggests an expansion of functionality related to ERC contracts and utility functions. These changes appear to be well-structured and aligned with the project's direction.

`49-54`: Ohayo sensei! The new Cursors struct is a great addition!

The `Cursors` struct provides a clear and organized way to keep track of cursor information, including a map of cursors, the last pending block transaction, and the head. This structure will likely improve the management of indexing state across multiple contracts.
Line range hint `125-141`: Ohayo sensei! The Sql::head method improvements are spot on!

The addition of the `contract` parameter to the `head` method and its use in the SQL query is a great improvement. This change allows for more flexible querying of head information for specific contracts, rather than being limited to a fixed world address.
Line range hint `1-1180`: Ohayo sensei! Overall, these changes are a significant improvement to the SQL module!

The modifications and additions to the `crates/torii/core/src/sql/mod.rs` file have greatly enhanced the functionality and flexibility of SQL operations in the project. Key improvements include:

- Better handling of ERC contracts
- More flexible querying with the updated `head` method
- Comprehensive cursor management with the new `Cursors` struct and related methods
- Improved performance potential with batch updates in `update_cursors`

While these changes are generally excellent, there are opportunities to further improve error handling throughout the new methods. Implementing the suggested changes will make the code more robust and maintainable.

Great work on these enhancements, sensei! They will undoubtedly contribute to a more efficient and flexible system.
commit-id: 5231a946
ade93c2 to 49af8d1 (Compare)
Actionable comments posted: 5
Outside diff range and nitpick comments (11)
crates/torii/core/src/engine.rs (6)
`123-124`: Ohayo, sensei! Sugoi addition of ERC token support!

The new `tokens` field in the `Engine` struct is a great addition for managing ERC tokens. Using a `HashMap` allows for efficient lookup by token address.

Consider adding a comment explaining the purpose of this field and what `ErcContract` represents, for better code documentation.
`253-309`: Ohayo, sensei! Impressive concurrent event fetching implementation!

The addition of concurrent event fetching for both world events and ERC token events is a significant performance improvement. Using `join_all` to handle multiple futures is an excellent approach.

Consider enhancing the error handling to make it more robust. Instead of using `?` on the `join_all` result, which could cause the entire process to fail if a single task fails, you could handle errors individually for each task:

```diff
-let task_result = join_all(fetch_all_events_tasks).await;
-
-let mut events = vec![];
-
-for result in task_result {
-    let result = result?;
+let task_results = join_all(fetch_all_events_tasks).await;
+
+let mut events = vec![];
+
+for result in task_results {
+    let result = match result {
+        Ok(r) => r,
+        Err(e) => {
+            error!(target: LOG_TARGET, error = %e, "Error fetching events for a contract");
+            continue;
+        }
+    };
     // ... rest of the code
```

This way, if fetching events for one contract fails, it won't prevent processing events from other contracts.
Line range hint `611-655`: Sugoi update to event processing, sensei!

The changes in the `process_transaction_with_receipt` method effectively filter events to only process those from the world contract or tracked ERC tokens. Updating the cursor map for each processed event is crucial for maintaining the correct processing state.

Consider moving the check for relevant events to the beginning of the loop to avoid unnecessary processing:

```diff
 for (event_idx, event) in events.iter().enumerate() {
+    if event.from_address != self.world.address
+        && !self.tokens.contains_key(&event.from_address)
+    {
+        continue;
+    }
+
+    cursor_map.insert(event.from_address, *transaction_hash);
     let event_id =
         format!("{:#064x}:{:#x}:{:#04x}", block_number, *transaction_hash, event_idx);

     Self::process_event(
         self,
         block_number,
         block_timestamp,
         &event_id,
         event,
         *transaction_hash,
     )
     .await?;
-
-    if event.from_address == self.world.address || self.tokens.contains_key(&event.from_address) {
-        cursor_map.insert(event.from_address, *transaction_hash);
-    }
 }
```

This optimization prevents unnecessary formatting and method calls for events that will be skipped anyway.
Line range hint `706-783`: Ohayo, sensei! Impressive event processing enhancements!

The updates to the `process_event` method are significant improvements. Supporting multiple processors per event increases flexibility, and the parallelization approach for specific event types can boost performance.

Consider the following improvements:

1. Extend parallelization to support multiple processors:

```diff
-let task_identifier = if processors.len() == 1 {
-    match processors[0].event_key().as_str() {
+let task_identifier = if processors.iter().any(|p| matches!(p.event_key().as_str(),
+    "StoreSetRecord" | "StoreUpdateRecord" | "StoreUpdateMember" | "StoreDelRecord")) {
     // ... (rest of the match remains the same)
```

2. Add a comment explaining the purpose and mechanism of the task identifier calculation.
3. Consider making the list of parallelizable event types configurable, perhaps as a constant or part of the engine's configuration.

These changes would make the parallelization more flexible and easier to maintain in the future.
`791-816`: Sugoi new utility function, sensei!

The `get_all_events` function is a great addition for handling event pagination. It effectively fetches all events for a given filter in chunks.

Consider enhancing the function with error handling and logging, along the lines of the bounded-retry diff suggested in the previous review round (a `MAX_ATTEMPTS` cap with `warn!` on transient failures, plus an `info!` once all pages are fetched). These changes add retry logic for transient errors and improve logging, making the function more robust and easier to debug.
Line range hint `1-816`: Ohayo, sensei! Overall, a fantastic update to the Engine!

The changes in this file significantly enhance the capabilities of the Engine, particularly in supporting ERC token indexing and improving performance through concurrent event fetching and processing. The new functionality is well-integrated with the existing codebase, maintaining the overall structure while extending its capabilities.

Key improvements:

- Addition of ERC token support
- Implementation of concurrent event fetching
- Enhanced event processing with support for multiple processors
- New utility function for paginated event fetching

While the changes are generally excellent, consider the following architectural improvements:

- Error handling: Implement more robust error handling throughout the file, especially in concurrent operations.
- Logging: Enhance logging to provide better visibility into the engine's operations, particularly during event fetching and processing.
- Configuration: Consider making some hardcoded values (like parallelizable event types) configurable to increase flexibility.
- Performance monitoring: Add instrumentation to measure the performance impact of these changes in production.

These suggestions will further improve the robustness and maintainability of the Engine. Great work on this update, sensei!
crates/torii/core/src/sql/mod.rs (5)

`34-39`: Ohayo sensei! New modules look great, but let's add some documentation.

The addition of `erc` and `utils` modules, along with the test file renaming, seems to be part of a good restructuring effort. However, it would be helpful to add some brief documentation comments for these new modules to explain their purpose.

Consider adding module-level documentation like this:

```rust
/// Module for handling ERC contract-related functionality
pub mod erc;

/// Utility functions used across the SQL module
pub mod utils;
```
`49-55`: Ohayo sensei! The new Cursors struct looks useful, but it needs some explanation.

The `Cursors` struct is a great addition for managing indexing state across multiple contracts. However, it would benefit from some documentation to explain its purpose and the meaning of each field.

Consider adding documentation like this:

```rust
/// Represents the current state of indexing across multiple contracts
#[derive(Debug, Clone)]
pub struct Cursors {
    /// Map of contract addresses to their last processed transaction
    pub cursor_map: HashMap<Felt, Felt>,
    /// The last pending block transaction across all contracts
    pub last_pending_block_tx: Option<Felt>,
    /// The current head (latest processed block number)
    pub head: Option<u64>,
}
```
Line range hint `68-98`: Ohayo sensei! The Sql::new method looks great with the ERC contract handling.

The addition of ERC contract initialization in the `Sql::new` method is a solid improvement. It allows for efficient setup of the database with contract data.

Consider using an `insert_or_ignore_contract` helper method to reduce code duplication:

```rust
fn insert_or_ignore_contract(&mut self, id: Felt, contract_type: &str) {
    self.query_queue.enqueue(
        "INSERT OR IGNORE INTO contracts (id, contract_address, contract_type) VALUES (?, ?, ?)",
        vec![
            Argument::FieldElement(id),
            Argument::FieldElement(id),
            Argument::String(contract_type.to_string()),
        ],
        QueryType::Other,
    );
}

// In the new method:
self.insert_or_ignore_contract(world_address, WORLD_CONTRACT_TYPE);
for contract in erc_contracts.values() {
    self.insert_or_ignore_contract(contract.contract_address, &contract.r#type.to_string());
}
```

This refactoring would make the code more maintainable and reduce the risk of inconsistencies if the insertion logic needs to be updated in the future.
Line range hint `125-141`: Ohayo sensei! The Sql::head method update looks good, but let's improve error handling.

The modification to allow querying the head of a specific contract is a great improvement. The use of the `?` operator for error propagation is also commendable.

Consider using `map_err` to provide more context to potential errors:

```rust
let indexer: (Option<i64>, Option<String>, Option<String>, String) = indexer_query
    .fetch_one(&mut *conn)
    .await
    .map_err(|e| anyhow!("Failed to fetch head for contract {}: {}", contract, e))?;

Ok((
    indexer
        .0
        .map(|h| h.try_into().map_err(|_| anyhow!("Head value doesn't fit in u64")))
        .transpose()?
        .unwrap_or(0),
    indexer.1.map(|f| Felt::from_str(&f)).transpose()?,
    indexer.2.map(|f| Felt::from_str(&f)).transpose()?,
))
```

This change would provide more informative error messages, making debugging easier.
Ohayo sensei! The `felts_sql_string` function is still being used in multiple files, including tests and other modules. Removing it may lead to unexpected issues elsewhere in the codebase.

Analysis chain

Line range hint `1-19`: Ohayo sensei! The import changes look good, but let's consider their implications.

The addition of `HashMap` and `chrono` imports, along with the removal of `felts_sql_string`, suggests some significant changes in functionality. These modifications appear to be in line with the new features being implemented.

To ensure these changes don't cause any issues elsewhere in the codebase, let's run a quick check:

Scripts executed

The following scripts were executed for the analysis:

```bash
#!/bin/bash
# Check for any remaining uses of felts_sql_string
rg "felts_sql_string" --type rust
```

Length of output: 1608
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files ignored due to path filters (1)
- Cargo.lock is excluded by `!**/*.lock`
Files selected for processing (26)
- bin/torii/src/main.rs (7 hunks)
- bin/torii/torii.toml (1 hunks)
- crates/torii/core/Cargo.toml (1 hunks)
- crates/torii/core/src/engine.rs (24 hunks)
- crates/torii/core/src/lib.rs (0 hunks)
- crates/torii/core/src/processors/erc20_legacy_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc20_transfer.rs (1 hunks)
- crates/torii/core/src/processors/erc721_transfer.rs (1 hunks)
- crates/torii/core/src/processors/mod.rs (2 hunks)
- crates/torii/core/src/processors/store_set_record.rs (1 hunks)
- crates/torii/core/src/sql/erc.rs (1 hunks)
- crates/torii/core/src/sql/mod.rs (9 hunks)
- crates/torii/core/src/sql/query_queue.rs (1 hunks)
- crates/torii/core/src/sql/test.rs (5 hunks)
- crates/torii/core/src/sql/utils.rs (1 hunks)
- crates/torii/core/src/types.rs (2 hunks)
- crates/torii/graphql/src/tests/metadata_test.rs (3 hunks)
- crates/torii/graphql/src/tests/mod.rs (3 hunks)
- crates/torii/graphql/src/tests/subscription_test.rs (6 hunks)
- crates/torii/grpc/src/server/tests/entities_test.rs (3 hunks)
- crates/torii/libp2p/src/server/mod.rs (1 hunks)
- crates/torii/libp2p/src/tests.rs (2 hunks)
- crates/torii/migrations/20240913104418_add_erc.sql (1 hunks)
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql (1 hunks)
- scripts/deploy_erc20_katana.sh (1 hunks)
- scripts/send_erc20_transfer.sh (1 hunks)
Files not reviewed due to no reviewable changes (1)
- crates/torii/core/src/lib.rs
Files skipped from review as they are similar to previous changes (22)
- bin/torii/torii.toml
- crates/torii/core/Cargo.toml
- crates/torii/core/src/processors/erc20_legacy_transfer.rs
- crates/torii/core/src/processors/erc20_transfer.rs
- crates/torii/core/src/processors/erc721_transfer.rs
- crates/torii/core/src/processors/mod.rs
- crates/torii/core/src/processors/store_set_record.rs
- crates/torii/core/src/sql/erc.rs
- crates/torii/core/src/sql/query_queue.rs
- crates/torii/core/src/sql/test.rs
- crates/torii/core/src/sql/utils.rs
- crates/torii/core/src/types.rs
- crates/torii/graphql/src/tests/metadata_test.rs
- crates/torii/graphql/src/tests/mod.rs
- crates/torii/graphql/src/tests/subscription_test.rs
- crates/torii/grpc/src/server/tests/entities_test.rs
- crates/torii/libp2p/src/server/mod.rs
- crates/torii/libp2p/src/tests.rs
- crates/torii/migrations/20240913104418_add_erc.sql
- crates/torii/migrations/20240918200125_rename_column_contracts_table.sql
- scripts/deploy_erc20_katana.sh
- scripts/send_erc20_transfer.sh
Additional comments not posted (4)
bin/torii/src/main.rs (3)
`35-37`: Ohayo sensei! Nice additions for ERC support!

The new imports for ERC20 and ERC721 processors, along with the `erc_contracts` field in the `Args` struct, set the stage for the new ERC contract support feature. These changes align well with the PR objectives.

Also applies to: 148-151
`219-225`: Ohayo sensei! Excellent integration of ERC contract support!

The changes in the Sql and Engine initialization are well-implemented:

- The Sql initialization now includes ERC contracts, allowing for proper database handling.
- New processors for ERC20 and ERC721 transfers have been added to the `Processors` struct.
- The Engine initialization includes ERC contracts, ensuring they're considered during processing.

These modifications seamlessly integrate ERC contract support into the existing system.

Also applies to: 236-238, 269-269
Line range hint `1-391`: Ohayo sensei! Great work on implementing ERC contract support!

The changes in this file successfully integrate ERC contract support into the Torii binary executable. Here's a summary of the key modifications:

- Added new imports and struct fields for ERC contract handling.
- Modified the main function to process ERC contracts from config or command-line arguments.
- Updated Sql and Engine initialization to include ERC contract support.
- Implemented a new `parse_erc_contracts` function for parsing ERC contract information.

These changes align well with the PR objectives and enhance the functionality of the system. The implementation is generally well-structured and integrated seamlessly into the existing codebase.

To further improve the code:

- Consider optimizing the start block calculation as suggested earlier.
- Enhance error handling in the `parse_erc_contracts` function for better robustness.

Overall, great job on this feature implementation, sensei!
crates/torii/core/src/engine.rs (1)

Line range hint `134-143`: LGTM! Constructor updated for ERC token support.

The `Engine::new` method has been correctly updated to include the `tokens` parameter, ensuring that the engine can be initialized with the ERC tokens to be indexed.

Also applies to: 153-153
bin/torii/src/main.rs (Outdated)
```rust
// Parses clap cli argument which is expected to be in the format:
// - erc_type:address:start_block
// - address:start_block (erc_type defaults to ERC20)
fn parse_erc_contracts(s: &str) -> anyhow::Result<Vec<ErcContract>> {
    let parts: Vec<&str> = s.split(',').collect();
    let mut contracts = Vec::new();
    for part in parts {
        match part.split(':').collect::<Vec<&str>>().as_slice() {
            [r#type, address, start_block] => {
                let contract_address = Felt::from_str(address).unwrap();
                let start_block = start_block.parse::<u64>()?;
                let r#type = r#type.parse::<ErcType>()?;
                contracts.push(ErcContract { contract_address, start_block, r#type });
            }
            [address, start_block] => {
                let contract_address = Felt::from_str(address)?;
                let start_block = start_block.parse::<u64>()?;
                let r#type = ErcType::default();
                contracts.push(ErcContract { contract_address, start_block, r#type });
            }
            _ => return Err(anyhow::anyhow!("Invalid ERC contract format")),
        }
    }
    Ok(contracts)
}
```
🛠️ Refactor suggestion
Ohayo sensei! Let's enhance the `parse_erc_contracts` function!

The function is a good start, but we can make it more robust:

1. Improve error handling:

```rust
use thiserror::Error;

#[derive(Error, Debug)]
enum ParseError {
    #[error("Invalid ERC contract format")]
    InvalidFormat,
    #[error("Invalid address: {0}")]
    InvalidAddress(#[from] starknet::core::types::FieldElementError),
    #[error("Invalid start block: {0}")]
    InvalidStartBlock(#[from] std::num::ParseIntError),
    #[error("Invalid ERC type: {0}")]
    InvalidErcType(#[from] std::string::ParseError),
}

fn parse_erc_contracts(s: &str) -> Result<Vec<ErcContract>, ParseError> {
    // ... (rest of the function)
}
```

2. Replace `unwrap()` with proper error propagation:

```diff
-let contract_address = Felt::from_str(address).unwrap();
+let contract_address = Felt::from_str(address)?;
```

3. Add input validation:

```rust
if s.is_empty() {
    return Err(ParseError::InvalidFormat);
}
```

4. Use `?` consistently for error propagation.

These changes will make the function more robust and provide better error messages.
bin/torii/src/main.rs (Outdated)
```rust
let mut start_block = args.start_block;

let mut config = if let Some(path) = args.config {
    ToriiConfig::load_from_path(&path)?
} else {
    ToriiConfig::default()
};

if let Some(erc_contracts) = args.erc_contracts {
    config.erc_contracts = erc_contracts;
}

for address in &config.erc_contracts {
    if address.start_block < start_block {
        start_block = address.start_block;
    }
}
```
🛠️ Refactor suggestion
Ohayo sensei! Let's fine-tune the start block calculation.
The logic for handling ERC contracts from config or command-line arguments looks good. However, there's a small optimization we can make in the start block calculation:
```diff
 let mut start_block = args.start_block;

 if let Some(erc_contracts) = args.erc_contracts {
     config.erc_contracts = erc_contracts;
 }

-for address in &config.erc_contracts {
-    if address.start_block < start_block {
-        start_block = address.start_block;
-    }
-}
+start_block = config.erc_contracts.iter()
+    .map(|contract| contract.start_block)
+    .min()
+    .unwrap_or(start_block);
```

This change uses Rust's iterator methods to find the minimum start block more efficiently and concisely.
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let mut start_block = args.start_block;

let mut config = if let Some(path) = args.config {
    ToriiConfig::load_from_path(&path)?
} else {
    ToriiConfig::default()
};

if let Some(erc_contracts) = args.erc_contracts {
    config.erc_contracts = erc_contracts;
}

start_block = config.erc_contracts.iter()
    .map(|contract| contract.start_block)
    .fold(start_block, u64::min);
```
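A note on the iterator form: a bare `min()` would return the smallest contract start block even when the CLI value is lower, so folding with the CLI value as the seed preserves the original loop's behavior of only ever lowering `start_block`. A standalone sketch with hypothetical block numbers:

```rust
fn main() {
    // Hypothetical stand-ins for args.start_block and the start_block
    // fields of the configured ERC contracts.
    let cli_start_block: u64 = 200;
    let contract_start_blocks = [300u64, 150, 250];

    // fold seeds the minimum with the CLI value, so the result can only
    // move downward from it, matching the original for-loop.
    let start_block = contract_start_blocks
        .iter()
        .copied()
        .fold(cli_start_block, u64::min);

    assert_eq!(start_block, 150); // with [300, 250] it would stay 200
    println!("indexing from block {start_block}");
}
```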
```rust
pub(crate) async fn cursors(&self) -> Result<Cursors> {
    let mut conn: PoolConnection<Sqlite> = self.pool.acquire().await?;
    let cursors = sqlx::query_as::<_, (String, String)>(
        "SELECT contract_address, last_pending_block_contract_tx FROM contracts WHERE \
         last_pending_block_contract_tx IS NOT NULL",
    )
    .fetch_all(&mut *conn)
    .await?;

    let (head, last_pending_block_tx) = sqlx::query_as::<_, (Option<i64>, Option<String>)>(
        "SELECT head, last_pending_block_tx FROM contracts WHERE 1=1",
    )
    .fetch_one(&mut *conn)
    .await?;

    let head = head.map(|h| h.try_into().expect("doesn't fit in u64"));
    let last_pending_block_tx =
        last_pending_block_tx.map(|t| Felt::from_str(&t).expect("its a valid felt"));

    Ok(Cursors {
        cursor_map: cursors
            .into_iter()
            .map(|(c, t)| {
                (
                    Felt::from_str(&c).expect("its a valid felt"),
                    Felt::from_str(&t).expect("its a valid felt"),
                )
            })
            .collect(),
        last_pending_block_tx,
        head,
    })
}
```
Ohayo sensei! The new `cursors` method is a great addition, but let's make it more robust.
The `cursors` method is an excellent addition for managing the indexing state across multiple contracts. It provides a comprehensive view of the current indexing status.
However, the use of `expect` for error handling could lead to panics. Let's improve this:
```rust
pub(crate) async fn cursors(&self) -> Result<Cursors> {
    let mut conn: PoolConnection<Sqlite> = self.pool.acquire().await?;
    let cursors = sqlx::query_as::<_, (String, String)>(
        "SELECT contract_address, last_pending_block_contract_tx FROM contracts WHERE last_pending_block_contract_tx IS NOT NULL",
    )
    .fetch_all(&mut *conn)
    .await?;

    let (head, last_pending_block_tx) = sqlx::query_as::<_, (Option<i64>, Option<String>)>(
        "SELECT head, last_pending_block_tx FROM contracts WHERE 1=1",
    )
    .fetch_one(&mut *conn)
    .await?;

    // transpose() turns Option<Result<T, E>> into Result<Option<T>, E>,
    // which is what lets `?` propagate the conversion errors.
    let head = head
        .map(|h| u64::try_from(h).map_err(|_| anyhow!("Head value doesn't fit in u64")))
        .transpose()?;
    let last_pending_block_tx = last_pending_block_tx
        .map(|t| Felt::from_str(&t).map_err(|_| anyhow!("Invalid Felt string")))
        .transpose()?;

    Ok(Cursors {
        cursor_map: cursors
            .into_iter()
            .map(|(c, t)| Ok((Felt::from_str(&c)?, Felt::from_str(&t)?)))
            .collect::<Result<HashMap<Felt, Felt>>>()?,
        last_pending_block_tx,
        head,
    })
}
```
This version propagates errors instead of panicking, making the function more robust and easier to use in error-handling contexts.
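The `transpose` call is the piece that makes `?` usable on these optional conversions. A minimal, standalone illustration (the `parse_optional` helper is hypothetical, not part of the PR):

```rust
use anyhow::Result;

// Option<Result<T, E>> -> Result<Option<T>, E>: None stays None,
// Some(Ok(v)) becomes Ok(Some(v)), and Some(Err(e)) becomes Err(e).
fn parse_optional(raw: Option<&str>) -> Result<Option<u64>> {
    raw.map(|s| s.parse::<u64>().map_err(anyhow::Error::from))
        .transpose()
}

fn main() -> Result<()> {
    assert_eq!(parse_optional(None)?, None);
    assert_eq!(parse_optional(Some("42"))?, Some(42));
    assert!(parse_optional(Some("not-a-number")).is_err());
    Ok(())
}
```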
```rust
pub fn update_cursors(
    &mut self,
    head: u64,
    last_pending_block_tx: Option<Felt>,
    cursor_map: HashMap<Felt, Felt>,
) {
    let head = Argument::Int(head.try_into().expect("doesn't fit in u64"));
    let last_pending_block_tx = if let Some(f) = last_pending_block_tx {
        Argument::String(format!("{:#x}", f))
    } else {
        Argument::Null
    };

    self.query_queue.enqueue(
        "UPDATE contracts SET head = ?, last_pending_block_tx = ? WHERE 1=1",
        vec![head, last_pending_block_tx],
        QueryType::Other,
    );

    for cursor in cursor_map {
        let tx = Argument::FieldElement(cursor.1);
        let contract = Argument::FieldElement(cursor.0);

        self.query_queue.enqueue(
            "UPDATE contracts SET last_pending_block_contract_tx = ? WHERE id = ?",
            vec![tx, contract],
            QueryType::Other,
        );
    }
}
```
Ohayo sensei! The `update_cursors` method is a great addition, but let's make it more robust.
The `update_cursors` function is an excellent addition that allows efficient updating of multiple cursors in a single operation. This is particularly beneficial for performance when dealing with multiple contracts.
However, there are a few areas we can improve:
- Error handling: Replace `expect` with proper error handling.
- Transaction management: Consider wrapping the updates in a transaction for atomicity.
Here's a suggested improvement:
```rust
pub fn update_cursors(
    &mut self,
    head: u64,
    last_pending_block_tx: Option<Felt>,
    cursor_map: HashMap<Felt, Felt>,
) -> Result<()> {
    let head = Argument::Int(head.try_into().map_err(|_| anyhow!("Head value doesn't fit in u64"))?);
    let last_pending_block_tx = if let Some(f) = last_pending_block_tx {
        Argument::String(format!("{:#x}", f))
    } else {
        Argument::Null
    };

    self.query_queue.enqueue("BEGIN TRANSACTION", vec![], QueryType::Other);

    self.query_queue.enqueue(
        "UPDATE contracts SET head = ?, last_pending_block_tx = ? WHERE 1=1",
        vec![head, last_pending_block_tx],
        QueryType::Other,
    );

    for (contract, tx) in cursor_map {
        self.query_queue.enqueue(
            "UPDATE contracts SET last_pending_block_contract_tx = ? WHERE id = ?",
            vec![Argument::FieldElement(tx), Argument::FieldElement(contract)],
            QueryType::Other,
        );
    }

    self.query_queue.enqueue("COMMIT", vec![], QueryType::Other);

    Ok(())
}
```
This version handles errors properly and wraps the updates in a transaction for atomicity.
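One caveat, sensei: enqueueing `BEGIN TRANSACTION`/`COMMIT` is only atomic if the query queue executes its entries in order on a single connection. As an alternative sketch, the same updates could run through an explicit sqlx transaction; this assumes sqlx 0.7-style APIs and a `Pool<Sqlite>`, and the function name and parameters are hypothetical:

```rust
use anyhow::Result;
use sqlx::{Pool, Sqlite};

// Hypothetical alternative: perform the head update and the per-contract
// cursor updates inside one transaction so they commit or roll back together.
async fn update_cursors_tx(
    pool: &Pool<Sqlite>,
    head: i64,
    cursors: &[(String, String)], // (contract id, last pending contract tx) as hex strings
) -> Result<()> {
    let mut tx = pool.begin().await?;

    sqlx::query("UPDATE contracts SET head = ? WHERE 1=1")
        .bind(head)
        .execute(&mut *tx)
        .await?;

    for (contract, pending_tx) in cursors {
        sqlx::query("UPDATE contracts SET last_pending_block_contract_tx = ? WHERE id = ?")
            .bind(pending_tx.as_str())
            .bind(contract.as_str())
            .execute(&mut *tx)
            .await?;
    }

    tx.commit().await?; // all-or-nothing
    Ok(())
}
```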
```rust
// For a given contract address, sets head to the passed value and sets
// last_pending_block_contract_tx and last_pending_block_tx to null
pub fn reset_cursors(&mut self, head: u64) {
    let head = Argument::Int(head.try_into().expect("doesn't fit in u64"));
    self.query_queue.enqueue(
        "UPDATE contracts SET head = ?, last_pending_block_contract_tx = ?, \
         last_pending_block_tx = ? WHERE 1=1",
        vec![head, Argument::Null, Argument::Null],
        QueryType::Other,
    );
}
```
Ohayo sensei! The `reset_cursors` method is a nice addition, but let's make it more robust.
The `reset_cursors` function is a valuable addition that allows resetting of cursor values. This could be particularly useful in scenarios like resyncing or error recovery.
However, we can improve the error handling:
```rust
pub fn reset_cursors(&mut self, head: u64) -> Result<()> {
    let head = Argument::Int(head.try_into().map_err(|_| anyhow!("Head value doesn't fit in u64"))?);
    self.query_queue.enqueue(
        "UPDATE contracts SET head = ?, last_pending_block_contract_tx = NULL, last_pending_block_tx = NULL WHERE 1=1",
        vec![head],
        QueryType::Other,
    );
    Ok(())
}
```
This version handles the potential error when converting `head` to `Argument::Int` and returns a `Result` for better error propagation. Also, it uses `NULL` directly in the SQL query instead of `Argument::Null` for clarity.
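And a hypothetical call site matching the `Result`-returning signature above (the `db` handle and block number are illustrative):

```rust
// Illustrative only: rewind every contract cursor to a known-good block
// before a resync; reset_cursors also clears both pending-tx markers.
let last_verified_block: u64 = 1_200_000;
db.reset_cursors(last_verified_block)?;
```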
commit-id:d618e874
commit-id:10465a00
commit-id:3bfe5955
commit-id:923dbba6
- cleanup plus other refactor for the PR stack
- `start_block` configuration removed since it was no longer necessary now that we use `get_events` for `sync_range`
- use debug log level for transfer events

commit-id:4acad5f3
commit-id:c6195721
Stack:
Summary by CodeRabbit
Release Notes
New Features

- Introduced a new configuration file (`torii.toml`) for managing ERC contracts.

Bug Fixes
Documentation
Tests
Chores