feat(sozo): upload world/models/contracts metadata only if changed #2691
Conversation
Made a suggestion on where we could fetch the current value. It's a small change in the `WorldRemote`, but it may be better integrated in the migration flow, wdyt?
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

```
@@           Coverage Diff            @@
##             main    #2691      +/- ##
==========================================
+ Coverage   56.24%   56.48%   +0.23%
==========================================
  Files         415      421       +6
  Lines       53241    53976     +735
==========================================
+ Hits        29948    30488     +540
- Misses      23293    23488     +195
==========================================
```

☔ View full report in Codecov by Sentry.
Walkthrough
Ohayo! This pull request introduces several enhancements primarily focused on metadata handling and IPFS integration. Key changes include the addition of IPFS-related options to the
Changes
Actionable comments posted: 11
🧹 Outside diff range and nitpick comments (28)
crates/dojo/core/src/model/metadata.cairo (2)
12-12: Ohayo! The metadata_hash addition looks solid, sensei!
The new `metadata_hash` field is a well-thought-out solution that enables efficient change detection for metadata updates. This aligns perfectly with the PR's objective of uploading metadata only when necessary, especially after failed deployments. This approach provides several benefits:
- Efficient change detection through hash comparison
- Prevents unnecessary uploads when metadata hasn't changed
- Enables retry mechanisms after failed deployments

12-12: Consider adding documentation for the metadata_hash field, sensei!
While the implementation is solid, adding documentation comments would help other developers understand the purpose and usage of this new field.

```diff
 pub struct ResourceMetadata {
     #[key]
     pub resource_id: felt252,
     pub metadata_uri: ByteArray,
+    /// Hash of the metadata content, used to detect changes and prevent unnecessary uploads
     pub metadata_hash: felt252
 }
```
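The change-detection idea behind this field can be sketched off-chain like so. This is an illustrative Rust sketch only: the struct shape, the standard-library hasher, and the function names are placeholders, not the crate's actual types or hashing scheme.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Placeholder metadata shape, for illustration only.
#[derive(Hash)]
struct ResourceMeta {
    name: String,
    metadata_uri: String,
}

// Computes a fingerprint of the metadata (stable within one process).
fn metadata_fingerprint(meta: &ResourceMeta) -> u64 {
    let mut hasher = DefaultHasher::new();
    meta.hash(&mut hasher);
    hasher.finish()
}

// Upload is skipped when the freshly computed fingerprint matches the
// hash recorded on-chain from the last successful upload.
fn needs_upload(meta: &ResourceMeta, onchain_hash: u64) -> bool {
    metadata_fingerprint(meta) != onchain_hash
}
```

With this shape, a failed deployment that never recorded a hash naturally yields a mismatch on retry, so the upload is attempted again.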
examples/spawn-and-move/dojo_dev.toml (2)
18-21: Consider enhancing the event documentation, sensei!
The event description could be more detailed to specify:
- What data is emitted when a player moves
- When exactly this event is triggered
- Who can listen to this event

22-25: Please elaborate on the contract's actions, sensei!
The description "set of actions for a player" could be more specific about:
- What actions are available
- Any constraints or prerequisites for these actions
- How these actions relate to the models and events
crates/dojo/world/src/metadata/ipfs.rs (1)
17-22: Consider adding retry mechanism for IPFS client initialization
The client initialization looks good, but IPFS services can be occasionally unstable. Consider adding retry logic for better reliability. Example implementation (note the function must be `async` for the `sleep(...).await` to compile):

```rust
use tokio::time::{sleep, Duration};

pub async fn new() -> Result<Self> {
    let max_retries = 3;
    let mut attempt = 0;
    loop {
        match ipfs_api_backend_hyper::IpfsClient::from_str(IPFS_CLIENT_URL) {
            Ok(client) => {
                return Ok(Self {
                    client: client.with_credentials(IPFS_USERNAME, IPFS_PASSWORD),
                });
            }
            Err(_) if attempt < max_retries => {
                attempt += 1;
                sleep(Duration::from_secs(1)).await;
            }
            Err(e) => return Err(e.into()),
        }
    }
}
```

crates/dojo/world/src/remote/mod.rs (1)
26-27: Consider adding tests for metadata_hash initialization and updates
Ohayo sensei! While the field addition is solid, we should ensure it's properly tested. Consider adding test cases to verify:
- Default initialization of metadata_hash
- Metadata hash updates through the WorldRemote interface

Here's a suggested test case:

```rust
#[test]
fn test_world_metadata_hash() {
    let world_remote = WorldRemote::default();
    assert_eq!(world_remote.metadata_hash, Felt::ZERO, "Default metadata hash should be zero");
}
```

crates/dojo/core-cairo-test/src/tests/world/metadata.cairo (1)
Line range hint 1-120: Consider adding edge cases for metadata_hash validation, sensei!
While the current test coverage is good, consider adding test cases for:
- Zero hash values
- Maximum u32 hash values
- Hash collisions between different metadata

Here's a suggested test case:

```cairo
#[test]
fn test_set_metadata_edge_cases() {
    let world = deploy_world();
    let world = world.dispatcher;

    // Test zero hash
    let metadata_zero = ResourceMetadata {
        resource_id: 0, metadata_uri: format!("ipfs:zero"), metadata_hash: 0
    };
    world.set_metadata(metadata_zero.clone());
    assert(world.metadata(0) == metadata_zero, 'invalid zero hash');

    // Test max hash
    let metadata_max = ResourceMetadata {
        resource_id: 0, metadata_uri: format!("ipfs:max"), metadata_hash: 0xffffffff
    };
    world.set_metadata(metadata_max.clone());
    assert(world.metadata(0) == metadata_max, 'invalid max hash');
}
```

crates/dojo/world/src/uri.rs (2)
21-35: Excellent Hash implementation, sensei!
The Hash implementation is well-structured and correctly handles all Uri variants. This aligns perfectly with the PR's objective of tracking metadata changes. Consider adding test cases for the Hash implementation to ensure consistent behavior across variants. Here's a suggested test:

```rust
#[test]
fn test_uri_hash() {
    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    fn calculate_hash<T: Hash>(t: &T) -> u64 {
        let mut s = DefaultHasher::new();
        t.hash(&mut s);
        s.finish()
    }

    let uri1 = Uri::Ipfs("ipfs://QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG".to_string());
    let uri2 = Uri::Ipfs("ipfs://QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG".to_string());
    let uri3 = Uri::Ipfs("ipfs://different".to_string());

    assert_eq!(calculate_hash(&uri1), calculate_hash(&uri2));
    assert_ne!(calculate_hash(&uri1), calculate_hash(&uri3));
}
```
Line range hint 1-69: Ohayo! Great architectural contribution, sensei!
These implementations provide essential building blocks for the PR's objective of uploading metadata only when changed. The Hash implementation enables efficient change detection, while the Display implementation supports better debugging and logging capabilities. This lays a solid foundation for metadata change tracking. Consider documenting these new capabilities in the module-level documentation to help other developers understand how to leverage them for metadata comparison.
crates/dojo/world/src/diff/resource.rs (2)
116-123: Ohayo! Add documentation for metadata hash semantics
The implementation looks solid, sensei! However, it would be helpful to document why `Felt::ZERO` is used for newly created resources and how this relates to the metadata upload process. Consider adding documentation like this:

```diff
 /// Returns the current metadata hash of the resource.
+///
+/// For newly created resources that haven't been deployed yet, returns `Felt::ZERO`
+/// as no metadata has been uploaded. For existing resources, returns the hash of
+/// their current metadata from the remote state.
 pub fn metadata_hash(&self) -> Felt {
```
116-123: Implementation aligns well with existing patterns
The implementation follows the established pattern of other methods in the enum, correctly handles all variants, and integrates smoothly with the remote metadata functionality. This change supports the PR objective of tracking metadata changes. Consider adding a helper method to check if metadata needs updating by comparing the local metadata hash with this remote hash. This would support the PR's goal of uploading "only if changed".
crates/sozo/ops/src/tests/migration.rs (2)
89-111: Add documentation to the helper function, sensei!
While the function is well-structured, it would benefit from documentation explaining:
- The purpose of the function
- Parameter descriptions
- The meaning of the checker function
- Return value implications

Here's a suggested documentation block:

```diff
+/// Validates resource metadata in a WorldDiff against expected criteria
+///
+/// # Arguments
+/// * `diff` - The WorldDiff containing resources to check
+/// * `resources` - Optional vector of ResourceConfig to validate
+/// * `expected_count` - Expected number of resources
+/// * `checker` - Function to validate metadata hash values
+///
+/// # Panics
+/// Panics if assertions fail for resource count or metadata hash validation
```
113-150: Excellent test implementation, sensei!
The test thoroughly verifies metadata state transitions and covers all resource types. A few observations:
- Good use of helper functions `is_set` and `is_not_set`
- Comprehensive verification of world and resource metadata
- Clear comments explaining the test flow

One suggestion: Consider adding a test case for error conditions when metadata upload fails. Here's a suggested additional test case:

```rust
#[tokio::test(flavor = "multi_thread")]
#[katana_runner::test(accounts = 10)]
async fn upload_metadata_handles_errors(sequencer: &RunnerCtx) {
    // Test implementation for error conditions
    // This would help verify the error handling in the migration process
}
```

crates/dojo/world/src/diff/compare.rs (1)
Line range hint 17-27: Ohayo! Consider adding metadata_hash comparison in ContractLocal implementation
The comparison logic only checks class_hash but doesn't account for metadata changes. Based on the PR objectives to handle metadata updates, we should also compare metadata_hash. Here's a suggested implementation:

```diff
 impl ComparableResource for ContractLocal {
     fn compare(self, remote: ResourceRemote) -> ResourceDiff {
         let remote_contract = remote.as_contract_or_panic();
-        if self.common.class_hash == remote_contract.common.current_class_hash() {
+        if self.common.class_hash == remote_contract.common.current_class_hash()
+            && self.metadata_hash == remote_contract.common.metadata_hash {
             ResourceDiff::Synced(ResourceLocal::Contract(self), remote)
         } else {
             ResourceDiff::Updated(ResourceLocal::Contract(self), remote)
         }
     }
 }
```

crates/dojo/world/src/config/profile_config.rs (2)
21-23: Ohayo! Well-structured metadata fields, sensei!
The new optional fields for models, contracts, and events are well-organized and follow a consistent pattern. This structure aligns perfectly with the PR's objective to improve metadata handling. Consider documenting these fields with rustdoc comments to help future maintainers understand their purpose and usage patterns.
141-154: Ohayo! Consider adding edge cases to the test suite, sensei.
While the current test coverage is good, consider adding these scenarios:
- Empty vectors for models/contracts/events
- Missing optional fields (description, icon_uri)
- Invalid URI formats
- Duplicate tags within the same resource type

This will help ensure robust handling of various input scenarios. Here's a suggested test case to add:

```rust
#[test]
fn test_profile_config_edge_cases() {
    let content = r#"
    [world]
    name = "test"
    seed = "abcd"

    [namespace]
    default = "test"

    [[models]]
    tag = "empty-model"

    [[contracts]]
    tag = "invalid-contract"
    icon_uri = "not-a-valid-uri"
    "#;

    let config = toml::from_str::<ProfileConfig>(content).unwrap();

    // Test empty models
    assert!(config.models.is_some());
    let models = config.models.unwrap();
    assert_eq!(models[0].description, None);
    assert_eq!(models[0].icon_uri, None);

    // Test invalid URI
    assert!(config.contracts.is_some());
    let contracts = config.contracts.unwrap();
    assert!(contracts[0].icon_uri.is_none());
}
```

Also applies to: 212-232
crates/dojo/world/src/diff/mod.rs (2)
82-82: Consider documenting the ZERO initialization significance
The initialization of `metadata_hash` to `Felt::ZERO` in `from_local` implies this is a new world without existing metadata, but this significance isn't documented.

```diff
-            metadata_hash: Felt::ZERO,
+            // Initialize to ZERO for new worlds without existing metadata
+            metadata_hash: Felt::ZERO,
```
Line range hint 32-123: Ohayo sensei! Clean and focused metadata tracking implementation
The approach of using a dedicated `metadata_hash` field in `WorldStatusInfo` is elegant and maintainable. It:
- Cleanly separates metadata state tracking from other world state
- Enables efficient metadata change detection
- Maintains backward compatibility with existing world diff logic

Consider documenting this design decision in the module-level documentation to help future maintainers understand the rationale.
crates/dojo/world/src/remote/events_to_remote.rs (2)
254-264: LGTM with a minor enhancement suggestion
Ohayo sensei! The MetadataUpdate event handling looks solid with proper separation between world and resource metadata updates. Consider adding trace logging for world metadata updates similar to resource updates for consistency.

```diff
 if e.resource == Felt::ZERO {
     self.metadata_hash = e.hash;
+    trace!("World metadata updated with hash: {:#066x}", e.hash);
 } else {
```
532-563: Add test coverage for world metadata updates
Ohayo sensei! While the current test thoroughly covers resource metadata updates, we should also add test coverage for world metadata updates (when resource == Felt::ZERO).

```rust
#[tokio::test]
async fn test_world_metadata_updated_event() {
    let mut world_remote = WorldRemote::default();
    let event = WorldEvent::MetadataUpdate(world::MetadataUpdate {
        resource: Felt::ZERO,
        uri: ByteArray::from_string("ipfs://world").unwrap(),
        hash: Felt::THREE,
    });

    world_remote.match_event(event).unwrap();
    assert_eq!(world_remote.metadata_hash, Felt::THREE);
}
```

crates/dojo/core/src/world/world_contract.cairo (1)
114-115: Consider adding documentation for the hash field usage, sensei!
While the implementation is correct, it would be helpful to add documentation explaining:
- The hash calculation method
- When the hash changes
- How to handle hash conflicts

Add documentation above the MetadataUpdate event:

```diff
+/// Emitted when a resource's metadata is updated
+/// * `resource` - The resource identifier
+/// * `uri` - The metadata URI
+/// * `hash` - Hash of the metadata content for change detection
 #[derive(Drop, starknet::Event)]
 pub struct MetadataUpdate {
```

Also applies to: 360-364
crates/dojo/world/src/metadata/mod.rs (3)
46-65: Refactor to eliminate code duplication between implementations
Ohayo, sensei! The `upload` methods in both `WorldMetadata` and `ResourceMetadata` share similar logic for handling metadata uploads. To adhere to the DRY (Don't Repeat Yourself) principle, consider refactoring the common code into a helper function or a generic implementation. This will enhance maintainability and readability. Here's how you might refactor the code.

Create a generic helper function:

```rust
async fn upload_metadata<T>(meta: &mut T) -> Result<String>
where
    T: Clone + Serialize + HasIconUri + HasCoverUri,
{
    let ipfs_client = IpfsClient::new()?;

    if let Some(Uri::File(icon)) = meta.icon_uri() {
        let icon_data = std::fs::read(icon)?;
        meta.set_icon_uri(Some(Uri::Ipfs(ipfs_client.upload(icon_data).await?)));
    }

    if let Some(Uri::File(cover)) = meta.cover_uri() {
        let cover_data = std::fs::read(cover)?;
        meta.set_cover_uri(Some(Uri::Ipfs(ipfs_client.upload(cover_data).await?)));
    }

    let serialized = json!(meta).to_string();
    ipfs_client.upload(serialized).await
}
```

Define traits for accessing and modifying URIs:

```rust
trait HasIconUri {
    fn icon_uri(&self) -> &Option<Uri>;
    fn set_icon_uri(&mut self, uri: Option<Uri>);
}

trait HasCoverUri {
    fn cover_uri(&self) -> &Option<Uri>;
    fn set_cover_uri(&mut self, uri: Option<Uri>);
}
```

Implement the traits for your metadata structs:

```rust
impl HasIconUri for WorldMetadata {
    fn icon_uri(&self) -> &Option<Uri> { &self.icon_uri }
    fn set_icon_uri(&mut self, uri: Option<Uri>) { self.icon_uri = uri; }
}

impl HasCoverUri for WorldMetadata {
    fn cover_uri(&self) -> &Option<Uri> { &self.cover_uri }
    fn set_cover_uri(&mut self, uri: Option<Uri>) { self.cover_uri = uri; }
}

impl HasIconUri for ResourceMetadata {
    fn icon_uri(&self) -> &Option<Uri> { &self.icon_uri }
    fn set_icon_uri(&mut self, uri: Option<Uri>) { self.icon_uri = uri; }
}

// ResourceMetadata may not have a cover_uri; implement accordingly
```

Update the `upload` methods:

```rust
#[async_trait]
impl MetadataStorage for WorldMetadata {
    async fn upload(&self) -> Result<String> {
        let mut meta = self.clone();
        upload_metadata(&mut meta).await
    }
}

#[async_trait]
impl MetadataStorage for ResourceMetadata {
    async fn upload(&self) -> Result<String> {
        let mut meta = self.clone();
        upload_metadata(&mut meta).await
    }
}
```

Adjust the traits and helper functions based on the specific fields each metadata struct contains.

Also applies to: 67-81
48-48: Assess the impact of cloning large metadata structures
Ohayo, sensei! Cloning `self` to create `meta` can be costly if the metadata structures are large. Consider optimizing to clone only the necessary parts, or borrow mutably if possible. Example modification: instead of cloning the entire object, create a new instance with updated fields, or refactor methods to work with references where appropriate.

Also applies to: 69-69
52-60: Handle potential errors when reading files and uploading to IPFS
Ohayo, sensei! When reading files and uploading data to IPFS, consider adding more robust error handling or user-friendly error messages to assist in debugging if file reading fails or upload issues occur. Example: wrap file reading and uploading in a custom function that provides context-specific error messages.

Also applies to: 73-76
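One way to attach that context, sketched here with plain std errors — the function name and error type are illustrative (the real code would likely use `anyhow::Context` instead of a `String` error):

```rust
use std::fs;
use std::path::Path;

// Reads a metadata asset, attaching the offending path to any I/O error
// so a failed upload points directly at the file involved.
fn read_asset(path: &Path) -> Result<Vec<u8>, String> {
    fs::read(path)
        .map_err(|e| format!("failed to read metadata asset `{}`: {e}", path.display()))
}
```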
crates/dojo/world/src/metadata/metadata_test.rs (2)
21-23: Sensei, consider using crate-relative paths to ensure test data is found
Using relative paths like "./src/metadata/metadata_test_data/cover.png" may cause tests to fail if they're run from a different working directory. Ohayo! To make the tests more robust, you might use `env!("CARGO_MANIFEST_DIR")` to build paths relative to the crate root. You can modify the path construction as follows:

```diff
-PathBuf::from_str("./src/metadata/metadata_test_data/cover.png").unwrap()
+PathBuf::from(format!(
+    "{}/src/metadata/metadata_test_data/cover.png",
+    env!("CARGO_MANIFEST_DIR")
+))
```

Also applies to: 45-47
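A runtime variant of the same idea — resolving paths against the crate root instead of the process's working directory — could look like this. The helper name is hypothetical, and it falls back to `.` when not run under Cargo:

```rust
use std::path::PathBuf;

// Resolves a test-data path against CARGO_MANIFEST_DIR when available,
// so the result does not depend on where the test binary was launched.
fn test_data_path(relative: &str) -> PathBuf {
    let root = std::env::var("CARGO_MANIFEST_DIR").unwrap_or_else(|_| ".".to_string());
    PathBuf::from(root).join(relative)
}
```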
137-137: Consider mocking `IpfsClient` for tests
Ohayo, sensei! The TODO comment suggests that it would be beneficial to fake or mock the `IpfsClient` for tests, to avoid actual IPFS calls and improve test reliability and speed. Would you like assistance in implementing a mock `IpfsClient` for your tests?

crates/sozo/ops/src/migrate/mod.rs (2)
: Avoid unnecessary cloning ofitem
Ohayo, sensei! To improve efficiency, consider passing a reference to
item
instead of cloning it when creatingnew_metadata
.Apply this diff:
- let new_metadata = metadata_config::ResourceMetadata::from(item.clone()); + let new_metadata = metadata_config::ResourceMetadata::from(item);
154-155
: Simplify error handling by avoiding string conversionOhayo, sensei! Instead of converting errors to strings and then wrapping them with
anyhow!
, you can useanyhow::Error::from
to preserve error context.Apply this diff:
- invoker.multicall().await.map_err(|e| anyhow!(e.to_string()))?; + invoker.multicall().await.map_err(anyhow::Error::from)?;And similarly later:
- invoker.invoke_all_sequentially().await.map_err(|e| anyhow!(e.to_string()))?; + invoker.invoke_all_sequentially().await.map_err(anyhow::Error::from)?;Also applies to: 160-161
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (4)
- `Cargo.lock` is excluded by `!**/*.lock`
- `crates/dojo/world/src/metadata/metadata_test_data/icon.png` is excluded by `!**/*.png`
- `spawn-and-move-db.tar.gz` is excluded by `!**/*.gz`
- `types-test-db.tar.gz` is excluded by `!**/*.gz`
📒 Files selected for processing (27)
- `bin/sozo/src/commands/migrate.rs` (1 hunks)
- `crates/dojo/core-cairo-test/src/tests/world/metadata.cairo` (6 hunks)
- `crates/dojo/core/src/model/metadata.cairo` (1 hunks)
- `crates/dojo/core/src/world/world_contract.cairo` (2 hunks)
- `crates/dojo/world/Cargo.toml` (1 hunks)
- `crates/dojo/world/src/config/metadata_config.rs` (2 hunks)
- `crates/dojo/world/src/config/mod.rs` (1 hunks)
- `crates/dojo/world/src/config/profile_config.rs` (4 hunks)
- `crates/dojo/world/src/config/resource_config.rs` (1 hunks)
- `crates/dojo/world/src/contracts/abigen/world.rs` (8 hunks)
- `crates/dojo/world/src/diff/compare.rs` (3 hunks)
- `crates/dojo/world/src/diff/mod.rs` (3 hunks)
- `crates/dojo/world/src/diff/resource.rs` (1 hunks)
- `crates/dojo/world/src/metadata/ipfs.rs` (1 hunks)
- `crates/dojo/world/src/metadata/metadata_test.rs` (1 hunks)
- `crates/dojo/world/src/metadata/metadata_test_data/abi.json` (0 hunks)
- `crates/dojo/world/src/metadata/metadata_test_data/source.cairo` (0 hunks)
- `crates/dojo/world/src/metadata/mod.rs` (1 hunks)
- `crates/dojo/world/src/metadata/world.rs` (0 hunks)
- `crates/dojo/world/src/remote/events_to_remote.rs` (3 hunks)
- `crates/dojo/world/src/remote/mod.rs` (1 hunks)
- `crates/dojo/world/src/remote/resource.rs` (3 hunks)
- `crates/dojo/world/src/uri.rs` (3 hunks)
- `crates/sozo/ops/Cargo.toml` (1 hunks)
- `crates/sozo/ops/src/migrate/mod.rs` (2 hunks)
- `crates/sozo/ops/src/tests/migration.rs` (4 hunks)
- `examples/spawn-and-move/dojo_dev.toml` (2 hunks)
💤 Files with no reviewable changes (3)
- crates/dojo/world/src/metadata/metadata_test_data/abi.json
- crates/dojo/world/src/metadata/metadata_test_data/source.cairo
- crates/dojo/world/src/metadata/world.rs
🔇 Additional comments (39)
crates/dojo/world/src/config/resource_config.rs (1)
1-4: Ohayo! Clean and minimal imports, looking good!
The imports are well-organized and include only what's necessary for the functionality.
crates/dojo/world/src/config/mod.rs (2)
7-7: Ohayo! Clean module organization, sensei!
The addition of the `resource_config` module and its public export follows the established pattern and supports the enhanced metadata handling capabilities.
Also applies to: 14-14

7-7: Let's verify the module integration, sensei!
Since this module is part of the metadata handling improvements, let's verify its integration with related modules.
Also applies to: 14-14

✅ Verification successful
Ohayo sensei! The ResourceConfig integration looks solid!
The verification shows that ResourceConfig is well-integrated across the codebase:
- Used in profile_config.rs for models, contracts, and events configuration
- Properly integrated with metadata handling through metadata_config.rs
- Utilized in migration operations (sozo/ops)
- Correctly exported in the module hierarchy
🏁 Scripts executed
The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify the ResourceConfig usage across the codebase
# Expected: Find usage in profile_config.rs and metadata handling code

# Check for ResourceConfig usage
echo "Checking ResourceConfig usage..."
rg -p "ResourceConfig" --type rust

# Check for relationship with metadata handling
echo "Checking metadata-related code..."
rg -p "metadata" --type rust --glob "crates/dojo/world/src/**"
```

Length of output: 10333
examples/spawn-and-move/dojo_dev.toml (2)
34-34: Verify world address configuration, sensei!
The world address has been updated. Let's ensure this new address is properly configured for metadata handling.

```shell
#!/bin/bash
# Search for references to the world address
rg "0x52ee4d3cba58d1a0462bbfb6813bf5aa1b35078c3b859cded2b727c1d9469ea"

# Check for any world initialization or metadata upload scripts
fd -g "*world*.{sh,rs,cairo}" --exec grep -l "metadata"
```
6-17: Ohayo! Please provide schema details for the models, sensei!
While the models' descriptions are clear, it would be helpful to include the schema information (fields, types, etc.) for each model to ensure proper integration with the metadata handling system. Let's check if schema information exists elsewhere:
crates/dojo/world/Cargo.toml (1)
33-33: LGTM! The futures dependency addition is well-aligned.
Ohayo! The addition of `futures` as a dev-dependency makes perfect sense for testing the new async metadata upload functionality, sensei! It complements the existing async-related dependencies like `async-trait` and `tokio`.
crates/sozo/ops/Cargo.toml (2)
16-16: Ohayo! The metadata feature addition looks good, sensei!
The addition of the `metadata` feature to `dojo-world` aligns perfectly with the PR objectives for improving metadata handling. This change enables the core functionality needed for the new metadata upload process. Let's verify the feature usage:

✅ Verification successful
Ohayo! The metadata feature is properly utilized, sensei!
The verification shows that the `metadata` feature from `dojo-world` is actively used in the sozo crate for:
- Uploading world and resource metadata to IPFS
- Managing metadata hashes and URIs
- Handling metadata storage and updates

The feature addition is necessary and correctly implemented, with clear usage in `crates/sozo/ops/src/migrate/mod.rs` for the metadata upload functionality.
🏁 Scripts executed
The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify that the metadata feature is being used in the codebase

# Search for metadata-related functionality usage
rg -l "use.*dojo_world.*metadata"

# Search for potential metadata feature gates
rg "#\[cfg\(feature.*metadata\)\]"
```

Length of output: 11950

```shell
#!/bin/bash
# Let's verify the metadata feature usage in dojo-world and sozo

# First, check for metadata-related imports and usage in sozo
rg -A 5 "use dojo_world::metadata" crates/sozo/

# Then check for metadata-related functionality in sozo
rg -A 5 "metadata" crates/sozo/ops/src/
```

Length of output: 13032
Line range hint 1-50: Verify impact of removing ipfs-api-backend-hyper, sensei
The removal of `ipfs-api-backend-hyper` from dev-dependencies might affect metadata-related tests. Let's ensure this doesn't impact the metadata upload functionality.

✅ Verification successful
The removal of the ipfs-api-backend-hyper dependency is safe.
The IPFS functionality in sozo-ops is handled through the dojo-world crate's metadata feature, which is properly configured in the dependencies. The IPFS client dependency was only needed in dev-dependencies for testing, and those tests are now properly handled in dojo-world's metadata tests. The only IPFS references in sozo-ops are:
- A comment noting that IPFS testing is handled in dojo-world
- A function doc comment describing metadata upload functionality
🏁 Scripts executed
The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check for any remaining IPFS-related functionality that might be affected

# Search for IPFS-related imports or usage
rg -i "ipfs|ipfs_api"

# Look for any test files that might need the IPFS backend
fd -e rs test_ --exec rg -l "ipfs"
```

Length of output: 11950

```shell
#!/bin/bash
# Let's check if sozo-ops has any direct IPFS functionality or tests that might be affected

# Check for IPFS-related code in sozo-ops
rg -i "ipfs" crates/sozo/ops/ -A 5

# Check if sozo-ops has any dependencies on IPFS-related crates in its dependency tree
cargo tree -p sozo-ops | rg -i "ipfs"

# Check if there are any IPFS-related imports in sozo-ops
ast-grep --pattern 'use $_ :: $$$ipfs$$$'
```

Length of output: 1233
crates/dojo/world/src/metadata/ipfs.rs (2)
12-14: LGTM! Well-structured client wrapper
The struct provides good encapsulation of the underlying IPFS client implementation.

41-49: LGTM! Well-implemented test utility
The test-only get method is well-implemented with proper URI handling and error propagation.
crates/dojo/world/src/remote/mod.rs (2)
26-27: Ohayo! The metadata_hash field addition looks good, sensei!
The new field is well-documented and properly integrated into the struct. The use of the `Felt` type is appropriate for Starknet compatibility.

26-27: Verify metadata_hash usage across the codebase
Ohayo sensei! Let's ensure the new field is properly utilized in related components.

✅ Verification successful
The metadata_hash field is properly integrated across the codebase.
Ohayo sensei! After reviewing the implementation, I can confirm that the metadata_hash field is well-integrated:
- Properly initialized in resource creation (default: Felt::ZERO)
- Has getter/setter methods in ResourceRemote
- Correctly handled in MetadataUpdate events
- Properly tracked in ResourceDiff for state management
- Used in test assertions verifying the update functionality
🏁 Scripts executed
The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check for metadata_hash usage in related files
# Expected: Find references in events_to_remote.rs and other related files

echo "Checking metadata_hash usage in related files..."
rg -l "metadata_hash" ./crates/dojo/world/src/

echo "Checking for potential metadata update handlers..."
ast-grep --pattern 'fn $name($$$) {
  $$$
  metadata_hash
  $$$
}'
```

Length of output: 678

```shell
#!/bin/bash
# Let's check the actual usage context of metadata_hash in these files

echo "Checking metadata_hash usage in remote/resource.rs..."
rg "metadata_hash" ./crates/dojo/world/src/remote/resource.rs -B 2 -A 2

echo -e "\nChecking metadata_hash usage in metadata/mod.rs..."
rg "metadata_hash" ./crates/dojo/world/src/metadata/mod.rs -B 2 -A 2

echo -e "\nChecking metadata_hash usage in remote/events_to_remote.rs..."
rg "metadata_hash" ./crates/dojo/world/src/remote/events_to_remote.rs -B 2 -A 2

echo -e "\nChecking metadata_hash usage in diff/resource.rs..."
rg "metadata_hash" ./crates/dojo/world/src/diff/resource.rs -B 2 -A 2

echo -e "\nChecking metadata_hash usage in diff/compare.rs..."
rg "metadata_hash" ./crates/dojo/world/src/diff/compare.rs -B 2 -A 2
```

Length of output: 4078
bin/sozo/src/commands/migrate.rs (1)
78-79: Ohayo! The metadata upload implementation looks solid, sensei! 🎋
The addition of metadata upload functionality after migration but before manifest writing is well-placed and properly handles errors. This change effectively addresses the issue where metadata wouldn't be uploaded after failed deployments. However, let's verify the error recovery scenarios to ensure robustness.

✅ Verification successful
Let me gather more information about the metadata upload implementation and error handling.

Ohayo! The metadata upload implementation is well-protected, sensei! 🎋
The implementation shows robust error handling with proper context messages. The codebase includes:
- Comprehensive test coverage for metadata upload in `dojo/world/src/metadata/metadata_test.rs`
- Error recovery through the `retry` macro in `saya/core/src/macros.rs` with up to 20 retries
- Proper error propagation with descriptive context messages

The metadata upload during migration is correctly placed and follows the established error handling patterns in the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Search for error handling patterns in metadata upload scenarios
# Expected: Find test cases covering metadata upload failures and recovery

# Look for test cases related to metadata upload failures
rg -A 10 "upload_metadata.*fail|fail.*upload_metadata" --type rust

# Look for error recovery scenarios in tests
ast-grep --pattern 'fn test_$_metadata_upload($_) {
  $$$
}'
```

Length of output: 1847

```shell
#!/bin/bash
# Search for metadata upload implementation and error handling
ast-grep --pattern 'impl $_ {
  $$$
  upload_metadata($$$) {
    $$$
  }
  $$$
}'

# Look for retry mechanisms or error recovery patterns
rg -A 5 "retry|recover|fallback" --type rust

# Find test cases that might cover metadata upload
rg -A 10 "test.*metadata|metadata.*test" --type rust
```

Length of output: 23182
crates/dojo/core-cairo-test/src/tests/world/metadata.cairo (3)
12-12: LGTM! The metadata initialization looks good, sensei!
The new `metadata_hash` field is properly initialized with a test value of 42.
74-74: Consistent metadata initialization across negative test cases!
The `metadata_hash` field is consistently initialized across all test cases, including permission checks and security scenarios. This ensures thorough validation of the new field under various error conditions.
Also applies to: 89-89, 116-116
Line range hint 33-49: Ohayo! The metadata event validation is thorough!
The test properly verifies all metadata fields, including the new `metadata_hash`. The event assertion at line 49 ensures the hash is correctly propagated through the system.
Let's verify the event structure matches the implementation:
✅ Verification successful
Ohayo! The metadata event validation is perfectly aligned with the contract implementation, sensei!
The test correctly validates all fields of the `MetadataUpdate` event:
- `resource` field matches the contract's `felt252` type
- `uri` field matches the contract's `ByteArray` type
- `hash` field matches the contract's `felt252` type
The event structure in the world contract exactly matches what's being tested, including the order and types of all fields.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify MetadataUpdate event structure in the world contract
ast-grep --pattern 'enum Event {
$$$
MetadataUpdate: MetadataUpdate,
$$$
}'
Length of output: 83
Script:
#!/bin/bash
# Let's try a different approach to find the event structure
rg -A 5 "struct MetadataUpdate"
Length of output: 1451
Script:
#!/bin/bash
# Also search for the event enum definition with a more flexible pattern
rg -A 10 "enum Event \{"
Length of output: 28301
Script:
#!/bin/bash
# Search for files that might contain the event definition
fd "world\.cairo$"
Length of output: 212
Script:
#!/bin/bash
# Let's check the world contract for MetadataUpdate event
rg -A 5 "MetadataUpdate" crates/dojo/core/src/world/world_contract.cairo
Length of output: 705
Script:
#!/bin/bash
# Also check the event struct definition
rg -B 5 -A 5 "struct MetadataUpdate" crates/dojo/core/src/world/world_contract.cairo
Length of output: 334
crates/dojo/world/src/uri.rs (1)
1-1: Ohayo! Clean import addition, sensei!
The new import for Hash and Hasher traits is well-placed and necessary for the Hash implementation.
crates/dojo/world/src/diff/resource.rs (1)
116-123: Verify metadata upload integration
Let's verify that this implementation properly integrates with the metadata upload logic in other files.
✅ Verification successful
Implementation verified and properly integrated, sensei! ✨
The `metadata_hash` implementation in `resource.rs` is correctly integrated with the metadata upload system. The code shows:
- In `migrate/mod.rs`, the hash is used to compare current and new metadata states before uploads
- In `remote/resource.rs`, there's proper hash management with getters/setters
- In `metadata/mod.rs`, there's hash computation support
The implementation aligns perfectly with the PR's goal of uploading metadata only when changed, as evidenced by the comparison logic in the migration module.
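The compare-before-upload flow described above can be illustrated with a stdlib-only sketch (a hypothetical `Metadata` type and `u64` hashes stand in for the real config structs and `Felt` values; the project's actual implementation lives in `metadata/mod.rs` and `migrate/mod.rs`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical stand-in for the real metadata config structs.
#[derive(Hash)]
struct Metadata {
    name: String,
    description: String,
}

// Compute a stable hash for the metadata (the real code derives Hash on the
// config structs and folds referenced file contents in as well).
fn compute_hash(meta: &Metadata) -> u64 {
    let mut hasher = DefaultHasher::new();
    meta.hash(&mut hasher);
    hasher.finish()
}

// Return the new hash only when it differs from the currently stored one,
// signalling that an upload (and an on-chain update) is needed.
fn upload_if_changed(meta: &Metadata, current_hash: u64) -> Option<u64> {
    let new_hash = compute_hash(meta);
    (new_hash != current_hash).then_some(new_hash)
}

fn main() {
    let meta = Metadata { name: "ns-actions".into(), description: "example".into() };
    let stored = compute_hash(&meta);
    // Unchanged metadata: no upload call is queued.
    assert!(upload_if_changed(&meta, stored).is_none());
    // Changed (or never-uploaded) metadata: a new hash is returned.
    assert!(upload_if_changed(&meta, stored.wrapping_add(1)).is_some());
    println!("ok");
}
```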
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for metadata upload implementation that would use this hash
echo "Searching for metadata upload implementation..."
rg -l "metadata_hash" --type rust
# Look for any tests involving metadata hash comparison
echo "Searching for related tests..."
rg -l "test.*metadata.*hash" --type rust
Length of output: 649
Script:
#!/bin/bash
# Let's check how metadata_hash is used in the implementation files
echo "Checking metadata_hash usage in remote/resource.rs..."
rg "metadata_hash" -B 2 -A 2 crates/dojo/world/src/remote/resource.rs
echo -e "\nChecking metadata_hash usage in metadata/mod.rs..."
rg "metadata_hash" -B 2 -A 2 crates/dojo/world/src/metadata/mod.rs
echo -e "\nChecking metadata_hash usage in migrate/mod.rs..."
rg "metadata_hash" -B 2 -A 2 crates/sozo/ops/src/migrate/mod.rs
# Let's also check for metadata comparison logic
echo -e "\nChecking for metadata comparison implementations..."
ast-grep --pattern 'fn $FUNC_NAME($$$) {
$$$
metadata_hash
$$$
}'
Length of output: 3101
crates/sozo/ops/src/tests/migration.rs (3)
8-8: Ohayo! The new imports look good, sensei!
The addition of `ResourceConfig` from `dojo_world::config` aligns well with the PR's metadata handling objectives.
Also applies to: 16-16
75-75: Excellent backward compatibility maintenance, sensei!
The existing tests have been properly updated to pass `false` for `with_metadata`, maintaining their original behavior while accommodating the new feature.
Also applies to: 84-87
42-42: Clean implementation of the metadata upload feature, sensei!
The addition of the `with_metadata` parameter and conditional upload logic aligns perfectly with the PR objectives. The error handling is robust with descriptive messages.
Let's verify the error handling behavior:
Also applies to: 63-69
✅ Verification successful
Let me analyze the error handling patterns in the migration-related code to verify the implementation.
Clean error handling implementation confirmed, sensei!
The `migrate_spawn_and_move` function properly propagates errors through the `MigrationResult` type, which is consistently used across the codebase. The error handling is robust with:
- Proper error propagation using the `?` operator
- Structured result type containing both success status and manifest
- Clear separation between metadata and migration concerns
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other error handling patterns in migration-related code
rg -A 2 "expect|unwrap|context" crates/sozo/ops/src/
Length of output: 25177
Script:
#!/bin/bash
# Search for error handling in migration.rs and related files
rg -A 5 "migrate_spawn_and_move|MigrationResult" crates/sozo/ops/src/
Length of output: 3783
crates/dojo/world/src/diff/compare.rs (2)
Line range hint 29-39: Similar metadata comparison needed for Model and Event implementations, sensei
The ModelLocal and EventLocal implementations also need to be updated to handle metadata changes consistently.
Let's verify if there are any other implementations we might have missed:
Also applies to: 41-51
Line range hint 53-63: Review namespace metadata handling strategy
The NamespaceLocal comparison uses `unreachable!()` for updates. Should we consider allowing metadata updates for namespaces as well?
Let's check if namespaces have metadata in other parts of the codebase:
✅ Verification successful
Let's search for more context about namespace structures and metadata handling.
Ohayo sensei! After analyzing the codebase, I can now provide a definitive response about the namespace metadata concern.
The `unreachable!()` for namespace updates is correct and should be maintained
Looking at the struct definitions, we can see that `NamespaceLocal` only contains a `name` field, while `NamespaceRemote` has `name`, `owners`, and `writers`. The comparison logic is intentionally strict because:
- Namespaces are fundamental identity containers that shouldn't change names after creation
- Permission changes (owners/writers) are handled separately through dedicated flows
- There is no additional metadata to sync between local and remote representations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for namespace metadata references
rg -A 3 'namespace.*metadata'
Length of output: 1296
Script:
#!/bin/bash
# Search for namespace struct definitions and implementations
ast-grep --pattern 'struct Namespace {
$$$
}'
# Also check for NamespaceLocal struct
ast-grep --pattern 'struct NamespaceLocal {
$$$
}'
# Look for any metadata fields in these structs
rg -B 2 -A 5 'struct (Namespace|NamespaceLocal)'
Length of output: 4690
crates/dojo/world/src/config/profile_config.rs (1)
12-12: Ohayo! Clean import addition.
The `ResourceConfig` import is well-placed and properly organized with other local imports.
crates/dojo/world/src/remote/resource.rs (2)
179-197: The new metadata hash methods look well-implemented, sensei!
The implementation follows the established patterns in the codebase and properly handles all resource types. The special handling of `Namespace` returning `Felt::ZERO` is consistent with similar methods in the file.
Let's verify the test coverage for these new methods:
#!/bin/bash
# Description: Check for test coverage of the new metadata hash methods
# Look for test files that exercise these methods
# Search for test files that might contain metadata hash related tests
echo "=== Checking for metadata hash tests ==="
fd -t f -e rs test -X rg -l "metadata_hash"
# Look for specific test cases
echo "=== Checking for specific test cases ==="
fd -t f -e rs test -X rg "set_metadata_hash|metadata_hash"
33-34: Ohayo! The metadata_hash field implementation looks solid, sensei!
The addition of `metadata_hash` with proper initialization to `Felt::ZERO` aligns well with the PR objectives for tracking metadata changes. Let's verify that all constructors properly initialize this field.
Also applies to: 85-85
crates/dojo/world/src/diff/mod.rs (2)
32-33: Ohayo! The metadata_hash field addition aligns with PR objectives
The addition of `metadata_hash` to `WorldStatusInfo` enables tracking metadata changes between local and remote worlds, which directly supports the PR's goal of selective metadata uploads.
123-123: Verify metadata hash propagation in remote world
The code correctly propagates the remote world's metadata hash, but let's verify there are no edge cases where this hash might be uninitialized or invalid.
✅ Verification successful
Let me gather more information about the metadata hash initialization and usage.
Metadata hash handling is properly implemented, sensei! ✨
The code analysis shows that metadata_hash is handled safely throughout the codebase:
- It's properly propagated through WorldRemote for Contract, Model, and Event resources
- Namespace resources explicitly return Felt::ZERO as their metadata hash
- Tests verify the metadata hash updates and propagation
- The hash is properly serialized/deserialized in the contract ABIs
The current implementation in the diff module correctly uses the remote world's metadata hash, which is consistent with the rest of the codebase's handling.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for potential uninitialized metadata hash cases
# Look for places where WorldRemote is constructed without setting metadata_hash
# Search for WorldRemote struct construction
ast-grep --pattern 'WorldRemote {
$$$
metadata_hash: $_,
$$$
}'
# Search for potential metadata_hash assignments
rg -A 3 'metadata_hash\s*='
Length of output: 1523
Script:
#!/bin/bash
# Check for WorldRemote struct definition and initialization
ast-grep --pattern 'struct WorldRemote {
$$$
}'
# Look for metadata_hash field initialization in constructors/new functions
ast-grep --pattern 'fn new($$$) -> WorldRemote {
$$$
}'
# Search for any direct metadata_hash field access
rg -A 2 '\.metadata_hash'
Length of output: 4997
crates/dojo/world/src/remote/events_to_remote.rs (1)
58-58: LGTM! Event selector properly added
Ohayo! The MetadataUpdate event selector is correctly added to enable filtering of metadata update events.
crates/dojo/core/src/world/world_contract.cairo (2)
114-115: Ohayo! The MetadataUpdate event structure enhancement looks good, sensei!
The addition of the `hash` field enables efficient tracking of metadata changes, which directly addresses the PR objectives for uploading metadata only when changed.
360-364: The MetadataUpdate event emission is properly implemented!
The event emission correctly includes all fields, including the new `hash` field, maintaining consistency with the event structure.
crates/dojo/world/src/contracts/abigen/world.rs (4)
Line range hint 394-428: Ohayo! The MetadataUpdate struct changes look good, sensei!
The addition of the `hash` field to track metadata changes is well implemented with proper serialization and deserialization support. This change enables tracking when metadata actually changes, which aligns with the PR objective.
Line range hint 632-665: The ResourceMetadata struct changes are solid, sensei!
The addition of the `metadata_hash` field provides a way to track the state of metadata, complementing the MetadataUpdate changes. The serialization and deserialization implementations are correct.
1804-1815: Event handling updates are properly implemented, sensei!
The MetadataUpdate event deserialization has been correctly updated to handle the new hash field in both TryFrom trait implementations. The error handling and deserialization logic are consistent.
Also applies to: 2684-2695
394-396: Verify consistent usage of hash fields across the codebase
Let's ensure the new hash fields are used consistently in all metadata-related operations.
Also applies to: 632-633
✅ Verification successful
Based on the search results, I can see that the hash field is consistently used across the codebase:
- The `ResourceMetadata` struct in world.rs properly includes the `metadata_hash` field
- The metadata hash is properly handled in metadata updates through `MetadataUpdate` events
- The hash field is consistently used in remote resources tracking via the `metadata_hash` field
- There are proper test cases verifying metadata hash updates
- The hash field is properly integrated with the world state diff tracking
The hash field implementation is consistent across the codebase
The changes are properly integrated with all metadata-related operations including:
- Resource metadata storage
- Metadata updates and events
- Remote state tracking
- World state diffing
- Test coverage
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent usage of metadata hash fields
# Expected: All metadata-related code should handle the hash fields
# Search for metadata-related structs and their usage
rg -A 5 "struct.*Metadata|fn.*metadata" --type rust
# Search for hash field usage in metadata context
rg -A 3 "metadata.*hash|hash.*metadata" --type rust
Length of output: 42698
crates/dojo/world/src/metadata/mod.rs (1)
34-35: Ensure `Felt` conversion is correct and handles large hashes
Ohayo, sensei! When converting `new_hash` (a `u64`) into a `Felt` using `Felt::from_raw([0, 0, 0, new_hash])`, ensure that the hash value fits within the field and that the ordering of the array elements correctly represents the intended value.
Let's verify that the conversion aligns with the expected behavior. Consider running a test to confirm that the `Felt` value accurately represents the `u64` hash.
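One way to sanity-check the limb ordering without depending on the actual `Felt` type (whose `from_raw` additionally expects the crate's internal Montgomery representation, so this sketch does not prove the real conversion is correct) is to model the four `u64` limbs directly and confirm where a small value must sit, assuming most-significant-limb-first ordering; `limbs_to_u128` is a hypothetical helper:

```rust
// Model a 256-bit value as four u64 limbs, most-significant first, and
// recover the low 128 bits. With this ordering, a u64 placed at index 3
// (as in [0, 0, 0, new_hash]) is the value itself.
fn limbs_to_u128(limbs: [u64; 4]) -> u128 {
    ((limbs[2] as u128) << 64) | (limbs[3] as u128)
}

fn main() {
    let new_hash: u64 = 0xDEAD_BEEF;
    assert_eq!(limbs_to_u128([0, 0, 0, new_hash]), new_hash as u128);
    // A swapped ordering would shift the hash into the high bits instead.
    assert_eq!(limbs_to_u128([0, 0, new_hash, 0]), (new_hash as u128) << 64);
    println!("limb ordering check passed");
}
```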
crates/dojo/world/src/metadata/metadata_test.rs (1)
1-192: Ohayo, sensei! The tests are well-structured and comprehensive
The implementation of `test_world_metadata` and `test_resource_metadata` effectively verifies the upload and retrieval of metadata. The use of helper functions like `build_world_metadata` and `assert_ipfs_content` enhances readability and maintainability.
crates/sozo/ops/src/migrate/mod.rs (2)
109-164: Function `upload_metadata` is well-implemented
Ohayo, sensei! The new `upload_metadata` method properly handles metadata uploads for the world, contracts, models, and events. The integration with the invoker pattern and conditional multicall execution is efficient.
121-121: Verify the correctness of using `Felt::ZERO` for the world resource ID
Ohayo, sensei! Ensure that `Felt::ZERO` is indeed the correct identifier for the world resource when retrieving and updating metadata. Verifying this helps prevent potential mismatches or errors.
Update the Metadata page according to dojoengine/dojo#2691.
091964a to e8a1bac (Compare)
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (12)
crates/dojo/world/src/metadata/mod.rs (1)
27-45: Enhance error handling with specific error types
Ohayo! While using `anyhow::Result` is convenient, it might be better to define specific error types for different failure scenarios:
- IPFS upload failures
- File reading errors
- Hash computation errors
Consider creating a custom error enum:
#[derive(Debug, thiserror::Error)]
pub enum MetadataError {
    #[error("Failed to upload to IPFS: {0}")]
    IpfsUpload(#[from] ipfs::Error),
    #[error("Failed to read file: {0}")]
    FileRead(#[from] std::io::Error),
    #[error("Hash computation failed: {0}")]
    HashComputation(String),
}
pub type Result<T> = std::result::Result<T, MetadataError>;
crates/dojo/world/src/config/metadata_config.rs (1)
39-96: Ohayo, sensei! Consider implementing a common trait for file hashing
Since both `WorldMetadata` and `ResourceMetadata` share similar file hashing logic, consider extracting this functionality into a common trait to promote code reuse and ensure consistent error handling across all implementations.
Example trait design:
trait FileHasher {
    fn hash_file<H: Hasher>(&self, path: &std::path::Path, state: &mut H) {
        if let Ok(data) = std::fs::read(path) {
            data.hash(state);
        } else {
            eprintln!("Warning: Unable to read file for hashing: {}", path.display());
        }
    }
}
crates/sozo/ops/src/tests/migration.rs (3)
42-42: Consider enhancing error handling in metadata upload.
Ohayo sensei! While the implementation looks good, the error messages could be more descriptive to help with debugging.
Consider this enhancement:
- let res = migration.migrate(&mut ui).await.expect("Migration spawn-and-move failed.");
+ let res = migration.migrate(&mut ui).await
+     .context("Failed to migrate spawn-and-move world")?;
  if with_metadata {
-     migration.upload_metadata(&mut ui).await.expect("Upload metadata failed");
+     migration.upload_metadata(&mut ui).await
+         .context("Failed to upload world metadata after successful migration")?;
  }
Also applies to: 63-69
89-111: Add documentation for the check_resources helper function.
Ohayo! The helper function is well-implemented, but it would benefit from documentation explaining:
- Purpose and usage
- Parameter descriptions
- Expected behavior of the checker function
Consider adding this documentation:
+/// Validates metadata for a list of resources against expected criteria
+///
+/// # Arguments
+/// * `diff` - The WorldDiff containing the resources to check
+/// * `resources` - Optional list of ResourceConfig to validate
+/// * `expected_count` - Expected number of resources
+/// * `checker` - Function to validate the metadata hash of each resource
 fn check_resources(
     diff: &WorldDiff,
     resources: Option<Vec<ResourceConfig>>,
     expected_count: usize,
     checker: &dyn Fn(Felt) -> bool,
 ) {
113-150: Test implementation is thorough but could be more explicit.
Ohayo sensei! The test covers the key scenarios well, but consider these enhancements:
- Add explicit assertions for specific metadata values, not just presence/absence
- Test error cases (e.g., failed uploads)
Also, the test aligns perfectly with the PR objectives mentioned in #2096, ensuring metadata can be uploaded after initial deployment.
crates/dojo/world/src/metadata/metadata_test.rs (2)
14-64: Ohayo! Consider improving test data handling, sensei.
The helper functions could be more robust with these improvements:
- Extract hardcoded file paths into constants
- Add validation for social media URLs
- Consider using a test fixture or configuration file for test data
Here's a suggested improvement:
+ const TEST_DATA_DIR: &str = "./src/metadata/metadata_test_data";
+ const TEST_COVER_PATH: &str = "./src/metadata/metadata_test_data/cover.png";
+ const TEST_ICON_PATH: &str = "./src/metadata/metadata_test_data/icon.png";
  fn build_world_metadata() -> WorldMetadata {
-     cover_uri: Some(Uri::File(
-         fs::canonicalize(
-             PathBuf::from_str("./src/metadata/metadata_test_data/cover.png").unwrap(),
-         )
-         .unwrap(),
-     )),
+     cover_uri: Some(Uri::File(
+         fs::canonicalize(PathBuf::from_str(TEST_COVER_PATH).unwrap())
+             .expect("Test cover image not found")
+     )),
66-138: Ohayo! Great test coverage, but let's make it even better, sensei!
The test thoroughly validates the happy path, but consider adding:
- Negative test cases (invalid URIs, network failures)
- Edge cases (empty fields, very large metadata)
Regarding the TODO on line 137:
Would you like me to help implement a mock IpfsClient for more reliable testing? This would:
- Remove external IPFS dependency
- Allow testing network failure scenarios
- Speed up test execution
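A mock client along those lines could look like the following (an in-memory sketch with hypothetical trait and method names, not the project's actual `IpfsClient` API in `crates/dojo/world/src/metadata/ipfs.rs`):

```rust
use std::collections::HashMap;

// A minimal in-memory stand-in for an IPFS client, useful for tests that
// should not hit a real node. Trait and names are hypothetical.
trait IpfsStore {
    fn upload(&mut self, data: Vec<u8>) -> String;
    fn cat(&self, uri: &str) -> Option<Vec<u8>>;
}

struct MockIpfs {
    store: HashMap<String, Vec<u8>>,
    counter: u64,
}

impl MockIpfs {
    fn new() -> Self {
        Self { store: HashMap::new(), counter: 0 }
    }
}

impl IpfsStore for MockIpfs {
    fn upload(&mut self, data: Vec<u8>) -> String {
        // Fake CID: a simple counter instead of a real content hash.
        self.counter += 1;
        let uri = format!("ipfs://mock-{}", self.counter);
        self.store.insert(uri.clone(), data);
        uri
    }

    fn cat(&self, uri: &str) -> Option<Vec<u8>> {
        self.store.get(uri).cloned()
    }
}

fn main() {
    let mut client = MockIpfs::new();
    let uri = client.upload(br#"{"name":"example"}"#.to_vec());
    // Round-trip: what was uploaded can be read back under its URI.
    assert_eq!(client.cat(&uri), Some(br#"{"name":"example"}"#.to_vec()));
    assert!(client.cat("ipfs://missing").is_none());
    println!("mock round-trip ok: {uri}");
}
```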
crates/dojo/world/src/diff/compare.rs (1)
Line range hint 19-29: Ohayo! Critical issue: Metadata changes are not detected in comparison logic
The comparison implementations for ContractLocal, ModelLocal, and EventLocal only check class_hash but ignore metadata_hash changes. This doesn't align with the PR's objective of properly handling metadata updates.
Here's how to fix the ContractLocal implementation (apply similar changes to ModelLocal and EventLocal):
impl ComparableResource for ContractLocal {
    fn compare(self, remote: ResourceRemote) -> ResourceDiff {
        let remote_contract = remote.as_contract_or_panic();
-       if self.common.class_hash == remote_contract.common.current_class_hash() {
+       if self.common.class_hash == remote_contract.common.current_class_hash()
+           && self.common.metadata_hash == remote_contract.common.metadata_hash {
            ResourceDiff::Synced(ResourceLocal::Contract(self), remote)
        } else {
            ResourceDiff::Updated(ResourceLocal::Contract(self), remote)
        }
    }
}
crates/dojo/world/src/config/profile_config.rs (2)
21-23: Consider adding field-level documentation, sensei!
While the struct is well-documented, adding field-level documentation for the new metadata fields would help users understand their purpose and usage better.
Example addition:
#[derive(Debug, Clone, Default, Deserialize)]
pub struct ProfileConfig {
    pub world: WorldConfig,
+   /// Metadata configurations for models in the world
    pub models: Option<Vec<ResourceConfig>>,
+   /// Metadata configurations for contracts in the world
    pub contracts: Option<Vec<ResourceConfig>>,
+   /// Metadata configurations for events in the world
    pub events: Option<Vec<ResourceConfig>>,
212-231: Consider adding edge case tests, sensei!
While the happy path is well tested, consider adding tests for:
- Empty vectors for models/contracts/events
- Invalid URI formats
- Duplicate tags within the same resource type
Example test case:
#[test]
fn test_profile_config_empty_resources() {
    let content = r#"
    [world]
    name = "test"
    seed = "abcd"
    [namespace]
    default = "test"
    [[models]]
    [[contracts]]
    [[events]]
    "#;
    let config = toml::from_str::<ProfileConfig>(content).unwrap();
    assert!(config.models.unwrap().is_empty());
    assert!(config.contracts.unwrap().is_empty());
    assert!(config.events.unwrap().is_empty());
}
crates/sozo/ops/src/migrate/mod.rs (2)
109-164: Ohayo, sensei! Ensure calls exist before invoking
To prevent unnecessary operations when there are no metadata changes, consider checking if there are any invocations before performing multicall or sequential calls. This enhances efficiency and avoids redundant actions.
Apply this diff to add the check:
  if !invoker.calls.is_empty() {
+     if self.do_multicall() {
          ui.update_text_boxed(format!("Uploading {} metadata...", invoker.calls.len()));
          invoker.multicall().await.map_err(|e| anyhow!(e.to_string()))?;
      } else {
          ui.update_text_boxed(format!(
              "Uploading {} metadata (sequentially)...",
              invoker.calls.len()
          ));
          invoker.invoke_all_sequentially().await.map_err(|e| anyhow!(e.to_string()))?;
      }
+ } else {
+     ui.update_text("No metadata changes detected. Skipping upload.");
+ }
166-192: Ohayo, sensei! Parallelize metadata uploads for better performance
The metadata uploads in `upload_metadata_from_resource_config` are currently processed sequentially. To optimize performance, especially with numerous resources, consider parallelizing the uploads using asynchronous tasks.
You can refactor the function to process uploads concurrently:
async fn upload_metadata_from_resource_config(
    &self,
    config: &[ResourceConfig],
) -> anyhow::Result<Vec<Call>> {
    let futures = config.iter().map(|item| async {
        let selector = dojo_types::naming::compute_selector_from_tag_or_name(&item.tag);
        let current_hash =
            self.diff.resources.get(&selector).map_or(Felt::ZERO, |r| r.metadata_hash());
        let new_metadata = metadata_config::ResourceMetadata::from(item.clone());
        let res = new_metadata.upload_if_changed(current_hash).await?;
        if let Some((new_uri, new_hash)) = res {
            Ok(Some(self.world.set_metadata_getcall(&ResourceMetadata {
                resource_id: selector,
                metadata_uri: ByteArray::from_string(&new_uri)?,
                metadata_hash: new_hash,
            })))
        } else {
            Ok(None)
        }
    });
    let results = futures::future::join_all(futures).await;
    let mut calls = Vec::new();
    for result in results {
        if let Ok(Some(call)) = result {
            calls.push(call);
        }
    }
    Ok(calls)
}
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (4)
Cargo.lock is excluded by !**/*.lock
crates/dojo/world/src/metadata/metadata_test_data/icon.png is excluded by !**/*.png
spawn-and-move-db.tar.gz is excluded by !**/*.gz
types-test-db.tar.gz is excluded by !**/*.gz
📒 Files selected for processing (27)
bin/sozo/src/commands/migrate.rs (1 hunks)
crates/dojo/core-cairo-test/src/tests/world/metadata.cairo (6 hunks)
crates/dojo/core/src/model/metadata.cairo (1 hunks)
crates/dojo/core/src/world/world_contract.cairo (2 hunks)
crates/dojo/world/Cargo.toml (1 hunks)
crates/dojo/world/src/config/metadata_config.rs (2 hunks)
crates/dojo/world/src/config/mod.rs (1 hunks)
crates/dojo/world/src/config/profile_config.rs (4 hunks)
crates/dojo/world/src/config/resource_config.rs (1 hunks)
crates/dojo/world/src/contracts/abigen/world.rs (8 hunks)
crates/dojo/world/src/diff/compare.rs (3 hunks)
crates/dojo/world/src/diff/mod.rs (3 hunks)
crates/dojo/world/src/diff/resource.rs (1 hunks)
crates/dojo/world/src/metadata/ipfs.rs (1 hunks)
crates/dojo/world/src/metadata/metadata_test.rs (1 hunks)
crates/dojo/world/src/metadata/metadata_test_data/abi.json (0 hunks)
crates/dojo/world/src/metadata/metadata_test_data/source.cairo (0 hunks)
crates/dojo/world/src/metadata/mod.rs (1 hunks)
crates/dojo/world/src/metadata/world.rs (0 hunks)
crates/dojo/world/src/remote/events_to_remote.rs (3 hunks)
crates/dojo/world/src/remote/mod.rs (1 hunks)
crates/dojo/world/src/remote/resource.rs (3 hunks)
crates/dojo/world/src/uri.rs (3 hunks)
crates/sozo/ops/Cargo.toml (1 hunks)
crates/sozo/ops/src/migrate/mod.rs (2 hunks)
crates/sozo/ops/src/tests/migration.rs (4 hunks)
examples/spawn-and-move/dojo_dev.toml (2 hunks)
💤 Files with no reviewable changes (3)
- crates/dojo/world/src/metadata/metadata_test_data/abi.json
- crates/dojo/world/src/metadata/metadata_test_data/source.cairo
- crates/dojo/world/src/metadata/world.rs
🚧 Files skipped from review as they are similar to previous changes (15)
- bin/sozo/src/commands/migrate.rs
- crates/dojo/core-cairo-test/src/tests/world/metadata.cairo
- crates/dojo/core/src/model/metadata.cairo
- crates/dojo/core/src/world/world_contract.cairo
- crates/dojo/world/Cargo.toml
- crates/dojo/world/src/config/mod.rs
- crates/dojo/world/src/config/resource_config.rs
- crates/dojo/world/src/diff/mod.rs
- crates/dojo/world/src/diff/resource.rs
- crates/dojo/world/src/metadata/ipfs.rs
- crates/dojo/world/src/remote/events_to_remote.rs
- crates/dojo/world/src/remote/mod.rs
- crates/dojo/world/src/remote/resource.rs
- crates/sozo/ops/Cargo.toml
- examples/spawn-and-move/dojo_dev.toml
🔇 Additional comments (20)
crates/dojo/world/src/metadata/mod.rs (1)
69-84: 🛠️ Refactor suggestion
Extract common functionality and maintain consistency
Ohayo! There's significant code duplication between WorldMetadata and ResourceMetadata implementations. Also, ResourceMetadata handles only icon_uri while WorldMetadata handles both icon and cover.
Consider extracting common functionality:
async fn upload_file_to_ipfs(
file_path: &Path,
ipfs_client: &IpfsClient
) -> Result<String> {
if !file_path.exists() {
return Err(anyhow!("File not found: {}", file_path.display()));
}
let data = std::fs::read(file_path)?;
let uri = ipfs_client.upload(data).await?;
// Verify upload
let verified_data = ipfs_client.cat(&uri).await?;
if verified_data != data {
return Err(anyhow!("Upload verification failed"));
}
Ok(uri)
}
#[async_trait]
impl MetadataStorage for ResourceMetadata {
async fn upload(&self, ipfs_client: &IpfsClient) -> Result<String> {
let mut meta = self.clone();
if let Some(Uri::File(icon)) = &self.icon_uri {
meta.icon_uri = Some(Uri::Ipfs(upload_file_to_ipfs(icon, ipfs_client).await?));
};
let serialized = json!(meta).to_string();
ipfs_client.upload(serialized).await
}
}
Let's verify if ResourceMetadata should handle cover_uri:
crates/dojo/world/src/config/metadata_config.rs (4)
48-48: Ohayo, sensei! Previous review comment about HashMap hashing is still valid
The current implementation of hashing the `socials` HashMap could produce inconsistent hashes due to non-deterministic key ordering. Please refer to the previous review comment for the solution that involves sorting keys before hashing.
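The key-ordering fix mentioned above can be sketched like this (stdlib only; `hash_socials` is a hypothetical helper, not the actual implementation in `metadata_config.rs`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hash a map deterministically by sorting its keys first, so two maps with
// the same contents always produce the same hash regardless of the
// HashMap's internal iteration order.
fn hash_socials(socials: &HashMap<String, String>) -> u64 {
    let mut entries: Vec<(&String, &String)> = socials.iter().collect();
    entries.sort_by(|a, b| a.0.cmp(b.0));
    let mut hasher = DefaultHasher::new();
    for (key, value) in entries {
        key.hash(&mut hasher);
        value.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    let mut a = HashMap::new();
    a.insert("x".to_string(), "https://x.com/dojo".to_string());
    a.insert("github".to_string(), "https://github.com/dojoengine".to_string());

    let mut b = HashMap::new();
    b.insert("github".to_string(), "https://github.com/dojoengine".to_string());
    b.insert("x".to_string(), "https://x.com/dojo".to_string());

    // Same contents, different insertion order: hashes must match.
    assert_eq!(hash_socials(&a), hash_socials(&b));
    println!("deterministic hash: {}", hash_socials(&a));
}
```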
52-60: Ohayo, sensei! Previous review comment about error handling is still valid
The use of `expect()` for file operations can cause panics. Please refer to the previous review comment for a more graceful error handling solution.
64-81: Ohayo, sensei! Clean implementation of ResourceMetadata
The struct and its From implementation are well-designed with clear documentation and appropriate use of the naming utility.
91-94: Ohayo, sensei! Previous review comment about error handling applies here too
The same file reading error handling issue exists in the ResourceMetadata Hash implementation. Please refer to the previous review comment for the graceful error handling solution.
crates/dojo/world/src/uri.rs (2)
60-68: Ohayo! Clean Display implementation!
The Display implementation is well-crafted:
- Safe handling of non-UTF8 paths using `to_string_lossy()`
- Efficient direct formatting for URLs and IPFS URIs
Line range hint 21-68: Verify integration with metadata handling
Let's ensure the new Hash and Display implementations align with the metadata upload changes mentioned in the PR objectives.
✅ Verification successful
Ohayo sensei! After reviewing the implementation, I can now provide a definitive response about the Hash implementation for Uri.
Hash implementation is correctly aligned with serialization and metadata handling
The Hash implementation for Uri is consistent with its usage in metadata handling and follows best practices:
- The Hash implementation correctly handles all Uri variants (Http, Ipfs, File) by hashing their string representations
- This aligns with the serialization implementation which uses the same string representations
- The hashing is used properly in metadata handling through the `compute_metadata_hash` function
- The implementation is used consistently across the codebase for resource metadata tracking
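The hash-through-string-representation pattern described above can be sketched with a simplified enum (the real `Uri` wraps `url::Url` and `PathBuf` values and prefixes its IPFS/file forms; this stand-in uses plain `String`s):

```rust
use std::collections::hash_map::DefaultHasher;
use std::fmt;
use std::hash::{Hash, Hasher};

// Simplified stand-in for the real Uri enum.
enum Uri {
    Http(String),
    Ipfs(String),
    File(String),
}

impl fmt::Display for Uri {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // In the real type, File paths go through to_string_lossy().
            Uri::Http(s) | Uri::Ipfs(s) | Uri::File(s) => write!(f, "{s}"),
        }
    }
}

// Hash each variant through its string form, so Hash stays consistent with
// Display (and thus with the serialized representation).
impl Hash for Uri {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.to_string().hash(state);
    }
}

fn hash_of(uri: &Uri) -> u64 {
    let mut hasher = DefaultHasher::new();
    uri.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let a = Uri::Ipfs("ipfs://QmExample".to_string());
    let b = Uri::Ipfs("ipfs://QmExample".to_string());
    // Equal URIs hash identically, because both hash the same string form.
    assert_eq!(hash_of(&a), hash_of(&b));
    println!("{a} hashes consistently");
}
```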
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify usage of Uri hashing in metadata-related code
# Search for metadata structs using Uri
rg --type rust "struct.*Metadata.*Uri" -C 5
# Search for metadata upload functions using Uri
rg --type rust "upload.*metadata.*Uri" -C 5
# Search for Hash derivations on metadata structs
ast-grep --pattern 'derive($$$, Hash, $$$)
struct $_Metadata'
Length of output: 155
Script:
#!/bin/bash
# Let's search for Uri usage in the codebase more broadly
# Find Uri struct definition and its usages
rg --type rust "enum Uri" -B 2 -A 5
# Search for metadata-related code that might use Uri
rg --type rust "Metadata" -B 2 -A 5
# Look for Uri serialization/deserialization usage
rg --type rust "Uri::from_string|Uri::serialize" -B 2 -A 5
# Search for tests involving Uri
rg --type rust "test.*Uri" -B 2 -A 5
Length of output: 112875
Script:
#!/bin/bash
# Let's search for the Uri implementation and its usage in metadata handling
# Find Uri implementation and its methods
rg --type rust "impl.*Uri" -B 2 -A 10
# Search for Uri usage in metadata-related code
rg --type rust "Uri.*metadata" -B 2 -A 5
# Look for Uri serialization tests
rg --type rust "test.*Uri.*serialize" -B 2 -A 5
# Search for metadata hash implementations
rg --type rust "metadata_hash" -B 2 -A 5
Length of output: 22749
crates/sozo/ops/src/tests/migration.rs (1)
8-8
: LGTM! New imports align with metadata handling requirements.
Ohayo! The added imports for ResourceConfig
and Felt
are well-chosen for handling metadata and hash comparisons.
Also applies to: 16-16
crates/dojo/world/src/metadata/metadata_test.rs (1)
1-192
: Ohayo! Overall excellent test coverage for the metadata changes, sensei!
The tests effectively validate the new metadata upload behavior, ensuring that:
- Metadata is uploaded only when changed
- Hashes are correctly tracked
- IPFS storage and retrieval work as expected
This aligns perfectly with the PR objectives to fix issues #2096 and #1907.
crates/dojo/world/src/diff/compare.rs (2)
111-111
: Enhance test coverage for metadata changes, sensei!
The existing comment from the previous review is still valid. The tests include metadata_hash but don't verify its impact on comparison results. We need test cases that specifically verify metadata-only changes:
- Different metadata_hash but same class_hash
- Different metadata_hash and different class_hash
- Same metadata_hash but different class_hash
Also applies to: 147-147, 197-197
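The missing cases can be sketched in isolation. This is a minimal, hypothetical model (the `ResourceState` struct and `is_updated` function are invented names, not the crate's actual compare types): a resource should count as updated when either hash differs, so a metadata-only change must be detected.

```rust
// Hypothetical stand-in for the local/remote resource comparison; the real
// code compares richer diff types, but the hash logic is the same idea.
struct ResourceState {
    class_hash: u64,
    metadata_hash: u64,
}

// A resource is "updated" when either its class hash or its metadata hash changed.
fn is_updated(local: &ResourceState, remote: &ResourceState) -> bool {
    local.class_hash != remote.class_hash || local.metadata_hash != remote.metadata_hash
}
```

The three bullet-point cases above map directly onto assertions against a fixed remote state.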
Line range hint 1-215
: Verify metadata_hash field availability
Let's ensure the metadata_hash field exists in the required structures.
crates/dojo/world/src/config/profile_config.rs (3)
12-12
: Ohayo! Import looks good, sensei!

The ResourceConfig import is properly organized with related configuration imports.
The ResourceConfig import is properly organized with related configuration imports.
141-154
: Test data structure looks good, sensei!

The TOML configuration test data is well-structured and covers the new metadata fields appropriately.
The TOML configuration test data is well-structured and covers the new metadata fields appropriately.
21-23
: Verify metadata handling behavior, sensei!

The new fields enable better metadata management, aligning with PR objectives. Let's verify the metadata handling implementation:
The new fields enable better metadata management, aligning with PR objectives. Let's verify the metadata handling implementation:
✅ Verification successful
Based on the search results, I can see comprehensive metadata handling implementation across the codebase, particularly in the world configuration and resource management. The code shows proper metadata structures and upload functionality. Let me verify one more aspect about the actual metadata upload implementation:
Metadata implementation looks solid, sensei! ✨
The codebase shows robust metadata handling with:
- Complete metadata structures for world and resources
- Proper upload functionality with IPFS integration
- Comprehensive test coverage including world metadata, resource metadata, and event handling
- Consistent metadata storage and retrieval across the system
The new fields align perfectly with the existing metadata architecture and follow established patterns for resource configuration.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent metadata handling across the codebase
# Look for metadata-related functions that should handle these new fields
rg -A 5 "metadata" --type rust
# Check for any existing metadata upload implementations
ast-grep --pattern 'fn $upload_metadata($$$) {
$$$
}'
Length of output: 152061
Script:
#!/bin/bash
# Check for metadata upload implementation in world config
ast-grep --pattern 'impl $_ {
$$$
pub async fn upload($$$) {
$$$
}
$$$
}'
# Look for metadata storage/handling tests
rg "test.*metadata" --type rust
Length of output: 590
crates/dojo/world/src/contracts/abigen/world.rs (4)
396-396
: Ohayo! Nice addition of the hash field for metadata change tracking.
The new hash
field in the MetadataUpdate
struct enables efficient tracking of metadata changes, which aligns perfectly with the PR's objective of uploading metadata only when necessary.
1804-1815
: Ohayo! Clean implementation of metadata hash deserialization.
The deserialization logic for the new hash field follows best practices with proper error handling and consistent implementation patterns.
633-633
: Ohayo! Let's verify the metadata hash handling.
The addition of metadata_hash
to ResourceMetadata
is a solid improvement. However, we should verify that this hash is being properly utilized in the metadata update process.
✅ Verification successful
Based on the search results, I can now generate the final response as the metadata hash implementation is well verified:
Ohayo! The metadata hash implementation looks solid, sensei!
The metadata_hash
field is properly integrated across the codebase:
- Used in change detection via
compute_metadata_hash
function
- Properly serialized/deserialized in the Cairo contract interface
- Correctly handled in resource updates through
set_metadata_hash
and
metadata_hash
methods
- Verified by test cases checking hash state transitions
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how metadata_hash is used in metadata updates
# Search for metadata hash comparisons
rg -A 3 "metadata_hash" --type rust
# Look for metadata update implementations
ast-grep --pattern 'fn $update_name($params) {
$$$
metadata_hash
$$$
}'
Length of output: 10499
662-664
: Sensei, let's ensure hash computation consistency.
While the deserialization implementation looks good, we should verify that the hash computation remains consistent across all code paths that update metadata.
✅ Verification successful
Let me generate additional scripts to check for hash computation implementation details.
Ohayo sensei! The hash computation is consistent and well-implemented.
The verification shows that:
- There's a single, centralized
compute_metadata_hash<T>
function in
crates/dojo/world/src/metadata/mod.rs
that handles all metadata hash computations
- It uses Rust's standard
Hash
trait and
DefaultHasher
, ensuring consistent hash computation across all types
- The hash is properly converted to a
Felt
type with
Felt::from_raw([0, 0, 0, hash])
- All metadata hash updates flow through this single implementation
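The centralized helper described above can be sketched with only the standard library. This is a simplified illustration: the real function additionally wraps the resulting `u64` into a `Felt` via `Felt::from_raw([0, 0, 0, hash])`, which is omitted here.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch of the centralized metadata hash helper: any `Hash`-able metadata
// value is fed through `DefaultHasher` to get a u64 fingerprint, which the
// real code then embeds into the low limb of a `Felt`.
fn compute_metadata_hash<T: Hash>(data: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}
```

Equal metadata yields an equal fingerprint, which is the property the "upload only if changed" flow relies on.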
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent hash computation across the codebase
# Look for hash computation implementations
ast-grep --pattern 'fn $compute_name($params) {
$$$
hash
$$$
}'
# Search for direct hash assignments
rg -A 3 "metadata_hash.*=" --type rust
Length of output: 10499
Script:
#!/bin/bash
# Check for hash computation implementation details
# Look for the compute_metadata_hash function implementation
ast-grep --pattern 'fn compute_metadata_hash<$_>($params) {
$$$
}'
# Search for metadata hash computation related code
rg -A 5 "compute_metadata_hash" --type rust
# Look for any hash-related trait implementations
ast-grep --pattern 'impl Hash for $type {
$$$
}'
Length of output: 1011
crates/sozo/ops/src/migrate/mod.rs (2)
23-23
: Ohayo, sensei! Importing anyhow
for enhanced error handling
Good job on importing anyhow::anyhow
; it improves the consistency of error handling throughout the codebase.
109-164
: Ohayo, sensei! Excellent integration of metadata upload functionality
The new upload_metadata
method is well-structured and effectively integrates metadata uploading into the migration process. It enhances the robustness of the system by ensuring metadata consistency.
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (13)
crates/dojo/world/src/metadata/metadata_service.rs (1)
1-4
: Ohayo! The trait definition looks solid, sensei! 🎋

The trait is well-designed with appropriate marker traits (Send + Sync + Unpin), making it safe for concurrent usage. The use of
anyhow::Result
provides flexible error handling, which is crucial for metadata operations that might fail.

Consider documenting the trait with rustdoc comments to explain:
- The purpose of the service
- The thread-safety guarantees
- The relationship with IPFS
crates/dojo/world/src/metadata/fake_metadata_service.rs (3)
1-11
: Ohayo! The imports and struct definition look solid, sensei!

The structure is clean and well-organized. However, it would be helpful to add documentation comments explaining the purpose of this fake service and its role in testing.
Consider adding documentation like this:
```diff
+/// A mock implementation of MetadataService for testing purposes.
+/// Stores metadata in memory using a HashMap instead of making actual IPFS calls.
 pub struct FakeMetadataService {
     data: HashMap<String, Vec<u8>>,
 }
```
13-17
: Constructor looks good, sensei!

Consider adding the
#[must_use]
attribute since this function creates a new value.

```diff
 impl FakeMetadataService {
+    #[must_use]
     pub fn new() -> Self {
         Self { data: HashMap::new() }
     }
 }
```
32-35
: Simplify the get() method implementation, sensei!

The current implementation can be made more idiomatic using
cloned()
.

```diff
 #[cfg(test)]
 async fn get(&self, uri: String) -> Result<Vec<u8>> {
-    Ok(self.data.get(&uri).map(|x| x.clone()).unwrap_or(Vec::<u8>::new()))
+    Ok(self.data.get(&uri).cloned().unwrap_or_default())
 }
```

crates/dojo/world/src/metadata/metadata_test.rs (3)
16-52
: Ohayo! Consider extracting common path handling logic, sensei.

The builder functions are well-structured, but there's repeated path handling logic. Consider creating a helper function:
```rust
fn test_file_path(filename: &str) -> PathBuf {
    fs::canonicalize(
        PathBuf::from_str(&format!("./src/metadata/metadata_test_data/{}", filename)).unwrap(),
    )
    .unwrap()
}
```

Then use it in both builders:

```rust
icon_uri: Some(Uri::File(test_file_path("icon.png"))),
```
66-140
: Consider adding edge cases for more robust testing, sensei.

The test has good coverage of the happy path and basic change detection. However, consider adding:
- Test with invalid/corrupted metadata
- Test with non-existent files
- Test concurrent uploads of the same metadata
Example for invalid metadata test:
```rust
#[tokio::test]
async fn test_world_metadata_invalid() {
    let mut metadata_service = FakeMetadataService::new();

    let mut invalid_metadata = build_world_metadata();
    invalid_metadata.cover_uri = Some(Uri::File(PathBuf::from("/nonexistent/path")));

    let res = invalid_metadata
        .upload_if_changed(&mut metadata_service, Felt::ZERO)
        .await;
    assert!(res.is_err());
}
```
142-197
: Enhance assertions for resource metadata validation, sensei.

While the test structure is solid, consider strengthening the validation by:
- Adding assertions for the exact IPFS URI format (not just the prefix)
- Verifying the content-type of the uploaded icon
- Checking the size limits of the uploaded metadata
Example enhancement:
```rust
fn assert_ipfs_uri_format(uri: &str) {
    assert!(uri.starts_with("ipfs://"));
    let cid = &uri[7..];
    assert_eq!(cid.len(), 46); // IPFS CIDv0 length: "Qm" + 44 base58 characters
    assert!(cid.chars().all(|c| c.is_ascii_alphanumeric())); // base58 is alphanumeric
}
```

crates/sozo/ops/src/migrate/mod.rs (1)
158-158
: Consider enhancing error message

Ohayo! The error message could be more descriptive to help with debugging.
```diff
- invoker.multicall().await.map_err(|e| anyhow!(e.to_string()))?;
+ invoker.multicall().await.map_err(|e| anyhow!("Failed to upload metadata via multicall: {}", e))?;
```
9-11
: Consider Making IPFS Client Configurable

Hardcoding the IPFS client URL may limit flexibility across different environments.
Consider loading
IPFS_CLIENT_URL
from an environment variable or a configuration file to enhance configurability.
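As a rough illustration of that suggestion (the variable name `IPFS_CLIENT_URL` and the default endpoint are taken from the snippet below in this thread; the helper name is made up):

```rust
use std::env;

// Read the IPFS endpoint from the environment, falling back to a default
// when the variable is unset or not valid UTF-8.
fn ipfs_client_url() -> String {
    env::var("IPFS_CLIENT_URL").unwrap_or_else(|_| "https://ipfs.infura.io:5001".to_string())
}
```

This keeps the hardcoded URL as a last-resort default while letting each environment override it.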
18-23
: Enhance Error Handling in
new
Method

Errors during client initialization may lack sufficient context for debugging.
Consider adding more descriptive error messages to improve troubleshootability.
Apply this diff:
```diff
 impl IpfsMetadataService {
     pub fn new() -> Result<Self> {
         dotenv().ok();
         let client_url = env::var("IPFS_CLIENT_URL")
             .unwrap_or_else(|_| "https://ipfs.infura.io:5001".to_string());
         let username = env::var("IPFS_USERNAME")?;
         let password = env::var("IPFS_PASSWORD")?;
-        Ok(Self {
-            client: ipfs_api_backend_hyper::IpfsClient::from_str(&client_url)?
-                .with_credentials(&username, &password),
-        })
+        let client = ipfs_api_backend_hyper::IpfsClient::from_str(&client_url)
+            .map_err(|e| anyhow::anyhow!("Failed to create IPFS client: {}", e))?
+            .with_credentials(&username, &password);
+        Ok(Self { client })
     }
 }
```

crates/dojo/world/src/metadata/metadata.rs (3)
12-19
: Ohayo, sensei! Consider passing
data
by reference in
compute_metadata_hash

Passing
data
by value might lead to unnecessary copies if
T
is a large type. It's more efficient to pass
data
as a reference.

Apply this diff to update the function signature:

```diff
- fn compute_metadata_hash<T>(data: T) -> u64
+ fn compute_metadata_hash<T>(data: &T) -> u64
```

The call site stays unchanged, since `self` is already a reference there:

```rust
let new_hash = compute_metadata_hash(self);
```
50-58
: Ohayo, sensei! Refactor duplicated code for uploading URIs

The code for uploading
icon_uri
and
cover_uri
is duplicated. Extracting a helper function can reduce repetition and improve maintainability.

For example, create a helper function:

```rust
async fn upload_uri(
    uri_option: &Option<Uri>,
    service: &mut impl MetadataService,
) -> Result<Option<Uri>> {
    if let Some(Uri::File(path)) = uri_option {
        let data = std::fs::read(path)?;
        let uploaded_uri = Uri::Ipfs(service.upload(data).await?);
        Ok(Some(uploaded_uri))
    } else {
        Ok(uri_option.clone())
    }
}
```

Then update the
upload
method:

```diff
- if let Some(Uri::File(icon)) = &self.icon_uri {
-     let icon_data = std::fs::read(icon)?;
-     meta.icon_uri = Some(Uri::Ipfs(service.upload(icon_data).await?));
- };
+ meta.icon_uri = upload_uri(&self.icon_uri, service).await?;

- if let Some(Uri::File(cover)) = &self.cover_uri {
-     let cover_data = std::fs::read(cover)?;
-     meta.cover_uri = Some(Uri::Ipfs(service.upload(cover_data).await?));
- };
+ meta.cover_uri = upload_uri(&self.cover_uri, service).await?;
```
68-77
: Ohayo, sensei! Refactor duplicated code in
ResourceMetadata

Similar to
WorldMetadata
, the code for uploading
icon_uri
can be refactored to use the helper function, reducing duplication.

Update the
upload
method in
ResourceMetadata
:

```diff
- if let Some(Uri::File(icon)) = &self.icon_uri {
-     let icon_data = std::fs::read(icon)?;
-     meta.icon_uri = Some(Uri::Ipfs(service.upload(icon_data).await?));
- };
+ meta.icon_uri = upload_uri(&self.icon_uri, service).await?;
```

Note: Ensure the helper function
upload_uri
is accessible to both implementations.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (8)
- bin/sozo/src/commands/migrate.rs (1 hunks)
- crates/dojo/world/src/metadata/fake_metadata_service.rs (1 hunks)
- crates/dojo/world/src/metadata/ipfs_service.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_service.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_test.rs (1 hunks)
- crates/dojo/world/src/metadata/mod.rs (1 hunks)
- crates/sozo/ops/src/migrate/mod.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- bin/sozo/src/commands/migrate.rs
- crates/dojo/world/src/metadata/mod.rs
🔇 Additional comments (5)
crates/dojo/world/src/metadata/metadata_service.rs (1)
5-9
: The method signatures align well with the metadata upload requirements!
The async upload
method will help solve the metadata upload issues after failed deployments (issue #2096). The test-only get
method is appropriately gated behind #[cfg(test)]
.
Let's verify if there are any existing implementations of this trait:
✅ Verification successful
Ohayo sensei! The trait implementations are perfectly aligned with the requirements! ✨
Both IpfsMetadataService
and FakeMetadataService
correctly implement:
- The async
upload
method with the required signature - The test-only
get
method (properly gated with#[cfg(test)]
inIpfsMetadataService
)
The implementations are consistent and ready to handle metadata upload functionality for issue #2096.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find implementations of MetadataService trait
# Expected: Should find at least IpfsMetadataService and test implementations
# Search for struct/class implementations
ast-grep --pattern 'impl $_ for $_ where $$$'
# Search for direct implementations
rg "impl.*MetadataService.*for"
Length of output: 278
Script:
#!/bin/bash
# Let's check the implementations to verify they align with the trait definition
rg -B 5 -A 10 "impl MetadataService for" crates/dojo/world/src/metadata/ipfs_service.rs crates/dojo/world/src/metadata/fake_metadata_service.rs
Length of output: 2645
crates/sozo/ops/src/migrate/mod.rs (3)
23-23
: LGTM! Well-organized imports
The new imports are properly organized and necessary for the metadata handling functionality.
Also applies to: 27-28, 32-32
114-168
: LGTM! Well-structured metadata upload implementation
The implementation follows good practices:
- Checks existing metadata hash before uploading
- Uses batch transactions via multicall for efficiency
- Properly handles different resource types (world, contracts, models, events)
170-197
: LGTM! Efficient change detection implementation
The implementation efficiently handles metadata updates by:
- Computing resource selectors
- Checking for changes before uploading
- Batching calls for efficiency
crates/dojo/world/src/metadata/ipfs_service.rs (1)
35-43
: Ensure get
Method is Only Used in Tests
The get
method is conditionally compiled with #[cfg(test)]
, restricting it to test builds.
This is a good practice to prevent test-only code from being included in production builds.
aeb376f
to
7c34cc3
Compare
Actionable comments posted: 6
🧹 Outside diff range and nitpick comments (10)
crates/dojo/world/src/metadata/ipfs_service.rs (2)
14-19
: Ohayo, sensei! Consider enhancing Debug implementation

The current Debug implementation doesn't provide any useful information. Consider including the client URL for a better debugging experience.

```diff
 impl std::fmt::Debug for IpfsMetadataService {
-    fn fmt(&self, _f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> {
-        Ok(())
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> {
+        write!(f, "IpfsMetadataService {{ client_url: {} }}", self.client.base_url())
     }
 }
```
39-47
: Ohayo, sensei! Improve URI validation in get method

The get method should validate the URI format before processing.

```diff
 #[cfg(test)]
 async fn get(&self, uri: String) -> Result<Vec<u8>> {
+    if !uri.starts_with("ipfs://") {
+        return Err(anyhow::anyhow!("Invalid IPFS URI format: must start with ipfs://"));
+    }
     let res = self
         .client
         .cat(&uri.replace("ipfs://", ""))
         .map_ok(|chunk| chunk.to_vec())
         .try_concat()
-        .await?;
+        .await
+        .map_err(|e| anyhow::anyhow!("Failed to retrieve from IPFS: {}", e))?;
     Ok(res)
 }
```

bin/sozo/src/commands/migrate.rs (1)
84-90
: Excellent implementation sensei, with a minor suggestion

The metadata service integration looks solid and addresses the core objectives of uploading metadata after failed deployments. The error handling is good but could be more descriptive.

Consider enhancing the error context:

```diff
 let mut metadata_service =
-    IpfsMetadataService::new(IPFS_CLIENT_URL, IPFS_USERNAME, IPFS_PASSWORD)?;
+    IpfsMetadataService::new(IPFS_CLIENT_URL, IPFS_USERNAME, IPFS_PASSWORD)
+        .context("Failed to initialize IPFS metadata service")?;
 migration
     .upload_metadata(&mut spinner, &mut metadata_service)
     .await
-    .context("Metadata upload failed.")?;
+    .context("Failed to upload world metadata to IPFS")?;
```

crates/dojo/world/src/metadata/metadata_test.rs (3)
16-21
: Ohayo! Consider making test file paths more robust, sensei.

The hardcoded path
./src/metadata/metadata_test_data/
might break if tests are run from different working directories.

Consider using environment variables or relative paths from the crate root:

```diff
- fs::canonicalize(
-     PathBuf::from_str(&format!("./src/metadata/metadata_test_data/{}", filename)).unwrap(),
- )
+ let test_data_dir = env!("CARGO_MANIFEST_DIR").to_owned() + "/src/metadata/metadata_test_data";
+ fs::canonicalize(PathBuf::from_str(&format!("{}/{}", test_data_dir, filename)).unwrap())
```
58-132
: Enhance test coverage with error cases, sensei.

While the happy path is well tested, consider adding test cases for:
- Invalid file paths
- Malformed metadata
- Service failures
Also, consider extracting magic values:
```diff
+ const HASH_INCREMENT: Felt = Felt::ONE;
- world_metadata.upload_if_changed(&mut metadata_service, current_hash + Felt::ONE).await
+ world_metadata.upload_if_changed(&mut metadata_service, current_hash + HASH_INCREMENT).await
```
1-189
: Excellent alignment with PR objectives, sensei! 🎯

The tests thoroughly validate the core requirement of uploading metadata only when changes are detected:
- Hash-based change detection
- Proper re-upload behavior
- Comprehensive content verification
Consider adding integration tests that simulate the full deployment workflow, including failure scenarios mentioned in issue #2096.
crates/sozo/ops/src/migrate/mod.rs (2)
114-168
: Consider enhancing error handling and progress tracking

Ohayo sensei! The metadata upload implementation looks solid, but could benefit from some improvements:
- Consider adding retry logic for metadata service failures
- Add progress tracking for large uploads (e.g., X/Y resources uploaded)
```diff
 pub async fn upload_metadata(
     &self,
     ui: &mut MigrationUi,
     service: &mut impl MetadataService,
 ) -> anyhow::Result<()> {
     ui.update_text("Uploading metadata...");
+    let total_resources = self.diff.resources.len();
+    let mut uploaded = 0;
     let mut invoker = Invoker::new(&self.world.account, self.txn_config);

     // world
     let current_hash =
         self.diff.resources.get(&Felt::ZERO).map_or(Felt::ZERO, |r| r.metadata_hash());
     let new_metadata = WorldMetadata::from(self.diff.profile_config.world.clone());
-    let res = new_metadata.upload_if_changed(service, current_hash).await?;
+    let res = new_metadata
+        .upload_if_changed(service, current_hash)
+        .await
+        .map_err(|e| anyhow!("Failed to upload world metadata: {}", e))?;
+    uploaded += 1;
+    ui.update_text(&format!("Uploaded {}/{} resources...", uploaded, total_resources));
```
123-154
: Consider architectural optimizations for metadata handling

Ohayo sensei! The current implementation is solid, but here are some architectural suggestions to improve performance and scalability:
- Consider implementing parallel uploads for metadata using
futures::stream::StreamExt
- Add caching layer for frequently accessed metadata to reduce IPFS calls
- Implement batch uploads for related resources
Would you like me to provide an example implementation of these optimizations?
crates/dojo/world/src/metadata/metadata_storage.rs (2)
63-64
: Optimize metadata serialization to improve efficiency

Ohayo, sensei! Instead of using
json!(meta).to_string()
and then converting it to bytes, you can directly serialize
meta
to a byte vector using
serde_json::to_vec(&meta)?
. This approach reduces unnecessary intermediate allocations and improves performance.
75-76
: Optimize metadata serialization to improve efficiency

Ohayo, sensei! Similar to the previous suggestion, consider using
serde_json::to_vec(&meta)?
to serialize
ResourceMetadata
directly to a byte vector. This enhances efficiency by avoiding intermediate string conversions.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (9)
- bin/sozo/src/commands/migrate.rs (3 hunks)
- crates/dojo/world/src/metadata/fake_metadata_service.rs (1 hunks)
- crates/dojo/world/src/metadata/ipfs_service.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_service.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_storage.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_test.rs (1 hunks)
- crates/dojo/world/src/metadata/mod.rs (1 hunks)
- crates/sozo/ops/src/migrate/mod.rs (2 hunks)
- crates/sozo/ops/src/tests/migration.rs (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- crates/dojo/world/src/metadata/fake_metadata_service.rs
- crates/dojo/world/src/metadata/metadata_service.rs
- crates/dojo/world/src/metadata/mod.rs
- crates/sozo/ops/src/tests/migration.rs
🔇 Additional comments (7)
crates/dojo/world/src/metadata/ipfs_service.rs (1)
1-12
: Ohayo, sensei! Clean and well-organized structure!
The imports are minimal and well-organized, and the struct definition follows good practices with clear separation of concerns.
bin/sozo/src/commands/migrate.rs (2)
6-6
: LGTM! Clean import for metadata service
The import is well-placed and properly scoped under the dojo_world module.
87-90
: Verify no duplicate metadata upload implementations
Let's ensure this is the only place handling metadata uploads to avoid potential conflicts.
✅ Verification successful
No duplicate metadata upload implementations found, sensei!
The implementation is clean and well-organized:
- The main
upload_metadata
method in
migrate/mod.rs
is the core implementation
- Other occurrences are either:
- The interface definition in
metadata/mod.rs
- Test implementations in
tests/migration.rs
- The actual service implementation in
ipfs_service.rs
The TODO comment in build.rs
is unrelated to the upload functionality and just refers to migration process timing.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other metadata upload implementations
echo "Searching for other metadata upload implementations..."
rg -l "upload_metadata|IpfsMetadataService" --type rust
# Search for any TODO comments related to metadata
echo "Checking for metadata-related TODOs..."
rg -i "todo.*metadata" --type rust
Length of output: 602
Script:
#!/bin/bash
# Let's check the actual implementation details of metadata uploads
echo "Checking metadata upload implementations in migration module..."
rg "upload_metadata" -A 5 -B 5 crates/sozo/ops/src/migrate/mod.rs
echo "\nChecking metadata upload implementations in IPFS service..."
rg "upload_metadata" -A 5 -B 5 crates/dojo/world/src/metadata/ipfs_service.rs
echo "\nChecking test implementations..."
rg "upload_metadata" -A 5 -B 5 crates/sozo/ops/src/tests/migration.rs
Length of output: 2659
crates/dojo/world/src/metadata/metadata_test.rs (2)
23-56
: Well-structured test helpers, sensei! 🎉
The helper functions are well-designed:
- Clear separation of concerns
- Reusable assertions
- Consistent test data generation
134-189
: Clean and thorough test implementation, sensei! ✨
The test provides good coverage of the resource metadata functionality with:
- Clear state transitions
- Comprehensive assertions
- Proper error handling checks
crates/sozo/ops/src/migrate/mod.rs (1)
178-179
:
Add error handling for selector computation
Ohayo sensei! This issue was previously identified and still needs to be addressed.
Apply this diff as suggested in the past review:
```diff
- let selector = dojo_types::naming::compute_selector_from_tag_or_name(&item.tag);
+ let selector = dojo_types::naming::compute_selector_from_tag_or_name(&item.tag)
+     .map_err(|e| anyhow!("Failed to compute selector for tag {}: {}", item.tag, e))?;
```
crates/dojo/world/src/metadata/metadata_storage.rs (1)
43-44
: Verify correct conversion from u64
hash to Felt
Ohayo, sensei! When converting the u64
hash to a Felt
using Felt::from_raw([0, 0, 0, new_hash])
, only the lower 64 bits are used. If the full 251-bit range of Felt
is required for collision resistance, consider using a hash function that produces a larger output, such as Sha256
, and appropriately mapping it into a Felt
.
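To make the concern concrete, here is a std-only sketch: the `[u64; 4]` limb array stands in for starknet's `Felt` internal representation (a simplification, not the crate's actual layout), and the 32-byte digest would come from a wider hash such as Sha256. A full-width digest can be folded into four limbs and truncated to 251 bits, instead of leaving three limbs permanently zero as the current `Felt::from_raw([0, 0, 0, hash])` does.

```rust
use std::convert::TryInto;

// Fold a 32-byte digest into four big-endian u64 limbs and mask the top
// 5 bits so the result fits the 251-bit Felt range.
fn digest_to_limbs(digest: [u8; 32]) -> [u64; 4] {
    let mut limbs = [0u64; 4];
    for (i, chunk) in digest.chunks(8).enumerate() {
        limbs[i] = u64::from_be_bytes(chunk.try_into().unwrap());
    }
    limbs[0] &= (1u64 << 59) - 1; // keep 59 bits in the high limb: 59 + 3 * 64 = 251
    limbs
}
```

This way all four limbs carry entropy, which improves collision resistance compared to a single 64-bit limb.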
7c34cc3
to
1da4708
Compare
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (10)
crates/dojo/world/src/metadata/ipfs_service.rs (2)
16-21
: Ohayo, sensei! Consider enhancing Debug implementation

The current Debug implementation, while satisfying clippy, could be more helpful for debugging purposes. Consider including the client URL in the debug output.

```diff
 impl std::fmt::Debug for IpfsMetadataService {
-    fn fmt(&self, _f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> {
-        Ok(())
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> {
+        write!(f, "IpfsMetadataService {{ client_url: {} }}", self.client.base_url())
     }
 }
```
51-59
: Ohayo, sensei! Add validation for empty data in upload method

Consider adding validation to prevent uploading empty data to IPFS.

```diff
 async fn upload(&mut self, data: Vec<u8>) -> Result<String> {
+    if data.is_empty() {
+        return Err(anyhow::anyhow!("Cannot upload empty data to IPFS"));
+    }
     let reader = Cursor::new(data);
     let response = self
         .client
```

crates/dojo/world/src/metadata/metadata_storage.rs (3)
39-47
: Consider using streaming for large file uploads

Ohayo, sensei! While the current implementation works well for small files, loading the entire file into memory with
std::fs::read
could be problematic for large files. Consider implementing streaming upload support for handling large files efficiently.

```diff
 async fn upload_uri(uri: &Option<Uri>, service: &mut impl MetadataService) -> Result<Option<Uri>> {
     if let Some(Uri::File(path)) = uri {
-        let data = std::fs::read(path)?;
-        let uploaded_uri = Uri::Ipfs(service.upload(data).await?);
+        let file = tokio::fs::File::open(path).await?;
+        let uploaded_uri = Uri::Ipfs(service.upload_stream(file).await?);
         Ok(Some(uploaded_uri))
     } else {
         Ok(uri.clone())
     }
 }
```
70-87
: Simplify hash comparison logic

Ohayo, sensei! The current implementation converts the hash to a Felt type for comparison, which adds unnecessary complexity. Consider using the native u64 hash value directly for comparison, as it would be more efficient and clearer.

```diff
 async fn upload_if_changed(
     &self,
     service: &mut impl MetadataService,
-    current_hash: Felt,
+    current_hash: u64,
 ) -> Result<Option<(String, u64)>> {
     let new_hash = compute_metadata_hash(self);
-    let new_hash = Felt::from_raw([0, 0, 0, new_hash]);

     if new_hash != current_hash {
         let new_uri = self.upload(service).await?;
         return Ok(Some((new_uri, new_hash)));
     }

     Ok(None)
 }
```
1-113
: Consider adding monitoring capabilities

Ohayo, sensei! To better understand the system's behavior in production, consider adding monitoring capabilities:
- Track metadata upload latency
- Monitor hash computation time
- Count upload successes and failures
- Track file sizes being processed
This would help identify performance bottlenecks and potential issues early.
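A tiny illustration of the success/failure counting part of that list (the names and structure here are invented for the sketch; a real setup would export these counters to a metrics backend and record latencies via `std::time::Instant` as well):

```rust
// Count upload outcomes without consuming the result, so the caller can
// still propagate errors as usual.
#[derive(Default, Debug, PartialEq)]
struct UploadStats {
    successes: u64,
    failures: u64,
}

fn record<T, E>(stats: &mut UploadStats, result: Result<T, E>) -> Result<T, E> {
    match &result {
        Ok(_) => stats.successes += 1,
        Err(_) => stats.failures += 1,
    }
    result
}
```

Wrapping each `service.upload(...)` call in such a helper would surface failure rates without changing the error-propagation flow.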
crates/sozo/ops/src/tests/migration.rs (3)
43-43
: The migration function changes look solid, sensei!

The addition of the
with_metadata
parameter and conditional upload block implements the core functionality needed to address issues #2096 and #1907. The error handling is descriptive and helpful.

Consider adding a doc comment for the
with_metadata
parameter to clarify its purpose:

```diff
 /// Migrates the spawn-and-move project from the local environment.
+///
+/// # Arguments
+/// * `sequencer` - The runner context for the test
+/// * `with_metadata` - When true, uploads world/models/contracts metadata after migration
```

Also applies to: 64-70
77-77
: Test updates maintain existing behavior correctly!

The existing tests have been properly updated to pass
false
for thewith_metadata
parameter, ensuring they maintain their original behavior.Consider adding negative test cases to verify that metadata is not uploaded when
with_metadata
is false:assert!(is_not_set(world_diff.world_info.metadata_hash));Also applies to: 86-89
`91-113`: Excellent helper function implementation, sensei!

The `check_resources` helper is well-structured and provides good reusability for metadata verification.

Consider making the error message more specific by including the expected and actual hash values:

```diff
-        assert!(checker(resource.metadata_hash()), "Bad resource hash: {}", resource.name());
+        assert!(
+            checker(resource.metadata_hash()),
+            "Invalid metadata hash for resource {}: expected {} to be {}",
+            resource.name(),
+            resource.metadata_hash(),
+            if checker == is_set { "non-zero" } else { "zero" }
+        );
```

crates/sozo/ops/src/migrate/mod.rs (2)
`114-168`: Enhance error message in multicall error handling

Ohayo sensei! The implementation looks solid, with good practices for metadata handling and UI feedback. However, the error message in the multicall error handling could be more descriptive.

Consider this enhancement:

```diff
-        invoker.multicall().await.map_err(|e| anyhow!(e.to_string()))?;
+        invoker.multicall().await.map_err(|e| anyhow!("Failed to upload metadata via multicall: {}", e))?;
```
`170-197`: Consider using early returns for cleaner flow

Ohayo sensei! The implementation is solid, but we can make it more idiomatic with early returns.

Consider this refactor:

```diff
 async fn upload_metadata_from_resource_config(
     &self,
     service: &mut impl MetadataService,
     config: &[ResourceConfig],
 ) -> anyhow::Result<Vec<Call>> {
     let mut calls = vec![];

     for item in config {
         let selector = dojo_types::naming::compute_selector_from_tag_or_name(&item.tag);

         let current_hash =
             self.diff.resources.get(&selector).map_or(Felt::ZERO, |r| r.metadata_hash());

         let new_metadata = metadata_config::ResourceMetadata::from(item.clone());

-        let res = new_metadata.upload_if_changed(service, current_hash).await?;
-
-        if let Some((new_uri, new_hash)) = res {
-            calls.push(self.world.set_metadata_getcall(&ResourceMetadata {
-                resource_id: selector,
-                metadata_uri: ByteArray::from_string(&new_uri)?,
-                metadata_hash: new_hash,
-            }));
-        }
+        let Some((new_uri, new_hash)) =
+            new_metadata.upload_if_changed(service, current_hash).await?
+        else {
+            continue;
+        };
+
+        calls.push(self.world.set_metadata_getcall(&ResourceMetadata {
+            resource_id: selector,
+            metadata_uri: ByteArray::from_string(&new_uri)?,
+            metadata_hash: new_hash,
+        }));
     }

     Ok(calls)
 }
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (9)
- bin/sozo/src/commands/migrate.rs (3 hunks)
- crates/dojo/world/src/metadata/fake_metadata_service.rs (1 hunks)
- crates/dojo/world/src/metadata/ipfs_service.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_service.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_storage.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_test.rs (1 hunks)
- crates/dojo/world/src/metadata/mod.rs (1 hunks)
- crates/sozo/ops/src/migrate/mod.rs (2 hunks)
- crates/sozo/ops/src/tests/migration.rs (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- bin/sozo/src/commands/migrate.rs
- crates/dojo/world/src/metadata/fake_metadata_service.rs
- crates/dojo/world/src/metadata/metadata_service.rs
- crates/dojo/world/src/metadata/metadata_test.rs
- crates/dojo/world/src/metadata/mod.rs
🔇 Additional comments (5)
crates/dojo/world/src/metadata/ipfs_service.rs (2)
1-14
: Ohayo, sensei! Clean and well-documented struct definition
The imports are appropriate, and the struct documentation clearly explains its purpose. The minimal design with a single client field follows good practices.
`23-47`: Ohayo, sensei! Robust implementation with proper validation

The `new` method implementation is excellent with:
- Clear error messages
- Well-documented parameters and return values
crates/sozo/ops/src/tests/migration.rs (2)
8-8
: Ohayo! The new imports look good, sensei!
The added imports for ResourceConfig
, FakeMetadataService
, and Felt
align perfectly with the PR's metadata handling objectives.
Also applies to: 11-11, 17-17
115-152
: Comprehensive metadata upload test implementation!
The test thoroughly verifies:
- Initial state with no metadata
- Successful metadata upload after migration
- Proper updates in the ResourceMetadata model
The clear separation between setup, action, and verification makes the test easy to understand.
crates/sozo/ops/src/migrate/mod.rs (1)
23-23
: LGTM! Clean import additions
The new imports are well-organized and properly scoped for the metadata functionality.
Also applies to: 27-28, 32-32
1da4708 to e63a17c Compare
e63a17c to 9ffe3e7 Compare
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (13)
scripts/rust_fmt.sh (1)
`3-3`: Consider adding documentation for the script usage

Ohayo! It would be helpful to add a comment block explaining:
- The script's purpose
- Available options (especially the new --fix flag)
- Example usage
Here's a suggested addition:
```diff
 #!/bin/bash
+# Rust code formatting script
+# Usage:
+#   ./rust_fmt.sh        # Check formatting without making changes
+#   ./rust_fmt.sh --fix  # Apply formatting fixes
+#
+# Additional cargo fmt arguments can be passed after the initial options
+
 option="--check"
```

examples/spawn-and-move/dojo_dev.toml (1)
`6-17`: Ohayo! Consider enhancing model descriptions, sensei!

While the model definitions are clear, they could benefit from more detailed descriptions including:
- Expected fields/properties
- Relationships with other models
- Usage examples
crates/dojo/world/src/config/environment.rs (2)
`13-15`: Ohayo! Consider enhancing security for IPFS credentials, sensei!

While the implementation follows the existing pattern, storing sensitive credentials (username/password) as plain text in a struct that derives Debug could potentially expose them in logs or error messages.
Consider:
- Creating a dedicated type for credentials that implements custom Debug
- Using a secure string type that zeros memory on drop
Here's a suggested approach:
```rust
use secrecy::SecretString;

#[derive(Default, Deserialize, Clone)]
pub struct IpfsCredentials {
    username: Option<String>,
    password: Option<SecretString>,
}

impl Debug for IpfsCredentials {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_struct("IpfsCredentials")
            .field("username", &self.username)
            .field("password", &"[REDACTED]")
            .finish()
    }
}
```
Line range hint `13-59`: Consider adding configuration validation, sensei!

Since this is part of the metadata upload enhancement, it might be helpful to:
- Add a method to validate IPFS configuration completeness (all or none of the IPFS fields should be set)
- Document the IPFS configuration requirements in the struct documentation
Here's a suggested validation method:
```rust
impl Environment {
    pub fn validate_ipfs_config(&self) -> Result<(), &'static str> {
        match (self.ipfs_url(), self.ipfs_username(), self.ipfs_password()) {
            (None, None, None) => Ok(()),          // No IPFS config is valid
            (Some(_), Some(_), Some(_)) => Ok(()), // Complete IPFS config is valid
            _ => Err("Incomplete IPFS configuration: all or none of the IPFS fields must be set"),
        }
    }
}
```

crates/dojo/world/src/metadata/metadata_storage.rs (1)
`91-116`: Reduce code duplication in trait implementations

Ohayo, sensei! The `upload` implementations for `WorldMetadata` and `ResourceMetadata` share significant code. Consider extracting the common logic into a helper function.

```diff
+async fn upload_metadata<T: serde::Serialize>(
+    meta: &mut T,
+    icon_uri: &Option<Uri>,
+    service: &mut impl MetadataService,
+) -> Result<String> {
+    if let Some(new_uri) = upload_uri(icon_uri, service)
+        .await
+        .context("Failed to upload icon URI")?
+    {
+        // Assuming there's a way to set icon_uri on T
+        meta.set_icon_uri(new_uri);
+    }
+
+    let serialized = json!(meta).to_string();
+    service
+        .upload(serialized.as_bytes().to_vec())
+        .await
+        .context("Failed to upload metadata")
+}

 impl MetadataStorage for WorldMetadata {
     async fn upload(&self, service: &mut impl MetadataService) -> Result<String> {
         let mut meta = self.clone();
-        meta.icon_uri =
-            upload_uri(&self.icon_uri, service).await.context("Failed to upload icon URI")?;
         meta.cover_uri =
             upload_uri(&self.cover_uri, service).await.context("Failed to upload cover URI")?;

-        let serialized = json!(meta).to_string();
-        service.upload(serialized.as_bytes().to_vec()).await.context("Failed to upload metadata")
+        upload_metadata(&mut meta, &self.icon_uri, service).await
     }
 }
```

bin/sozo/src/commands/migrate.rs (1)
`84-105`: Enhance error handling and credential validation

Ohayo sensei! While the implementation works, we can make it more robust:
- Replace multiple unwrap() calls with safer alternatives
- Consolidate credential validation
- Provide more specific error messages
Consider this improvement:
```diff
-    let ipfs_url = ipfs.url(profile_config.env.as_ref());
-    let ipfs_username = ipfs.username(profile_config.env.as_ref());
-    let ipfs_password = ipfs.password(profile_config.env.as_ref());
-
-    let mut metadata_upload_text = String::new();
-
-    if ipfs_url.is_some() && ipfs_username.is_some() && ipfs_password.is_some() {
-        let mut metadata_service = IpfsMetadataService::new(
-            &ipfs_url.unwrap(),
-            &ipfs_username.unwrap(),
-            &ipfs_password.unwrap(),
-        )?;
-
-        migration
-            .upload_metadata(&mut spinner, &mut metadata_service)
-            .await
-            .context("Metadata upload failed.")?;
-    } else {
-        metadata_upload_text = "\nMetadata: No IPFS credentials has been found => \
-            metadata upload has been ignored."
-            .to_string();
-    };
+    let mut metadata_upload_text = match (
+        ipfs.url(profile_config.env.as_ref()),
+        ipfs.username(profile_config.env.as_ref()),
+        ipfs.password(profile_config.env.as_ref()),
+    ) {
+        (Some(url), Some(username), Some(password)) => {
+            let mut metadata_service = IpfsMetadataService::new(&url, &username, &password)
+                .context("Failed to initialize IPFS metadata service")?;
+
+            migration
+                .upload_metadata(&mut spinner, &mut metadata_service)
+                .await
+                .context("Failed to upload metadata to IPFS")?;
+            String::new()
+        }
+        _ => "\nMetadata: No IPFS credentials found => metadata upload ignored".to_string(),
+    };
```

crates/sozo/ops/src/tests/migration.rs (1)
`91-113`: Consider adding documentation for the helper function, sensei!

The `check_resources` helper is well-structured with good assertions and error messages. However, it would benefit from documentation explaining:

- The purpose of the `checker` callback
- The expected format of `resources`
- The meaning of `expected_count`

Consider adding a doc comment like this:

```diff
+/// Validates resource metadata in a WorldDiff against expected criteria
+///
+/// # Arguments
+/// * `diff` - The WorldDiff containing resources to validate
+/// * `resources` - Optional vector of ResourceConfig to check
+/// * `expected_count` - Expected number of resources
+/// * `checker` - Callback to validate the metadata hash of each resource
 fn check_resources(
```

bin/sozo/src/commands/options/ipfs.rs (3)
`9-27`: Ohayo! Consider enhancing security for sensitive fields.

While the implementation looks solid, sensei, we should consider adding a security note in the help text for the password field to warn users about command-line visibility.

```diff
     #[arg(long, env = IPFS_PASSWORD_ENV_VAR)]
     #[arg(value_name = "PASSWORD")]
-    #[arg(help = "The IPFS password.")]
+    #[arg(help = "The IPFS password. Note: Prefer using environment variables for passwords to avoid shell history exposure.")]
     #[arg(global = true)]
     pub ipfs_password: Option<String>,
```
`29-74`: Consider refactoring to reduce duplication and enhance security.

The implementation looks good but has two areas for improvement:

- The three methods follow the same pattern and could be refactored to reduce duplication
- The password tracing at line 64 might expose sensitive information in logs

Consider implementing a generic method to handle the common pattern, taking the environment lookup as a closure:

```rust
impl IpfsOptions {
    fn get_value(
        &self,
        cmd_value: &Option<String>,
        env_value: impl Fn(&Environment) -> Option<&str>,
        env: Option<&Environment>,
        field_name: &str,
    ) -> Option<String> {
        trace!("Retrieving {} for IpfsOptions.", field_name);
        if let Some(value) = cmd_value {
            trace!("Using IPFS {} from command line.", field_name);
            Some(value.clone())
        } else if let Some(value) = env.and_then(|e| env_value(e)) {
            trace!("Using IPFS {} from environment metadata.", field_name);
            Some(value.to_string())
        } else {
            trace!("No default IPFS {}.", field_name);
            None
        }
    }
}
```

Also, consider masking the password in trace logs:

```diff
-        trace!(?password, "Using IPFS password from command line.");
+        trace!("Using IPFS password from command line. [REDACTED]");
```
`76-170`: Excellent test coverage, sensei! Consider adding edge cases.

The test suite is comprehensive and well-structured. To make it even more robust, consider adding tests for:
- Invalid URL formats
- Empty string values
- Unicode characters in username/password
Example test case:
```rust
#[test]
fn handles_invalid_url_format() {
    let cmd = Command::parse_from([
        "sozo",
        "--ipfs-url",
        "not_a_valid_url",
    ]);
    assert!(cmd.options.url(None).is_none());
}
```

crates/sozo/ops/src/migrate/mod.rs (2)
`114-168`: Consider adding progress tracking and retry mechanism for robustness

Ohayo sensei! The metadata upload implementation looks solid, but could benefit from these enhancements:
- Progress tracking for large uploads (e.g., number of successful/failed uploads)
- Retry mechanism for failed uploads to handle temporary network issues
```diff
 pub async fn upload_metadata(
     &self,
     ui: &mut MigrationUi,
     service: &mut impl MetadataService,
 ) -> anyhow::Result<()> {
     ui.update_text("Uploading metadata...");
+    let mut success_count = 0;
+    let mut retry_count = 0;
+    const MAX_RETRIES: u32 = 3;

     let mut invoker = Invoker::new(&self.world.account, self.txn_config);

     // world
     let current_hash =
         self.diff.resources.get(&Felt::ZERO).map_or(Felt::ZERO, |r| r.metadata_hash());
     let new_metadata = WorldMetadata::from(self.diff.profile_config.world.clone());

-    let res = new_metadata.upload_if_changed(service, current_hash).await?;
+    let res = retry_with_backoff(MAX_RETRIES, || {
+        retry_count += 1;
+        new_metadata.upload_if_changed(service, current_hash)
+    }).await?;

     if let Some((new_uri, new_hash)) = res {
         invoker.add_call(self.world.set_metadata_getcall(&ResourceMetadata {
@@ -133,6 +137,7 @@
             metadata_uri: ByteArray::from_string(&new_uri)?,
             metadata_hash: new_hash,
         }));
+        success_count += 1;
     }
```
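The `retry_with_backoff` helper assumed above is not defined in the crate; a hedged synchronous sketch of what it could look like (the migration code itself would need an async variant):

```rust
use std::thread::sleep;
use std::time::Duration;

// Illustrative retry helper: retries a fallible operation up to
// `max_retries` attempts total, sleeping with exponential backoff
// (10ms, 20ms, 40ms, ...) between attempts.
pub fn retry_with_backoff<T, E>(
    max_retries: u32,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0u32;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempt + 1 >= max_retries => return Err(e),
            Err(_) => {
                attempt += 1;
                sleep(Duration::from_millis(10u64 << (attempt - 1)));
            }
        }
    }
}
```

An async version would typically use `tokio::time::sleep` and await the operation instead of calling it directly.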
`170-197`: Consider batch processing optimization for resource configs

Ohayo sensei! The current implementation processes each resource config sequentially. For large configurations, consider implementing batch processing to improve performance.

```diff
 async fn upload_metadata_from_resource_config(
     &self,
     service: &mut impl MetadataService,
     config: &[ResourceConfig],
 ) -> anyhow::Result<Vec<Call>> {
     let mut calls = vec![];
+    const BATCH_SIZE: usize = 50;

-    for item in config {
-        let selector = dojo_types::naming::compute_selector_from_tag_or_name(&item.tag);
-
-        let current_hash =
-            self.diff.resources.get(&selector).map_or(Felt::ZERO, |r| r.metadata_hash());
-
-        let new_metadata = metadata_config::ResourceMetadata::from(item.clone());
-
-        let res = new_metadata.upload_if_changed(service, current_hash).await?;
-
-        if let Some((new_uri, new_hash)) = res {
-            calls.push(self.world.set_metadata_getcall(&ResourceMetadata {
-                resource_id: selector,
-                metadata_uri: ByteArray::from_string(&new_uri)?,
-                metadata_hash: new_hash,
-            }));
-        }
+    for batch in config.chunks(BATCH_SIZE) {
+        let futures: Vec<_> = batch.iter().map(|item| async {
+            let selector = dojo_types::naming::compute_selector_from_tag_or_name(&item.tag);
+
+            let current_hash =
+                self.diff.resources.get(&selector).map_or(Felt::ZERO, |r| r.metadata_hash());
+
+            let new_metadata = metadata_config::ResourceMetadata::from(item.clone());
+            let res = new_metadata.upload_if_changed(service, current_hash).await?;
+
+            Ok::<Option<Call>, anyhow::Error>(if let Some((new_uri, new_hash)) = res {
+                Some(self.world.set_metadata_getcall(&ResourceMetadata {
+                    resource_id: selector,
+                    metadata_uri: ByteArray::from_string(&new_uri)?,
+                    metadata_hash: new_hash,
+                }))
+            } else {
+                None
+            })
+        }).collect();
+
+        let results = futures::future::join_all(futures).await;
+        for result in results {
+            if let Some(call) = result? {
+                calls.push(call);
+            }
+        }
     }

     Ok(calls)
 }
```

examples/spawn-and-move/manifest_dev.json (1)
`95-98`: Ohayo! Excellent addition of metadata hash tracking.

The new `metadata_hash` field in ResourceMetadata and corresponding `hash` field in the MetadataUpdate event directly support the PR's objective of improved metadata handling. This will enable detecting when metadata content has actually changed, ensuring uploads happen only when necessary.

This change provides a robust foundation for:
- Detecting actual changes in metadata content
- Preventing unnecessary uploads
- Tracking metadata versions
Also applies to: 1009-1013
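The change-detection idea behind `metadata_hash` can be sketched in a few lines; this minimal sketch uses std's `DefaultHasher` purely for illustration (the actual crate computes its own hash over the serialized metadata):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash the serialized metadata so uploads can be skipped when unchanged.
fn metadata_hash(serialized: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    serialized.hash(&mut hasher);
    hasher.finish()
}

// Upload is needed only when the new content hashes differently
// from the hash stored on-chain.
fn needs_upload(new_serialized: &str, stored_hash: u64) -> bool {
    metadata_hash(new_serialized) != stored_hash
}
```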
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (2)
- `spawn-and-move-db.tar.gz` is excluded by `!**/*.gz`
- `types-test-db.tar.gz` is excluded by `!**/*.gz`
📒 Files selected for processing (17)
- bin/sozo/src/commands/dev.rs (2 hunks)
- bin/sozo/src/commands/migrate.rs (6 hunks)
- bin/sozo/src/commands/options/ipfs.rs (1 hunks)
- bin/sozo/src/commands/options/mod.rs (1 hunks)
- crates/dojo/utils/src/env.rs (1 hunks)
- crates/dojo/world/src/config/environment.rs (2 hunks)
- crates/dojo/world/src/metadata/fake_metadata_service.rs (1 hunks)
- crates/dojo/world/src/metadata/ipfs_service.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_service.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_storage.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_test.rs (1 hunks)
- crates/dojo/world/src/metadata/mod.rs (1 hunks)
- crates/sozo/ops/src/migrate/mod.rs (2 hunks)
- crates/sozo/ops/src/tests/migration.rs (4 hunks)
- examples/spawn-and-move/dojo_dev.toml (2 hunks)
- examples/spawn-and-move/manifest_dev.json (8 hunks)
- scripts/rust_fmt.sh (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- crates/dojo/world/src/metadata/fake_metadata_service.rs
- crates/dojo/world/src/metadata/ipfs_service.rs
- crates/dojo/world/src/metadata/metadata_service.rs
- crates/dojo/world/src/metadata/metadata_test.rs
- crates/dojo/world/src/metadata/mod.rs
🔇 Additional comments (20)
bin/sozo/src/commands/options/mod.rs (1)
2-2
: Ohayo! The IPFS module declaration looks good, sensei!
The addition of the IPFS module aligns well with the PR objectives for improving metadata handling. The module is correctly declared and properly placed in alphabetical order.
Let's verify the IPFS module implementation:
✅ Verification successful
Ohayo! The IPFS module implementation looks solid, sensei!
The IPFS module has been properly implemented with:
- A well-structured `IpfsOptions` struct with URL, username, and password fields
- Comprehensive environment variable support via `clap::Args`
- Clear helper methods for retrieving configuration from both CLI and environment
- Thorough test coverage including all configuration scenarios
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the IPFS module structure and integration
# Check for the IPFS module file and its contents
fd "ipfs.rs" --type f --exec cat {} \;
# Look for any IPFS-related structs and their usage
ast-grep --pattern 'struct $name {
$$$
}'
Length of output: 6289
scripts/rust_fmt.sh (1)
4-11
: Ohayo! Clean and effective implementation, sensei!
The changes introduce a flexible way to toggle between check and fix modes while maintaining backward compatibility. The implementation follows shell scripting best practices with proper argument handling.
crates/dojo/utils/src/env.rs (1)
7-9
: Ohayo! The new IPFS environment variables look well-structured, sensei!
The naming convention is consistent with existing patterns, and these additions align perfectly with the metadata upload improvements.
examples/spawn-and-move/dojo_dev.toml (2)
18-25
: LGTM! Event and contract definitions are well-structured.
The event and contract definitions align well with the models and follow consistent naming conventions.
34-34
: Verify world address update impact
The world address has been updated. Please ensure this change is intentional and won't break existing deployments.
✅ Verification successful
World address change is isolated to spawn-and-move example
Ohayo sensei! The world address change in examples/spawn-and-move/dojo_dev.toml
is safe. This address is only referenced in the spawn-and-move example's configuration files (dojo_dev.toml
and manifest_dev.json
). Each example and component in the codebase maintains its own unique world address, so this change won't affect other deployments.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any other files referencing the old world address
rg "0x70058e3886cb7411e8a77db90ee3dd453ac16b763b30bd99b3c8440fe42056e"
Length of output: 70
Script:
#!/bin/bash
# Let's search for any references to world addresses in the codebase
rg -i "world_address.*=.*0x[a-fA-F0-9]+" --type toml
# Also search for any references to the new world address
rg "0x52ee4d3cba58d1a0462bbfb6813bf5aa1b35078c3b859cded2b727c1d9469ea"
# Check if there are any other TOML config files
fd -e toml
Length of output: 3984
crates/dojo/world/src/config/environment.rs (1)
49-59
: LGTM! Clean and consistent implementation, sensei!
The new accessor methods follow the established pattern perfectly, using as_deref()
for safe string slice conversion. The implementation is clean and maintains consistency with the existing codebase.
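A tiny illustration of the `as_deref()` pattern noted above, with simplified stand-in names (not the crate's actual struct): it turns `Option<String>` into `Option<&str>` without cloning.

```rust
// Simplified stand-in for the Environment accessors.
struct Environment {
    ipfs_url: Option<String>,
}

impl Environment {
    // `as_deref()` borrows the inner String as &str when present.
    fn ipfs_url(&self) -> Option<&str> {
        self.ipfs_url.as_deref()
    }
}
```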
bin/sozo/src/commands/migrate.rs (3)
6-6
: LGTM! Clean import additions
Ohayo! The new imports for IPFS functionality are well-organized and properly scoped.
Also applies to: 18-18
37-39
: LGTM! Clean struct extension
The IpfsOptions integration is properly flattened into the command structure.
113-127
: LGTM! Clear and informative output messages
The status messages clearly indicate both the migration result and metadata upload status.
crates/sozo/ops/src/tests/migration.rs (4)
8-8
: Ohayo! The new imports look good, sensei!
The added imports for ResourceConfig
, FakeMetadataService
, and Felt
are well-organized and necessary for the new metadata functionality.
Also applies to: 11-11, 17-17
43-43
: Clean implementation of optional metadata upload, sensei!
The addition of the with_metadata
parameter and conditional metadata upload block aligns perfectly with the PR objectives. The error handling is descriptive and the separation of concerns is well-maintained.
Also applies to: 64-71
77-77
: Test modifications maintain backward compatibility!
The updates to migrate_from_local
and migrate_no_change
properly handle the new with_metadata
parameter while preserving their original test objectives.
Also applies to: 86-86
115-152
: Excellent test coverage for metadata upload functionality!
The test is well-structured with:
- Clear verification of initial state (no metadata)
- Proper execution of migration with metadata
- Comprehensive validation of metadata updates
- Good comments explaining the test flow and scope
The note about IPFS upload testing being handled elsewhere shows good awareness of test boundaries.
bin/sozo/src/commands/dev.rs (2)
19-19
: Ohayo! LGTM - Import statement is well-placed.
The import aligns with the PR's objective of enhancing metadata handling capabilities.
91-91
: Verify the intentional disabling of IPFS metadata upload in dev mode
Ohayo sensei! I notice we're setting IpfsOptions::default()
with a comment indicating IPFS metadata upload isn't needed here. However, given the PR's objective to "upload world/models/contracts metadata only if changed", should we consider enabling IPFS uploads in dev mode to catch metadata-related issues early?
Let's check how metadata uploads are handled in other modes:
✅ Verification successful
Let's analyze how MigrateArgs is structured and how metadata/IPFS handling is implemented across the codebase to better understand if this default behavior is correct.
Disabling IPFS metadata upload in dev mode is intentional and correct
Ohayo sensei! After analyzing the codebase, I can confirm that disabling IPFS metadata upload in dev mode is the correct behavior because:
- The dev mode is a special mode that runs migrations for local development and testing
- The metadata upload is properly handled in the regular migrate command, where:
- It checks for IPFS credentials (url, username, password)
- Only uploads if credentials are available
- Provides feedback about skipped uploads when credentials are missing
The comment "no need for IPFS metadata upload here" is accurate and aligns with the development workflow where metadata uploads are unnecessary for local testing.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how IPFS options are handled in other commands
# Expected: Find differences in IPFS handling between dev and other modes
# Search for IpfsOptions usage patterns
rg -A 5 "IpfsOptions::" bin/sozo/src/commands/
# Search for metadata upload logic
ast-grep --pattern 'upload_metadata($$$)'
Length of output: 426
Script:
#!/bin/bash
# Let's check the MigrateArgs struct definition and its usage
ast-grep --pattern 'struct MigrateArgs {
$$$
}'
# Check for any IPFS-related configuration or handling
rg -B 2 -A 5 "ipfs" bin/sozo/src/
# Look for metadata upload implementations
rg -B 2 -A 5 "metadata" bin/sozo/src/
Length of output: 58881
bin/sozo/src/commands/options/ipfs.rs (1)
1-170
: Verify integration with metadata upload functionality.
The implementation looks solid for handling IPFS configuration. Let's verify how it integrates with the metadata upload functionality mentioned in the PR objectives.
✅ Verification successful
Ohayo sensei! After reviewing the implementation, I can now provide a definitive response:
IPFS integration is properly implemented ✨
The code in migrate.rs
shows that the IPFS options are correctly used for metadata upload:
- IPFS credentials are properly checked before attempting upload
- The
IpfsMetadataService
is initialized with the provided credentials - Graceful fallback with informative message when IPFS credentials are missing
- The implementation follows the command pattern and integrates well with the migration workflow
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for metadata upload implementation that uses these IPFS options
rg -l "IpfsOptions" | xargs rg "metadata.*upload"
# Look for any migration-related code that uses these options
ast-grep --pattern 'fn $_($$_: &IpfsOptions, $$$) {
$$$
}'
Length of output: 800
Script:
#!/bin/bash
# Let's check the full implementation of migrate command to understand the IPFS integration
rg -A 20 "pub fn migrate" bin/sozo/src/commands/migrate.rs
# Also check if there are any IPFS-related utility functions
rg -l "ipfs" | xargs rg "fn.*ipfs"
Length of output: 713
Script:
#!/bin/bash
# Let's check the full migrate command implementation to see how IPFS options are used
cat bin/sozo/src/commands/migrate.rs
# Also check the metadata service implementation
rg -l "MetadataService" | xargs rg -A 10 "struct MetadataService"
Length of output: 5365
crates/sozo/ops/src/migrate/mod.rs (1)
178-179
: Add error handling for selector computation
This issue was previously identified. The selector computation needs proper error handling to prevent silent failures.
examples/spawn-and-move/manifest_dev.json (3)
3-4
: LGTM! World configuration updates look valid.
The new class hash and address values are properly formatted and maintain the expected structure.
1255-1256
: Contract deployment updates look valid, sensei!
The updated contract addresses and class hashes maintain proper formatting and structure.
Also applies to: 1596-1596, 1790-1790, 1966-1966
2190-2190
: Event configuration updates are properly structured.
The new class hashes for the events maintain the expected format.
Also applies to: 2196-2196
9ffe3e7 to bb6067b Compare
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
bin/sozo/tests/test_data/policies.json (1)
`47-52`: Essential metadata methods align with PR objectives!

The `uuid` and `set_metadata` methods are crucial for the PR's goal of uploading world metadata only when changed. These methods will help track and update metadata states appropriately.

Consider implementing a metadata versioning mechanism to track changes effectively.
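One hypothetical shape for such a versioning mechanism (all names here are illustrative, not part of the codebase): pair the stored content hash with a version counter so updates can be detected and ordered.

```rust
// Hypothetical metadata record: a version counter alongside the content hash.
#[derive(Clone, Copy, PartialEq, Debug)]
struct MetadataRecord {
    version: u64,
    hash: u64,
}

impl MetadataRecord {
    // Returns the next record if the content hash changed, None otherwise.
    fn bump_if_changed(self, new_hash: u64) -> Option<MetadataRecord> {
        if new_hash != self.hash {
            Some(MetadataRecord { version: self.version + 1, hash: new_hash })
        } else {
            None
        }
    }
}
```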
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (2)
bin/sozo/tests/test_data/policies.json
(1 hunks)scripts/rust_fmt.sh
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- scripts/rust_fmt.sh
🔇 Additional comments (6)
bin/sozo/tests/test_data/policies.json (6)
3-12
: Ohayo! Core system upgrade policies look good, sensei!
The upgrade methods for core system components are properly configured with their respective target addresses.
15-44
: Player-related operations are well-structured!
All player-related operations (spawn, move, config management, etc.) are consistently mapped to the same target address, following good practice for cohesive functionality grouping.
55-84
: Registration and upgrade operations are properly segregated!
The policy configuration clearly separates registration operations (namespace, event, model, contract) from their corresponding upgrade operations, providing good operational clarity.
87-108
: Entity management operations follow consistent patterns!
The entity-related operations (emit, set, delete) are implemented in both singular and plural forms, providing flexible granularity for entity management.
111-124
: Access control policies are well-defined!
The owner and writer permission management methods are properly configured, ensuring secure access control for metadata operations.
✅ Verification successful
The initial search revealed the grant/revoke functions, but we need to verify the permission checks and error handling to ensure they're properly implemented. This script will help us understand the complete access control mechanism.
Ohayo sensei! After analyzing the codebase, I can confidently verify the access control implementation.
Access control implementation is robust and secure! ✨
The implementation includes:
- Proper permission checks through
assert_caller_permissions
- Clear separation between Owner and Writer permissions
- Comprehensive test coverage for all access control scenarios
- Protection against malicious contracts and unauthorized access
- Resource-based permission model with proper validation
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify access control implementation
# Search for access control checks in the codebase
echo "Checking access control implementations..."
rg -A 5 "fn (grant|revoke)_(owner|writer)"
Length of output: 8897
Script:
#!/bin/bash
# Let's check for permission checks and error handling
# Check for permission-related error handling
echo "Checking permission-related error handling..."
rg -A 5 "assert_caller_permissions"
# Check for permission enums/types
echo "Checking permission types..."
rg -A 5 "enum.*Permission"
# Check for permission checks in tests
echo "Checking permission-related tests..."
rg -A 10 "test.*permission|test.*(owner|writer)"
Length of output: 52818
47-124
: Verify the comprehensive world metadata operations, sensei!
The new target address (0x52ee...) consolidates all world metadata and entity management operations, which aligns well with the PR's objective of improving metadata handling. However, we should verify that all these methods are actually implemented in the contract.
✅ Verification successful
Ohayo! All world metadata operations are properly implemented, sensei!
The verification shows that all methods listed in the policies.json are fully implemented in the world contract. Specifically:
- Core metadata methods: `uuid`, `set_metadata`, `register_namespace`, `register_event`, `register_model`, `register_contract`, `init_contract`
- Upgrade operations: `upgrade_event`, `upgrade_model`, `upgrade_contract`
- Entity management: `set_entity`, `set_entities`, `delete_entity`, `delete_entities`
- Event handling: `emit_event`, `emit_events`
- Access control: `grant_owner`, `revoke_owner`, `grant_writer`, `revoke_writer`

All implementations are found in `crates/dojo/core/src/world/world_contract.cairo` and properly exposed through the interface in `crates/dojo/core/src/world/iworld.cairo`.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all methods exist in the contract at the specified target address
# Get all methods from the target contract
echo "Checking methods for target: 0x52ee4d3cba58d1a0462bbfb6813bf5aa1b35078c3b859cded2b727c1d9469ea"
# Search for method declarations in the codebase
methods=(
"uuid"
"set_metadata"
"register_namespace"
"register_event"
"register_model"
"register_contract"
"init_contract"
"upgrade_event"
"upgrade_model"
"upgrade_contract"
"emit_event"
"emit_events"
"set_entity"
"set_entities"
"delete_entity"
"delete_entities"
"grant_owner"
"revoke_owner"
"grant_writer"
"revoke_writer"
)
for method in "${methods[@]}"; do
echo "Searching for method: $method"
ast-grep --pattern "fn $method"
done
Length of output: 8897
Script:
#!/bin/bash
# Let's check for the remaining methods in the world contract
echo "Ohayo! Checking world contract implementations..."
# Search for metadata and registration methods
rg -A 5 'fn (uuid|set_metadata|register_(namespace|event|model|contract)|init_contract)'
# Search for upgrade methods
echo -e "\nChecking upgrade methods..."
rg -A 5 'fn upgrade_(event|model|contract)'
# Search for entity and event methods
echo -e "\nChecking entity and event methods..."
rg -A 5 'fn (emit_event|emit_events|set_entity|set_entities|delete_entity|delete_entities)'
Length of output: 41160
Update the Metadata page according to dojoengine/dojo#2691.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Looking very good @remybar!
Some comments from the first pass of the review, I may drop more comments tomorrow after the functional review. 👍
Actionable comments posted: 6
🧹 Outside diff range and nitpick comments (23)
crates/dojo/world/src/config/mod.rs (1)
3-3
: Consider enhancing IPFS integration resilience, sensei!

Since this configuration will be used for metadata uploads, consider implementing:
- Retry mechanisms for failed IPFS uploads
- Local caching of metadata to prevent redundant uploads
- Circuit breaker pattern for IPFS service failures
Would you like me to provide example implementations for any of these patterns?
Also applies to: 12-12
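The retry suggestion above can be sketched as a small wrapper. This is a minimal sketch only, using a synchronous closure as a stand-in for the async IPFS call; the real service would wrap its `upload` future instead:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry a fallible upload-like operation with exponential backoff.
/// `op` stands in for the actual IPFS upload call.
fn upload_with_retry<F>(mut op: F, max_attempts: u32) -> Result<String, String>
where
    F: FnMut() -> Result<String, String>,
{
    let mut delay = Duration::from_millis(10);
    let mut last_err = String::from("no attempts made");
    for attempt in 1..=max_attempts {
        match op() {
            Ok(uri) => return Ok(uri),
            Err(e) => {
                last_err = e;
                if attempt < max_attempts {
                    sleep(delay);
                    delay *= 2; // exponential backoff between attempts
                }
            }
        }
    }
    Err(format!("upload failed after {max_attempts} attempts: {last_err}"))
}
```

A circuit breaker would extend this by tracking consecutive failures and short-circuiting calls once a threshold is reached, instead of retrying forever.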
crates/dojo/world/src/lib.rs (1)
12-12
: The services module addition perfectly aligns with our objectives, sensei!

The new services module is a crucial addition that supports the PR's goal of improving metadata upload functionality. This modular approach will help manage IPFS-related operations more effectively, especially for handling world metadata uploads after failed deployments.
Consider documenting the service interfaces in the module-level documentation to help other developers understand the available functionality and integration points.
crates/dojo/world/src/config/ipfs_config.rs (1)
1-22
: Consider these architectural improvements
- Environment Variables: Consider supporting configuration through environment variables for secure deployment practices.
- Secrets Management: Consider integrating with a secrets management solution for production deployments.
- Testing: Add unit tests covering validation edge cases and integration tests with mock IPFS endpoints.
Would you like me to help with:
- Adding environment variable support using the `envy` crate?
- Creating a comprehensive test suite?
- Adding documentation about secure configuration practices?
crates/dojo/world/src/services/upload_service.rs (2)
1-6
: Ohayo! The trait design looks solid, sensei! A few suggestions for consideration.

The trait is well-designed with proper marker traits (Send + Sync + Unpin) which are essential for async operations. The use of `#[allow(async_fn_in_trait)]` is necessary as async functions in traits are still unstable in Rust.

Consider using a custom error type instead of `anyhow::Result` for more specific error handling:

-use anyhow::Result;
+use thiserror::Error;
+
+#[derive(Error, Debug)]
+pub enum UploadError {
+    #[error("Upload failed: {0}")]
+    UploadFailed(String),
+    #[error("Invalid URI: {0}")]
+    InvalidUri(String),
+}
+
+pub type Result<T> = std::result::Result<T, UploadError>;
7-15
: The upload method signature might be too restrictive, sensei.

The method requires `&mut self`, which could limit concurrent uploads. Consider whether mutability is truly needed for your implementations.

-async fn upload(&mut self, data: Vec<u8>) -> Result<String>;
+async fn upload(&self, data: Vec<u8>) -> Result<String>;

crates/dojo/world/src/services/mock_upload_service.rs (2)
8-14
: Consider adding a usage example in the documentation, sensei!

While the documentation clearly explains the purpose, a code example would make it even more helpful for other developers.
Here's a suggested addition to the documentation:
 /// Mock implementation of UploadService to be used for tests only.
 /// It just stores uri and data in a HashMap when `upload` is called,
 /// and returns these data when `get` is called.
+///
+/// # Example
+/// ```
+/// # use your_crate::MockUploadService;
+/// # use your_crate::UploadService;
+/// # async fn example() -> anyhow::Result<()> {
+/// let mut upload_service = MockUploadService::default();
+/// let data = b"Hello, World!".to_vec();
+/// let uri = upload_service.upload(data.clone()).await?;
+/// let retrieved = upload_service.get(uri).await?;
+/// assert_eq!(data, retrieved);
+/// # Ok(())
+/// # }
+/// ```
18-27
: Consider using a more robust hashing mechanism, sensei!
`DefaultHasher` is not cryptographically secure and could potentially lead to collisions. For a more robust implementation, consider using SHA-256 or another cryptographic hash function.

Here's a suggested improvement:

+use sha2::{Sha256, Digest};
+
 async fn upload(&mut self, data: Vec<u8>) -> Result<String> {
-    let mut hasher = DefaultHasher::new();
-    data.hash(&mut hasher);
-    let hash = hasher.finish();
+    let mut hasher = Sha256::new();
+    hasher.update(&data);
+    let hash = hasher.finalize();

-    let uri = format!("ipfs://{:x}", hash);
+    let uri = format!("ipfs://{:x}", hash);

     self.data.insert(uri.clone(), data);

crates/dojo/world/src/config/environment.rs (1)
15-15
: Ohayo! Consider adding a getter method for consistency

While the `ipfs_config` field addition looks good, other fields in this struct have corresponding getter methods. For consistency with the existing pattern, consider adding a getter method.

 impl Environment {
+    pub fn ipfs_config(&self) -> Option<&IpfsConfig> {
+        self.ipfs_config.as_ref()
+    }
     // existing methods...
 }

bin/sozo/src/commands/options/ipfs.rs (2)
12-13
: Enhance help messages with more details

The help messages could be more descriptive to guide users better:
- IPFS URL: Add expected format and default port
- Username: Mention if this is the IPFS service credentials
- Password: Add security recommendations
-    #[arg(help = "The IPFS URL.")]
+    #[arg(help = "The IPFS service URL (e.g., http://localhost:5001)")]

-    #[arg(help = "The IPFS username.")]
+    #[arg(help = "Username for authenticating with the IPFS service")]

-    #[arg(help = "The IPFS password.")]
+    #[arg(help = "Password for IPFS service (prefer using IPFS_PASSWORD_ENV_VAR environment variable)")]

Also applies to: 18-19, 24-25
30-42
: Improve error handling and reduce cloning

Ohayo sensei! A few suggestions to enhance this implementation:
- Add detailed error messages about missing fields
- Avoid unnecessary cloning of Option
 pub fn config(&self) -> Option<IpfsConfig> {
     trace!("Retrieving IPFS config for IpfsOptions.");
-    let url = self.ipfs_url.as_ref().map(|url| url.to_string());
-    let username = self.ipfs_username.clone();
-    let password = self.ipfs_password.clone();
+    // Early returns with helpful messages
+    let url = self.ipfs_url.as_ref().map(|url| url.to_string()).ok_or_else(|| {
+        trace!("Missing IPFS URL");
+        ()
+    }).ok()?;
+
+    let username = self.ipfs_username.as_ref().map(String::as_str).ok_or_else(|| {
+        trace!("Missing IPFS username");
+        ()
+    }).ok()?;
+
+    let password = self.ipfs_password.as_ref().map(String::as_str).ok_or_else(|| {
+        trace!("Missing IPFS password");
+        ()
+    }).ok()?;

-    if let (Some(url), Some(username), Some(password)) = (url, username, password) {
-        Some(IpfsConfig { url, username, password })
-    } else {
-        None
-    }
+    Some(IpfsConfig {
+        url,
+        username: username.to_string(),
+        password: password.to_string(),
+    })
 }

crates/dojo/world/src/metadata/metadata_storage.rs (4)
39-47
: Consider using custom error types for better error handling

Ohayo, sensei! The current error handling could be improved by using custom error types instead of using the generic `Result` type. This would make it easier to handle specific error cases and provide better error messages.

#[derive(Debug, thiserror::Error)]
pub enum MetadataError {
    #[error("Failed to read file: {0}")]
    FileReadError(#[from] std::io::Error),
    #[error("Failed to upload to IPFS: {0}")]
    UploadError(String),
}

async fn upload_uri(uri: &Option<Uri>, service: &mut impl UploadService) -> Result<Option<Uri>, MetadataError> {
    if let Some(Uri::File(path)) = uri {
        let data = std::fs::read(path).map_err(MetadataError::FileReadError)?;
        let uploaded_uri = Uri::Ipfs(service.upload(data).await.map_err(|e| MetadataError::UploadError(e.to_string()))?);
        Ok(Some(uploaded_uri))
    } else {
        Ok(uri.clone())
    }
}
49-51
: Enhance trait documentation with examples and implementation guidelines

Ohayo, sensei! The trait documentation could be more comprehensive by including:
- Usage examples
- Implementation guidelines
- Explanation of the hashing mechanism
- Best practices for implementors
/// Trait to be implemented by metadata structs that need to be
/// uploadable to a storage system (e.g., IPFS).
///
/// # Examples
///
/// ```rust
/// # use your_crate::{MetadataStorage, UploadService};
/// struct MyMetadata {
///     name: String,
///     description: Option<String>,
/// }
///
/// #[async_trait]
/// impl MetadataStorage for MyMetadata {
///     async fn upload(&self, service: &mut impl UploadService) -> Result<String> {
///         // Implementation details...
///     }
/// }
/// ```
///
/// # Implementation Guidelines
///
/// 1. Ensure your type implements `std::hash::Hash` for change detection
/// 2. Handle file uploads appropriately in the `upload` method
/// 3. Use proper error handling with specific error types
/// 4. Consider caching mechanisms for expensive operations
90-103
: Reduce code duplication in trait implementations

Ohayo, sensei! The `upload` implementations for `WorldMetadata` and `ResourceMetadata` share similar logic. Consider extracting the common functionality into a helper function.

async fn upload_metadata<T: serde::Serialize>(
    meta: &mut T,
    icon_uri: &Option<Uri>,
    service: &mut impl UploadService,
) -> Result<String> {
    if let Some(icon) = icon_uri {
        meta.set_icon_uri(upload_uri(icon, service)
            .await
            .context("Failed to upload icon URI")?);
    }

    let serialized = json!(meta).to_string();
    service.upload(serialized.as_bytes().to_vec())
        .await
        .context("Failed to upload metadata")
}

Then use it in both implementations:

impl MetadataStorage for WorldMetadata {
    async fn upload(&self, service: &mut impl UploadService) -> Result<String> {
        let mut meta = self.clone();
        upload_metadata(&mut meta, &self.icon_uri, service).await
    }
}

Also applies to: 105-116
90-116
: Add unit tests for metadata storage implementations

Ohayo, sensei! The implementations would benefit from comprehensive unit tests to verify:
- Successful metadata uploads
- Hash comparison logic
- Error handling
- URI processing
Would you like me to help generate a test suite for these implementations?
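The hash-comparison logic such tests would cover boils down to: recompute the metadata hash, compare it with the hash recorded at the previous deployment, and upload only on mismatch. A minimal sketch, using std's `DefaultHasher` and a plain `u64` as a stand-in for the project's `Felt`:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Returns Some(new_hash) when the metadata changed since the last
/// recorded hash (i.e., an upload is needed), or None when unchanged.
fn upload_needed<T: Hash>(metadata: &T, current_hash: u64) -> Option<u64> {
    let mut hasher = DefaultHasher::new();
    metadata.hash(&mut hasher);
    let new_hash = hasher.finish();
    if new_hash != current_hash { Some(new_hash) } else { None }
}
```

This is also why a failed deployment can be recovered: the on-chain hash was never updated, so the next run detects a mismatch and re-uploads.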
bin/sozo/src/commands/migrate.rs (1)
88-99
: Consider enhancing error handling for metadata upload

Ohayo sensei! While the metadata upload implementation looks good, we might want to make it more resilient by providing more detailed error information to users.
Consider this enhancement:
 if let Some(config) = ipfs_config {
     let mut metadata_service = IpfsService::new(config)?;
-    migration
-        .upload_metadata(&mut spinner, &mut metadata_service)
-        .await
-        .context("Metadata upload failed.")?;
+    match migration
+        .upload_metadata(&mut spinner, &mut metadata_service)
+        .await
+    {
+        Ok(_) => {
+            metadata_upload_text = "\nMetadata: Successfully uploaded to IPFS".to_string();
+        }
+        Err(e) => {
+            metadata_upload_text = format!("\nMetadata: Upload failed - {}", e);
+            // Log the full error for debugging
+            tracing::error!("Metadata upload error: {:?}", e);
+        }
+    }
 } else {
     metadata_upload_text = "\nMetadata: No IPFS credentials has been found => \
                            metadata upload has been ignored."
         .to_string();
 };

bin/sozo/src/commands/auth.rs (1)
Line range hint
1-803
: Architecture suggestion for future improvements

While the current changes improve the code by standardizing world-level permissions, consider these future enhancements:
- Consider adding a dedicated type for resource tags to encapsulate the special handling of "world".
- Consider adding validation for permission combinations that might not make sense for the world resource.
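A dedicated tag type as suggested above could look like this. This is a hypothetical sketch: `WORLD_SELECTOR` and the selector computation are stand-ins for the real `WORLD` constant and `compute_selector_from_tag_or_name`, not the actual dojo API.

```rust
// Hypothetical stand-in for the on-chain WORLD selector constant.
const WORLD_SELECTOR: u64 = 0;

/// Newtype wrapping a resource tag so the special "world" case
/// is handled in exactly one place instead of at every call site.
struct ResourceTag(String);

impl ResourceTag {
    fn selector(&self) -> u64 {
        if self.0 == "world" {
            WORLD_SELECTOR
        } else {
            // Stand-in for compute_selector_from_tag_or_name.
            self.0
                .bytes()
                .fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(b as u64))
        }
    }

    fn display_name(&self) -> String {
        if self.0 == "world" { "World".to_string() } else { self.0.clone() }
    }
}
```

Centralizing the "world" special case this way would remove the scattered `if resource_tag == "world"` checks that this PR currently needs in `auth.rs`.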
crates/sozo/ops/src/migrate/mod.rs (2)
116-170
: Consider extracting world metadata handling into a separate method

Ohayo sensei! The world metadata handling logic (lines 125-138) could be extracted into a private method for better organization and reusability.
Consider this refactor:
+    async fn upload_world_metadata(
+        &self,
+        service: &mut impl UploadService,
+        invoker: &mut Invoker<&A>,
+    ) -> anyhow::Result<()> {
+        let current_hash =
+            self.diff.resources.get(&WORLD).map_or(Felt::ZERO, |r| r.metadata_hash());
+        let new_metadata = WorldMetadata::from(self.diff.profile_config.world.clone());
+
+        let res = new_metadata.upload_if_changed(service, current_hash).await?;
+
+        if let Some((new_uri, new_hash)) = res {
+            invoker.add_call(self.world.set_metadata_getcall(&ResourceMetadata {
+                resource_id: Felt::ZERO,
+                metadata_uri: ByteArray::from_string(&new_uri)?,
+                metadata_hash: new_hash,
+            }));
+        }
+        Ok(())
+    }
158-167
: Enhance error handling specificity

The error handling could be more specific to help with debugging.
Consider this enhancement:
 if self.do_multicall() {
     ui.update_text_boxed(format!("Uploading {} metadata...", invoker.calls.len()));
-    invoker.multicall().await.map_err(|e| anyhow!(e.to_string()))?;
+    invoker.multicall().await.map_err(|e| anyhow!("Failed to execute multicall for metadata upload: {}", e))?;
 } else {
     ui.update_text_boxed(format!(
         "Uploading {} metadata (sequentially)...",
         invoker.calls.len()
     ));
-    invoker.invoke_all_sequentially().await.map_err(|e| anyhow!(e.to_string()))?;
+    invoker.invoke_all_sequentially().await.map_err(|e| anyhow!("Failed to execute sequential calls for metadata upload: {}", e))?;
 }

crates/dojo/world/src/services/ipfs_service.rs (5)
1-1
: Remove unnecessary import of `fmt::write`.

Ohayo, sensei! The `write!` macro used in your `Debug` implementation doesn't require importing `std::fmt::write`. You can clean up the code by removing this import.

Apply this diff to remove the unnecessary import:

-use std::{fmt::write, io::Cursor};
+use std::{io::Cursor};
25-25
: Correct typos in documentation comments.

Ohayo, sensei! There are a few minor typos in your comments:

- In line 25, "Instanciate" should be "Instantiate".
- In lines 31-32, "instanciated" should be "instantiated", and "or a Anyhow error" should be "or an Anyhow error".

Apply this diff to correct the typos:

 /// Instantiate a new IPFS service with IPFS configuration.
 ///
 /// # Arguments
 /// * `config` - The IPFS configuration
 ///
 /// # Returns
-/// A new `IpfsService` is the IPFS client has been successfully
-/// instanciated or a Anyhow error if not.
+/// Returns a new `IpfsService` if the IPFS client has been successfully
+/// instantiated, or an Anyhow error if not.

Also applies to: 31-32
18-22
: Simplify the `Debug` implementation using `#[derive(Debug)]`.

Ohayo, sensei! Since `IpfsService` contains only one field, you can simplify your code by deriving the `Debug` trait instead of implementing it manually.

Apply this diff to use `#[derive(Debug)]`:

+#[derive(Debug)]
 pub struct IpfsService {
     client: ipfs_api_backend_hyper::IpfsClient,
 }

-// impl required by clippy
-impl std::fmt::Debug for IpfsService {
-    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> {
-        write!(f, "IPFS service")?;
-        Ok(())
-    }
-}
45-53
: Check error handling for IPFS upload failures.

Ohayo, sensei! The error handling in the `upload` method is good. However, consider including more context in the error message to aid in debugging, such as the data size or a hint to check network connectivity.

Optionally, you can enhance the error message like this:

 .await
-.map_err(|e| anyhow::anyhow!("Failed to upload to IPFS: {}", e))?;
+.map_err(|e| anyhow::anyhow!("Failed to upload {} bytes to IPFS: {}. Please check your network connection and IPFS daemon.", data.len(), e))?;
55-64
: Ensure the `get` method is secure and handles errors appropriately.

Ohayo, sensei! In the test-only `get` method, consider handling cases where the URI might not be properly formatted or the content retrieval fails, to make tests more robust.

You might add validation for the URI and improve error messages:

 async fn get(&self, uri: String) -> Result<Vec<u8>> {
+    if !uri.starts_with("ipfs://") {
+        return Err(anyhow::anyhow!("Invalid IPFS URI: {}", uri));
+    }
     let res = self
         .client
         .cat(&uri.replace("ipfs://", ""))
         .map_ok(|chunk| chunk.to_vec())
         .try_concat()
         .await
-        ?;
+        .map_err(|e| anyhow::anyhow!("Failed to retrieve from IPFS: {}", e))?;
     Ok(res)
 }
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (5)
- crates/torii/types-test/Scarb.lock is excluded by !**/*.lock
- examples/simple/Scarb.lock is excluded by !**/*.lock
- examples/spawn-and-move/Scarb.lock is excluded by !**/*.lock
- spawn-and-move-db.tar.gz is excluded by !**/*.gz
- types-test-db.tar.gz is excluded by !**/*.gz
📒 Files selected for processing (20)
- bin/sozo/src/commands/auth.rs (5 hunks)
- bin/sozo/src/commands/dev.rs (2 hunks)
- bin/sozo/src/commands/migrate.rs (6 hunks)
- bin/sozo/src/commands/options/ipfs.rs (1 hunks)
- crates/dojo/world/src/config/environment.rs (2 hunks)
- crates/dojo/world/src/config/ipfs_config.rs (1 hunks)
- crates/dojo/world/src/config/mod.rs (1 hunks)
- crates/dojo/world/src/config/profile_config.rs (6 hunks)
- crates/dojo/world/src/constants.rs (1 hunks)
- crates/dojo/world/src/lib.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_storage.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_test.rs (1 hunks)
- crates/dojo/world/src/metadata/mod.rs (1 hunks)
- crates/dojo/world/src/remote/events_to_remote.rs (5 hunks)
- crates/dojo/world/src/services/ipfs_service.rs (1 hunks)
- crates/dojo/world/src/services/mock_upload_service.rs (1 hunks)
- crates/dojo/world/src/services/mod.rs (1 hunks)
- crates/dojo/world/src/services/upload_service.rs (1 hunks)
- crates/sozo/ops/src/migrate/mod.rs (2 hunks)
- crates/sozo/ops/src/tests/migration.rs (4 hunks)
✅ Files skipped from review due to trivial changes (2)
- crates/dojo/world/src/constants.rs
- crates/dojo/world/src/services/mod.rs
🚧 Files skipped from review as they are similar to previous changes (6)
- bin/sozo/src/commands/dev.rs
- crates/dojo/world/src/config/profile_config.rs
- crates/dojo/world/src/metadata/metadata_test.rs
- crates/dojo/world/src/metadata/mod.rs
- crates/dojo/world/src/remote/events_to_remote.rs
- crates/sozo/ops/src/tests/migration.rs
🔇 Additional comments (17)
crates/dojo/world/src/config/mod.rs (1)
3-3
: Ohayo! Clean module structure for new configurations, sensei!
The addition of IPFS and resource configurations follows good modular design principles and aligns well with the metadata upload requirements.
Let's verify the module structure is complete:
Also applies to: 8-8, 12-12, 16-16
✅ Verification successful
Ohayo! Module structure is perfectly balanced, sensei!
The verification shows that both modules are properly implemented and well-integrated:
- ipfs_config.rs provides a robust configuration with validation for IPFS credentials
- resource_config.rs defines a clean structure for resource metadata
- The module exports are properly consumed by dependent modules
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the module structure and ensure all necessary files exist
# Check if the new module files exist
fd --type f "ipfs_config.rs|resource_config.rs" crates/dojo/world/src/config/
# Verify no orphaned imports (modules declared but files missing)
for module in ipfs_config resource_config; do
if ! fd --type f "${module}.rs" crates/dojo/world/src/config/ >/dev/null; then
echo "Warning: ${module}.rs is missing but declared in mod.rs"
fi
done
# Check for proper module exports usage
rg --type rust "use (IpfsConfig|ResourceConfig)" crates/dojo
Length of output: 444
Script:
#!/bin/bash
# Let's check the content of these module files to verify their structure
echo "=== ipfs_config.rs ==="
cat crates/dojo/world/src/config/ipfs_config.rs
echo -e "\n=== resource_config.rs ==="
cat crates/dojo/world/src/config/resource_config.rs
# Let's also check if these configs are being used in the codebase
echo -e "\n=== Usage of configs ==="
rg -A 2 "IpfsConfig|ResourceConfig" crates/dojo/world/src/
Length of output: 4440
crates/dojo/world/src/lib.rs (1)
7-7
: Ohayo! New constants module looks good, but let's verify its contents.
The addition of a dedicated constants module follows good practices for maintainability.
Let's check what constants are being defined:
✅ Verification successful
Ohayo! The constants module looks well-organized and relevant, sensei!
The constants module contains a comprehensive set of well-structured constants that are essential for the world metadata handling, including:
- Database table names and column definitions
- Type names with proper namespacing to avoid conflicts
- Metadata-specific constants like
METADATA_TABLE
,METADATA_TYPE_NAME
, andMETADATA_NAMES
- Additional infrastructure constants for gas prices, addresses, and world configuration
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check the contents of the constants module
# Expected: Constants related to metadata handling
# Check the constants file content
fd -t f "constants.rs" -x cat {}
Length of output: 4135
crates/dojo/world/src/services/upload_service.rs (1)
17-26
: LGTM! Well-structured test utility method.
The get
method is properly cfg-gated for tests and well-documented. This will be useful for testing implementations of the trait.
crates/dojo/world/src/services/mock_upload_service.rs (1)
1-6
: Ohayo! Clean and well-organized imports, sensei!
The imports are minimal, specific, and well-structured, showing good practices in dependency management.
crates/dojo/world/src/config/environment.rs (1)
3-4
: LGTM! Clean import addition, sensei!
The import is well-placed and follows the existing module structure.
crates/dojo/world/src/metadata/metadata_storage.rs (1)
70-87
: Implementation aligns well with PR objectives
Ohayo, sensei! The upload_if_changed
method successfully addresses the core objectives of PR #2691:
- Implements change detection using metadata hashing
- Allows for metadata re-upload after failed deployments
- Optimizes uploads by only processing changed metadata
Let's verify the change detection logic:
bin/sozo/src/commands/migrate.rs (4)
6-6
: LGTM! Clean import additions
Ohayo! The new imports for IPFS functionality are well-organized and necessary for the implementation.
Also applies to: 18-18
37-39
: LGTM! Clean struct extension
The addition of IpfsOptions follows the established pattern of flattened command options.
51-51
: LGTM! Clean config handling
The IPFS configuration is properly retrieved with a good fallback chain from command line to profile config.
Also applies to: 84-85
107-121
: LGTM! Clean output formatting
The metadata upload status is nicely integrated into the existing output format, maintaining consistency with the codebase's style.
bin/sozo/src/commands/auth.rs (5)
10-10
: Ohayo! Nice improvement in constant usage!
The introduction of the WORLD
constant from dojo_world::constants
enhances code clarity and maintainability by replacing magic values.
309-314
: Consistent world resource handling in clone_permissions, sensei!
The replacement of hardcoded hex formatting with a conditional check for WORLD
improves readability and maintainability. The special case handling for the "World" string representation is more user-friendly.
421-428
: Clean world permissions retrieval implementation!
The use of WORLD
constant in permission retrieval makes the code more maintainable and consistent with the rest of the codebase.
792-792
: Test case properly validates WORLD constant usage!
The test case has been appropriately updated to use the WORLD
constant, ensuring the new behavior is properly tested.
681-681
: Elegant selector resolution in PermissionPair!
The special case handling for "world" resource tag using the WORLD
constant improves code clarity. However, let's verify that this change doesn't break existing functionality.
✅ Verification successful
WORLD constant usage is properly integrated and tested, sensei! ✨
The verification shows that the WORLD
constant is correctly used across multiple contexts:
- Used in permission pair resolution with proper test coverage
- Properly handled in string formatting for writer/owner displays
- Correctly integrated with world diff functionality
- Has test assertions verifying its behavior
The special case handling maintains consistency with existing functionality and improves code clarity.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that the world resource tag is handled correctly in existing code
# Search for any direct usage of "world" as a resource tag
rg -l 'resource_tag.*"world"'
# Search for any test cases involving world permissions
rg -l 'test.*world.*permission'
Length of output: 95
Script:
#!/bin/bash
# Let's check for WORLD constant usage and its context
rg -A 5 -B 5 'WORLD' bin/sozo/src/commands/auth.rs
# Check for any world-related permission tests or functionality
ast-grep --pattern 'fn $_ () {
$$$
WORLD
$$$
}'
# Look for any permission-related constants
rg -A 2 'const.*Permission'
Length of output: 2574
crates/sozo/ops/src/migrate/mod.rs (2)
172-199
: LGTM! Well-structured metadata handling implementation
The implementation is clean and follows good practices:
- Uses slices instead of &Vec for better efficiency
- Properly handles metadata changes
- Clear and maintainable logic flow
180-181
: 🛠️ Refactor suggestion
Add error handling for selector computation
Ohayo sensei! The selector computation could fail silently. Consider adding explicit error handling.
Apply this diff:
- let selector = dojo_types::naming::compute_selector_from_tag_or_name(&item.tag);
+ let selector = dojo_types::naming::compute_selector_from_tag_or_name(&item.tag)
+ .map_err(|e| anyhow!("Failed to compute selector for tag {}: {}", item.tag, e))?;
Likely invalid or redundant comment.
d5e1ea5 to cdf4b4a Compare
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (9)
crates/dojo/world/src/services/ipfs_service.rs (3)
1-14
: Ohayo! The struct definition looks clean, sensei!

The imports are well-organized and the struct is minimal. The documentation comment effectively explains the purpose.
Consider adding a code example to the struct documentation to show basic usage:
 /// IPFS implementation of UploadService, allowing to upload data to IPFS.
+///
+/// # Example
+/// ```
+/// use dojo_world::config::IpfsConfig;
+/// use dojo_world::services::IpfsService;
+///
+/// let config = IpfsConfig::new("http://localhost:5001", "user", "pass");
+/// let service = IpfsService::new(config)?;
+/// ```
 pub struct IpfsService {
16-22
: Consider enhancing the Debug implementation, sensei!

The current Debug implementation only prints "IPFS service" which might not be helpful for debugging. Consider including the IPFS URL (with credentials masked) for better debugging context.

Here's a suggested improvement:
Here's a suggested improvement:
impl std::fmt::Debug for IpfsService { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { - write!(f, "IPFS service")?; + write!(f, "IpfsService {{ url: {} }}", self.client.base_url())?; Ok(()) } }
1-65
: Excellent service abstraction design, sensei!The
IpfsService
implementation aligns perfectly with the PR objectives for handling world metadata uploads. The separation of concerns through theUploadService
trait allows for easy testing and potential alternative implementations.A few architectural considerations:
- Consider adding retry logic for failed uploads
- Think about implementing a caching layer to avoid re-uploading identical data
- Consider adding metrics/logging for monitoring upload performance
Would you like help implementing any of these architectural improvements?
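The caching idea from the list above can be sketched as a small content-addressed layer in front of the client. This is a sketch only; the URI construction below stands in for the real IPFS call:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Content-addressed upload cache: identical payloads reuse the URI
/// from a previous upload instead of hitting the IPFS node again.
#[derive(Default)]
struct UploadCache {
    seen: HashMap<u64, String>,
    real_uploads: u32, // non-cached uploads, useful for metrics/logging
}

impl UploadCache {
    fn upload(&mut self, data: &[u8]) -> String {
        let mut hasher = DefaultHasher::new();
        data.hash(&mut hasher);
        let key = hasher.finish();
        if let Some(uri) = self.seen.get(&key) {
            return uri.clone(); // cache hit: skip the network round-trip
        }
        self.real_uploads += 1;
        let uri = format!("ipfs://{key:x}"); // stand-in for the real client call
        self.seen.insert(key, uri.clone());
        uri
    }
}
```

Since IPFS itself is content-addressed, identical data always maps to the same URI, which makes this kind of local cache safe to layer on top.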
crates/dojo/world/src/metadata/metadata_storage.rs (3)
39-47
: Enhance error handling with file-specific context

Ohayo, sensei! While the error handling is functional, it could be more descriptive for debugging purposes.
Consider enhancing the error context:
 async fn upload_uri(uri: &Option<Uri>, service: &mut impl UploadService) -> Result<Option<Uri>> {
     if let Some(Uri::File(path)) = uri {
-        let data = std::fs::read(path)?;
+        let data = std::fs::read(path)
+            .with_context(|| format!("Failed to read file: {}", path.display()))?;
         let uploaded_uri = Uri::Ipfs(service.upload(data).await?);
         Ok(Some(uploaded_uri))
     } else {
         Ok(uri.clone())
     }
 }
70-87
: Enhance error handling in upload_if_changed

Ohayo, sensei! The error handling in this method could be improved to provide better context about hash computation and comparison failures.
Consider adding more context to error handling:
 async fn upload_if_changed(
     &self,
     service: &mut impl UploadService,
     current_hash: Felt,
 ) -> Result<Option<(String, Felt)>>
 where
     Self: std::hash::Hash,
 {
-    let new_hash = compute_metadata_hash(self);
-    let new_hash = Felt::from_raw([0, 0, 0, new_hash]);
+    let new_hash = compute_metadata_hash(self);
+    let new_hash = Felt::from_raw([0, 0, 0, new_hash])
+        .context("Failed to convert hash to Felt")?;

     if new_hash != current_hash {
-        let new_uri = self.upload(service).await?;
+        let new_uri = self.upload(service)
+            .await
+            .context("Failed to upload changed metadata")?;
         return Ok(Some((new_uri, new_hash)));
     }

     Ok(None)
 }
91-116
: Consider reducing code duplication in metadata implementations

Ohayo, sensei! The `WorldMetadata` and `ResourceMetadata` implementations share similar upload logic, which could be abstracted to reduce duplication.

Consider creating a helper function for the common upload logic:

+async fn upload_metadata<T: serde::Serialize>(
+    meta: &mut T,
+    icon_uri: &Option<Uri>,
+    service: &mut impl UploadService
+) -> Result<String> {
+    if let Some(icon_uri) = icon_uri {
+        let new_uri = upload_uri(&Some(icon_uri.clone()), service)
+            .await
+            .context("Failed to upload icon URI")?;
+        // Assuming there's a way to set icon_uri on T
+        // You'll need to define a trait for this
+    }
+
+    let serialized = json!(meta).to_string();
+    service.upload(serialized.as_bytes().to_vec())
+        .await
+        .context("Failed to upload metadata")
+}

This would simplify both implementations and make future maintenance easier.
bin/sozo/src/commands/auth.rs (1)

681-681: Consider adding validation for world resource permissions.

Ohayo sensei! While the implementation looks correct, consider adding validation to ensure appropriate permission levels when dealing with world resources.

if self.resource_tag == "world" {
+   // Validate world resource permissions
+   if !is_valid_world_permission(&self.grantee_tag_or_address) {
+       return Err(anyhow!("Invalid permission level for world resource"));
+   }
    WORLD

crates/sozo/ops/src/migrate/mod.rs (2)
111-170: Consider enhancing error handling and code organization

Ohayo sensei! The method is well-structured, but here are some suggestions for improvement:

- Consider extracting the multicall logic into a separate helper method to reduce code duplication between lines 158-167.
- The error handling at line 160 could be more specific than converting to string.

Here's a suggested refactor for the multicall logic:

+ async fn execute_metadata_calls(&self, invoker: &mut Invoker<&A>, ui: &mut MigrationUi) -> anyhow::Result<()> {
+     if self.do_multicall() {
+         ui.update_text_boxed(format!("Uploading {} metadata...", invoker.calls.len()));
+         invoker.multicall().await.map_err(|e| anyhow!("Multicall failed: {}", e))?;
+     } else {
+         ui.update_text_boxed(format!(
+             "Uploading {} metadata (sequentially)...",
+             invoker.calls.len()
+         ));
+         invoker.invoke_all_sequentially().await.map_err(|e| anyhow!("Sequential calls failed: {}", e))?;
+     }
+     Ok(())
+ }

Then replace lines 158-167 with:

- if self.do_multicall() {
-     ui.update_text_boxed(format!("Uploading {} metadata...", invoker.calls.len()));
-     invoker.multicall().await.map_err(|e| anyhow!(e.to_string()))?;
- } else {
-     ui.update_text_boxed(format!(
-         "Uploading {} metadata (sequentially)...",
-         invoker.calls.len()
-     ));
-     invoker.invoke_all_sequentially().await.map_err(|e| anyhow!(e.to_string()))?;
- }
+ self.execute_metadata_calls(&mut invoker, ui).await?;
172-199: Consider adding early returns for better readability

Ohayo sensei! The method could be more concise with early returns. If no metadata changes are detected, we could return early with an empty vector.

Here's a suggested refactor:

async fn upload_metadata_from_resource_config(
    &self,
    service: &mut impl UploadService,
    config: &[ResourceConfig],
) -> anyhow::Result<Vec<Call>> {
+   if config.is_empty() {
+       return Ok(vec![]);
+   }
+
    let mut calls = vec![];

    for item in config {
        let selector = dojo_types::naming::compute_selector_from_tag_or_name(&item.tag);
        let current_hash =
            self.diff.resources.get(&selector).map_or(Felt::ZERO, |r| r.metadata_hash());
        let new_metadata = metadata_config::ResourceMetadata::from(item.clone());
        let res = new_metadata.upload_if_changed(service, current_hash).await?;
        if let Some((new_uri, new_hash)) = res {
            calls.push(self.world.set_metadata_getcall(&ResourceMetadata {
                resource_id: selector,
                metadata_uri: ByteArray::from_string(&new_uri)?,
                metadata_hash: new_hash,
            }));
        }
    }

    Ok(calls)
}
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (5)
- crates/torii/types-test/Scarb.lock is excluded by !**/*.lock
- examples/simple/Scarb.lock is excluded by !**/*.lock
- examples/spawn-and-move/Scarb.lock is excluded by !**/*.lock
- spawn-and-move-db.tar.gz is excluded by !**/*.gz
- types-test-db.tar.gz is excluded by !**/*.gz
📒 Files selected for processing (24)
- bin/sozo/src/commands/auth.rs (5 hunks)
- bin/sozo/src/commands/dev.rs (2 hunks)
- bin/sozo/src/commands/migrate.rs (6 hunks)
- bin/sozo/src/commands/options/ipfs.rs (1 hunks)
- crates/dojo-world/Cargo.toml (1 hunks)
- crates/dojo/world/Cargo.toml (1 hunks)
- crates/dojo/world/src/config/environment.rs (2 hunks)
- crates/dojo/world/src/config/ipfs_config.rs (1 hunks)
- crates/dojo/world/src/config/mod.rs (1 hunks)
- crates/dojo/world/src/config/profile_config.rs (6 hunks)
- crates/dojo/world/src/constants.rs (1 hunks)
- crates/dojo/world/src/lib.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_storage.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_test.rs (1 hunks)
- crates/dojo/world/src/metadata/mod.rs (1 hunks)
- crates/dojo/world/src/remote/events_to_remote.rs (5 hunks)
- crates/dojo/world/src/services/ipfs_service.rs (1 hunks)
- crates/dojo/world/src/services/mock_upload_service.rs (1 hunks)
- crates/dojo/world/src/services/mod.rs (1 hunks)
- crates/dojo/world/src/services/upload_service.rs (1 hunks)
- crates/sozo/ops/Cargo.toml (1 hunks)
- crates/sozo/ops/src/migrate/mod.rs (2 hunks)
- crates/sozo/ops/src/tests/migration.rs (4 hunks)
- crates/torii/graphql/Cargo.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (18)
- bin/sozo/src/commands/dev.rs
- bin/sozo/src/commands/migrate.rs
- bin/sozo/src/commands/options/ipfs.rs
- crates/dojo/world/Cargo.toml
- crates/dojo/world/src/config/environment.rs
- crates/dojo/world/src/config/ipfs_config.rs
- crates/dojo/world/src/config/mod.rs
- crates/dojo/world/src/config/profile_config.rs
- crates/dojo/world/src/constants.rs
- crates/dojo/world/src/lib.rs
- crates/dojo/world/src/metadata/metadata_test.rs
- crates/dojo/world/src/metadata/mod.rs
- crates/dojo/world/src/remote/events_to_remote.rs
- crates/dojo/world/src/services/mock_upload_service.rs
- crates/dojo/world/src/services/mod.rs
- crates/dojo/world/src/services/upload_service.rs
- crates/sozo/ops/Cargo.toml
- crates/sozo/ops/src/tests/migration.rs
🔇 Additional comments (12)
crates/torii/graphql/Cargo.toml (2)
40-40
: Ohayo! The feature change looks good, sensei! 🎉
The switch from "metadata" to "ipfs" feature aligns perfectly with our goal of improving metadata upload handling through IPFS integration.
40-40
: Let's verify the feature consistency across the workspace
We should ensure this feature change is consistently applied across all relevant workspace members.
✅ Verification successful
Feature configuration is consistent across workspace, sensei! ✨
The ipfs feature for dojo-world is consistently configured in both places where it's used with workspace inheritance:
- crates/torii/graphql/Cargo.toml
- crates/sozo/ops/Cargo.toml
No inconsistencies or missing configurations were found in the workspace.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistency of dojo-world features across workspace
# Search for dojo-world dependency configurations
echo "Checking dojo-world dependency configurations across workspace:"
rg --type toml 'dojo-world.*features' -A 2
# Check if ipfs-related dependencies are properly configured
echo -e "\nChecking for IPFS-related dependencies:"
rg --type toml 'ipfs.*=.*version' -A 1
Length of output: 718
Script:
#!/bin/bash
# Let's check the workspace Cargo.toml to see if dojo-world is defined there
echo "Checking workspace Cargo.toml for dojo-world configuration:"
rg --type toml -A 5 '\[workspace\.dependencies\].*dojo-world'
# Let's also check all Cargo.toml files that depend on dojo-world
echo -e "\nChecking all dojo-world dependencies across workspace:"
rg --type toml '^dojo-world\s*='
# Let's verify if there are any other IPFS-related features
echo -e "\nChecking for any other IPFS-related features:"
rg --type toml 'features.*ipfs'
Length of output: 894
crates/dojo-world/Cargo.toml (2)
54-54
: Verify dependency versions for security
Ohayo sensei! The IPFS feature introduces new dependencies. Let's ensure they're using secure versions.
✅ Verification successful
IPFS dependency is securely configured, sensei! ✨
The repository is using a pinned version of ipfs-api-backend-hyper from a specific Git commit (af2c17f7), which is a good security practice. The dependency tree shows well-maintained dependencies including:
- hyper v0.14 with secure TLS implementation (hyper-rustls)
- Modern versions of critical dependencies (futures 0.3, http 0.2)
- Rustls for TLS with native certificates enabled
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for security advisories and latest versions
# Expected: No critical vulnerabilities in the dependencies
# Check for security advisories for ipfs-api-backend-hyper
gh api graphql -f query='
{
securityVulnerabilities(first: 5, ecosystem: CARGO, package: "ipfs-api-backend-hyper") {
nodes {
advisory {
summary
severity
publishedAt
}
vulnerableVersionRange
firstPatchedVersion {
identifier
}
}
}
}'
# Check latest versions from crates.io
curl -s https://crates.io/api/v1/crates/ipfs-api-backend-hyper | jq '.crate.max_version'
Length of output: 882
Script:
#!/bin/bash
# Let's check the actual version specified in Cargo.toml and scan for any known vulnerabilities using cargo-audit
cat crates/dojo-world/Cargo.toml | rg "ipfs-api-backend-hyper"
# Let's also check dependencies of ipfs-api-backend-hyper for a better security assessment
curl -s https://crates.io/api/v1/crates/ipfs-api-backend-hyper/0.6.0/dependencies | jq '.'
# Check if there are any direct usages that might need security considerations
rg "ipfs-api-backend-hyper" -A 2 -B 2
Length of output: 5969
54-54: Ohayo! Breaking change alert: feature rename from metadata to ipfs

The feature rename from metadata to ipfs better reflects its purpose but could break existing code. Please ensure all dependent crates are updated accordingly.

Let's check for any usage of the old feature name:

✅ Verification successful

Let's also check for any current usage of the new 'ipfs' feature to understand the impact:

Ohayo sensei! The feature rename is safe to proceed!

After careful investigation, I can confirm that there was no previous metadata feature in the codebase. The occurrences of "metadata" in Cargo.toml files are only related to package metadata configurations (like package.metadata.release and package.metadata.cargo-udeps.ignore), not feature flags. The new ipfs feature is properly defined and only used in conditional compilation for the IPFS service module.

The feature rename from metadata to ipfs is actually a new addition rather than a rename, making this a non-breaking change.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for usage of the old 'metadata' feature
# Expected: No occurrences of the old feature name in feature gates or Cargo.toml files
# Search for feature gates using the old name
rg '#\[cfg\(feature\s*=\s*"metadata"\)\]'
# Search for feature specifications in Cargo.toml files
fd -g 'Cargo.toml' --exec rg 'metadata"\s*]|metadata\s*='
Length of output: 101
Script:
#!/bin/bash
# Let's try a more comprehensive search for the old feature name
# Search for 'metadata' feature in Cargo.toml files
fd 'Cargo.toml$' -x grep -l "metadata"
# Search for any code that might be using the feature
rg -g '*.rs' 'metadata' -B2 -A2
# Check git history for the feature rename
git log -p --all --full-history -S "metadata" -- "*Cargo.toml"
Length of output: 142092
Script:
#!/bin/bash
# Search for current usage of 'ipfs' feature
rg -g '*.rs' '#\[cfg\(feature\s*=\s*"ipfs"\)\]' -B2 -A2
# Search for ipfs feature in Cargo.toml files
fd 'Cargo.toml$' -x grep -l "ipfs"
Length of output: 641
Script:
#!/bin/bash
# Let's check if there are any dependencies or code that rely on the old 'metadata' feature
# First, check all Cargo.toml files for any dependencies using the old feature name
fd 'Cargo.toml$' -x grep -A5 -B5 'metadata.*='
# Then check for any feature gates or conditional compilation in the code
rg -g '*.rs' 'metadata.*feature' -B2 -A2
# Finally, let's look at the git history for this specific feature rename
git log -p -- "*Cargo.toml" | grep -A5 -B5 'metadata.*='
Length of output: 91
Script:
#!/bin/bash
# Let's check the specific file and its dependencies more thoroughly
cat crates/dojo-world/Cargo.toml
# Check for any direct references to the old feature name in the workspace
rg -g '!target' -g '!*.lock' 'metadata' --type toml
# Look for any conditional compilation using either feature
rg -g '*.rs' -B2 -A2 '(?:#\[cfg\(feature\s*=\s*"(?:metadata|ipfs)"\)\]|#\[cfg_attr\(.*feature\s*=\s*"(?:metadata|ipfs)")'
Length of output: 2778
crates/dojo/world/src/services/ipfs_service.rs (2)
43-65
: 🛠️ Refactor suggestion
Consider adding input validation and improving URI handling, sensei!
The implementation is good, but could be enhanced in a few areas:
- The upload method should validate input data size to prevent excessive uploads
- The get method's URI manipulation could be more robust
Here's a suggested improvement:
async fn upload(&mut self, data: Vec<u8>) -> Result<String> {
+ const MAX_UPLOAD_SIZE: usize = 50 * 1024 * 1024; // 50MB
+ if data.len() > MAX_UPLOAD_SIZE {
+ return Err(anyhow::anyhow!("Upload size exceeds maximum allowed size"));
+ }
let reader = Cursor::new(data);
let response = self
#[cfg(test)]
async fn get(&self, uri: String) -> Result<Vec<u8>> {
+ if !uri.starts_with("ipfs://") {
+ return Err(anyhow::anyhow!("Invalid IPFS URI format"));
+ }
let res = self
.client
- .cat(&uri.replace("ipfs://", ""))
+ .cat(&uri.strip_prefix("ipfs://").unwrap())
Let's check if there are any size limits defined elsewhere in the codebase:
24-41
: Fix documentation typos and consider credential handling, sensei!
The implementation looks solid, but there are a few points to address:
- Documentation has typos ("instanciate" should be "instantiate")
- Consider validating credentials before passing them to the client
Fix the documentation typos:
- /// Instanciate a new IPFS service with IPFS configuration.
+ /// Instantiate a new IPFS service with IPFS configuration.
///
/// # Arguments
/// * `config` - The IPFS configuration
///
/// # Returns
- /// A new `IpfsService` is the IPFS client has been successfully
- /// instanciated or a Anyhow error if not.
+ /// A new `IpfsService` if the IPFS client has been successfully
+ /// instantiated or an Anyhow error if not.
Let's verify if there are any credential validation patterns in the codebase:
✅ Verification successful
Fix documentation typos and validate credentials properly, sensei!
- The documentation typos should be fixed as suggested.
- The credentials are already being validated in assert_valid(), which checks for:
  - Empty values in url, username, and password
  - Valid URL format (must start with http:// or https://)
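For illustration, a minimal std-only sketch of the kind of assert_valid check described above; the field names (url, username, password) follow the review text, while the String error type is a simplification of the crate's actual error handling:

```rust
// Sketch of the validation the review describes for IpfsConfig; names follow
// the review text, not necessarily the crate's exact API.
struct IpfsConfig {
    url: String,
    username: String,
    password: String,
}

impl IpfsConfig {
    /// Reject empty credentials and non-HTTP(S) URLs before building a client.
    fn assert_valid(&self) -> Result<(), String> {
        if self.url.is_empty() || self.username.is_empty() || self.password.is_empty() {
            return Err("IPFS url, username and password must not be empty".to_string());
        }
        if !self.url.starts_with("http://") && !self.url.starts_with("https://") {
            return Err("IPFS url must start with http:// or https://".to_string());
        }
        Ok(())
    }
}

fn main() {
    let good = IpfsConfig {
        url: "https://ipfs.example.com".to_string(),
        username: "user".to_string(),
        password: "pass".to_string(),
    };
    assert!(good.assert_valid().is_ok());

    // A non-HTTP scheme is rejected before any client is built.
    let bad_scheme = IpfsConfig { url: "ftp://ipfs.example.com".to_string(), ..good };
    assert!(bad_scheme.assert_valid().is_err());
}
```

Validating eagerly like this surfaces configuration mistakes at startup rather than as an opaque upload failure mid-migration.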
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for credential validation patterns
rg -i "credentials?.*(validate|check|verify)"
Length of output: 47
Script:
#!/bin/bash
# Let's check the IpfsConfig implementation for credential validation
ast-grep --pattern 'impl IpfsConfig {
$$$
assert_valid($$$) {
$$$
}
$$$
}'
# Also check for any credential-related validation in the config
rg -B2 -A5 "assert_valid|validate.*credentials?"
# Let's also check the IpfsConfig struct definition
ast-grep --pattern 'struct IpfsConfig {
$$$
}'
Length of output: 173952
crates/dojo/world/src/metadata/metadata_storage.rs (1)
1-116: Implementation successfully addresses the PR objectives

Ohayo, sensei! The implementation effectively solves the issues mentioned in #2096 and #1907 by:
Ohayo, sensei! The implementation effectively solves the issues mentioned in #2096 and #1907 by:
- Implementing change detection through hashing
- Supporting metadata re-upload after failed deployments
- Adding proper error handling and context
The code is well-structured and maintainable, with room for minor improvements as suggested above.
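The change-detection pattern being praised here (hash the metadata, compare against the stored hash, upload only on mismatch) can be sketched with std-only stand-ins; Meta, MockService, and the u64 hash below are illustrative, whereas the crate works with WorldMetadata/ResourceMetadata and a Felt hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative metadata stand-in; the crate derives Hash on its
// metadata types in the same spirit.
#[derive(Hash)]
struct Meta {
    name: String,
    description: String,
}

// Minimal upload-service stand-in that records every upload it receives.
struct MockService {
    uploads: Vec<Vec<u8>>,
}

impl MockService {
    fn upload(&mut self, data: Vec<u8>) -> String {
        self.uploads.push(data);
        format!("ipfs://mock/{}", self.uploads.len())
    }
}

// Upload `meta` only when its hash differs from the stored hash,
// returning the new (uri, hash) pair when an upload actually happened.
fn upload_if_changed(
    meta: &Meta,
    current_hash: u64,
    service: &mut MockService,
) -> Option<(String, u64)> {
    let mut hasher = DefaultHasher::new();
    meta.hash(&mut hasher);
    let new_hash = hasher.finish();
    if new_hash != current_hash {
        let uri = service.upload(meta.name.clone().into_bytes());
        Some((uri, new_hash))
    } else {
        None
    }
}

fn main() {
    let mut service = MockService { uploads: vec![] };
    let meta = Meta { name: "world".to_string(), description: "demo".to_string() };

    // First migration: no stored hash yet, so the metadata is uploaded.
    let (uri, hash) = upload_if_changed(&meta, 0, &mut service).expect("first upload");
    println!("uploaded to {uri}");

    // Second migration with the stored hash: nothing is re-uploaded.
    assert!(upload_if_changed(&meta, hash, &mut service).is_none());
    assert_eq!(service.uploads.len(), 1);
}
```

Storing the hash alongside the URI is what makes a failed deployment recoverable: the next migration recomputes the hash, sees a mismatch (or a zero hash), and re-uploads.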
bin/sozo/src/commands/auth.rs (4)
10-10
: LGTM! Clean import of the WORLD constant.
The import of WORLD from dojo_world::constants aligns with the PR objectives to improve world metadata handling.
421-428
: LGTM! Clean migration from Felt::ZERO to WORLD constant.
The change from Felt::ZERO to WORLD in permission retrieval improves code readability and maintainability.
792-792
: LGTM! Test case updated correctly.
The test case has been properly updated to use the new WORLD constant instead of Felt::ZERO.
309-314
: Verify the string representation consistency.
The string representation logic for world resources looks good, but let's ensure consistent handling across the codebase.
✅ Verification successful
Let me search for more instances of world resource string representations to ensure we have a complete picture.
String representation of world resources is consistent, sensei!
The code shows consistent handling of world resources across the codebase. In auth.rs, the pattern if r != &WORLD { format!("{:#066x}", r) } else { "World".to_string() } is used consistently for world resource string representation. Other usages of WORLD and Felt::ZERO string conversions in the codebase serve different purposes and don't conflict with this pattern.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent world resource string representation
# Test: Search for other instances where world resources are converted to strings
rg -A 2 'if \S+ != &(WORLD|Felt::ZERO).*\{.*to_string\(\)'
Length of output: 467
Script:
#!/bin/bash
# Search for any other string representations of WORLD or similar constants
rg -A 2 'WORLD.*to_string|Felt::ZERO.*to_string'
# Also check for format! calls with WORLD
rg -A 2 'format!.*WORLD'
# Look for any other potential string conversions in auth.rs
rg -A 2 'to_string|format!' bin/sozo/src/commands/auth.rs
Length of output: 5787
crates/sozo/ops/src/migrate/mod.rs (1)
23-35
: LGTM! Imports are well organized.
The new imports appropriately support the metadata upload functionality.
cdf4b4a to 0e51fb9 Compare
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (7)
bin/sozo/src/commands/options/ipfs.rs (1)
30-42: Ohayo! Add input validation for username and password

The implementation looks clean, but consider adding validation for:
- Username format (e.g., no special characters)
- Password strength requirements
- URL accessibility check
Here's a suggested enhancement:
pub fn config(&self) -> Option<IpfsConfig> {
    trace!("Retrieving IPFS config for IpfsOptions.");
    let url = self.ipfs_url.as_ref().map(|url| url.to_string());
-   let username = self.ipfs_username.clone();
-   let password = self.ipfs_password.clone();
+   let username = self.ipfs_username.clone()
+       .filter(|u| is_valid_username(u));
+   let password = self.ipfs_password.clone()
+       .filter(|p| is_strong_password(p));

    if let (Some(url), Some(username), Some(password)) = (url, username, password) {
        Some(IpfsConfig { url, username, password })
    } else {
        None
    }
}
92-102: Handle partial upload failures gracefully

Ohayo, sensei! The current implementation might leave metadata in an inconsistent state if one of the uploads fails (e.g., icon uploaded but metadata upload fails).
Consider implementing a two-phase upload:
- Upload all assets first and collect URIs
- Only update metadata if all assets were uploaded successfully
Example approach:
async fn upload(&self, service: &mut impl UploadService) -> Result<String> {
    // Phase 1: Upload all assets
    let uploads = futures::future::try_join_all(vec![
        upload_uri(&self.icon_uri, service),
        upload_uri(&self.cover_uri, service),
    ]).await?;

    // Phase 2: Only if all assets uploaded successfully, update metadata
    let mut meta = self.clone();
    meta.icon_uri = uploads[0].clone();
    meta.cover_uri = uploads[1].clone();

    let serialized = json!(meta).to_string();
    service.upload(serialized.as_bytes().to_vec()).await
}
crates/dojo/world/src/metadata/metadata_test.rs (2)
17-22: Ohayo sensei! Consider making the test file path handling more robust.

The test_file_path function could fail silently if the test data directory doesn't exist. Consider adding better error handling and validation.

fn test_file_path(filename: &str) -> PathBuf {
-   fs::canonicalize(
-       PathBuf::from_str(&format!("./src/metadata/metadata_test_data/{}", filename)).unwrap(),
-   )
-   .unwrap()
+   let path = PathBuf::from_str(&format!("./src/metadata/metadata_test_data/{}", filename))
+       .expect("Failed to create path");
+   fs::canonicalize(&path)
+       .unwrap_or_else(|_| panic!("Test data file not found: {}", path.display()))
}
72-76: Ohayo! Let's make the error handling more idiomatic, sensei.

The error handling pattern is repeated. Consider using Rust's Result combinators for more concise and idiomatic code.

- let (current_uri, current_hash) = if let Ok(Some(res)) = res {
-     res
- } else {
-     panic!("Upload failed");
- };
+ let (current_uri, current_hash) = res
+     .expect("Upload failed")
+     .expect("No upload result");

Also applies to: 88-92
bin/sozo/src/commands/auth.rs (2)
681-681: Semantic handling of world resource in PermissionPair.

The use of the WORLD constant in to_selector_and_address improves code clarity. However, consider adding a test case for invalid resource tags.

Consider adding this test case:

#[test]
fn test_permission_pair_to_selector_and_address() {
    // ... existing tests ...
+
+   // Test invalid resource tag
+   let pair = PermissionPair {
+       resource_tag: "0xinvalid".to_string(),
+       grantee_tag_or_address: "0x456".to_string(),
+   };
+   assert!(pair.to_selector_and_address(&contracts).is_err());
}
Line range hint 1-804: Ohayo sensei! Here's a high-level architectural suggestion.

The changes improve the handling of world resources by using a semantic constant. Consider documenting the significance of the WORLD constant and its usage patterns in the module-level documentation to help future maintainers.

Add this documentation at the top of the file:

//! Authentication and permission management for Dojo worlds.
//!
//! This module handles the management of permissions for resources in a Dojo world.
//! The special `WORLD` constant is used to represent world-level resources and permissions,
//! which are fundamental to the Dojo engine's security model.

crates/sozo/ops/src/migrate/mod.rs (2)
116-170: Ohayo sensei! Consider adding retry mechanism and progress tracking

The implementation efficiently handles metadata uploads with proper error handling and multicall optimization. However, consider these improvements:

- Add a retry mechanism for failed uploads
- Implement progress tracking for individual uploads

Consider adding retry logic:

async fn upload_metadata(
    &self,
    ui: &mut MigrationUi,
    service: &mut impl UploadService,
) -> anyhow::Result<()> {
    ui.update_text("Uploading metadata...");

    let mut invoker = Invoker::new(&self.world.account, self.txn_config);
+   const MAX_RETRIES: u32 = 3;
+   const RETRY_DELAY: Duration = Duration::from_secs(1);

    // world
    let current_hash =
        self.diff.resources.get(&WORLD).map_or(Felt::ZERO, |r| r.metadata_hash());
    let new_metadata = WorldMetadata::from(self.diff.profile_config.world.clone());

-   let res = new_metadata.upload_if_changed(service, current_hash).await?;
+   let res = retry::retry(retry::delay::Fixed::new(RETRY_DELAY)
+       .take(MAX_RETRIES), || async {
+       new_metadata.upload_if_changed(service, current_hash).await
+   }).await?;
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (5)
- crates/torii/types-test/Scarb.lock is excluded by !**/*.lock
- examples/simple/Scarb.lock is excluded by !**/*.lock
- examples/spawn-and-move/Scarb.lock is excluded by !**/*.lock
- spawn-and-move-db.tar.gz is excluded by !**/*.gz
- types-test-db.tar.gz is excluded by !**/*.gz
📒 Files selected for processing (24)
- bin/sozo/src/commands/auth.rs (5 hunks)
- bin/sozo/src/commands/dev.rs (2 hunks)
- bin/sozo/src/commands/migrate.rs (6 hunks)
- bin/sozo/src/commands/options/ipfs.rs (1 hunks)
- crates/dojo-world/Cargo.toml (1 hunks)
- crates/dojo/world/Cargo.toml (1 hunks)
- crates/dojo/world/src/config/environment.rs (2 hunks)
- crates/dojo/world/src/config/ipfs_config.rs (1 hunks)
- crates/dojo/world/src/config/mod.rs (1 hunks)
- crates/dojo/world/src/config/profile_config.rs (6 hunks)
- crates/dojo/world/src/constants.rs (1 hunks)
- crates/dojo/world/src/lib.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_storage.rs (1 hunks)
- crates/dojo/world/src/metadata/metadata_test.rs (1 hunks)
- crates/dojo/world/src/metadata/mod.rs (1 hunks)
- crates/dojo/world/src/remote/events_to_remote.rs (5 hunks)
- crates/dojo/world/src/services/ipfs_service.rs (1 hunks)
- crates/dojo/world/src/services/mock_upload_service.rs (1 hunks)
- crates/dojo/world/src/services/mod.rs (1 hunks)
- crates/dojo/world/src/services/upload_service.rs (1 hunks)
- crates/sozo/ops/Cargo.toml (1 hunks)
- crates/sozo/ops/src/migrate/mod.rs (2 hunks)
- crates/sozo/ops/src/tests/migration.rs (4 hunks)
- crates/torii/graphql/Cargo.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (19)
- bin/sozo/src/commands/dev.rs
- bin/sozo/src/commands/migrate.rs
- crates/dojo-world/Cargo.toml
- crates/dojo/world/Cargo.toml
- crates/dojo/world/src/config/environment.rs
- crates/dojo/world/src/config/ipfs_config.rs
- crates/dojo/world/src/config/mod.rs
- crates/dojo/world/src/config/profile_config.rs
- crates/dojo/world/src/constants.rs
- crates/dojo/world/src/lib.rs
- crates/dojo/world/src/metadata/mod.rs
- crates/dojo/world/src/remote/events_to_remote.rs
- crates/dojo/world/src/services/ipfs_service.rs
- crates/dojo/world/src/services/mock_upload_service.rs
- crates/dojo/world/src/services/mod.rs
- crates/dojo/world/src/services/upload_service.rs
- crates/sozo/ops/Cargo.toml
- crates/sozo/ops/src/tests/migration.rs
- crates/torii/graphql/Cargo.toml
🔇 Additional comments (10)
bin/sozo/src/commands/options/ipfs.rs (2)
1-116
: Ohayo! Overall implementation looks solid with some security considerations
The implementation aligns well with the PR objectives for improving metadata handling. The code is well-structured with good test coverage. However, there are a few areas for improvement:
- Password security (critical)
- Input validation (good to have)
- Test environment cleanup (essential)
Once these are addressed, this will be a robust implementation, sensei!
22-26
:
Ohayo! Consider more secure password input methods
The current implementation exposes passwords via command-line arguments, which can be visible in process lists (ps aux). This security concern was previously raised and remains valid.
Consider implementing one of these alternatives:
- Interactive password prompt
- Password file
- Environment file
- Secret management service
Would you like me to provide an example implementation using a more secure method, sensei?
crates/dojo/world/src/metadata/metadata_storage.rs (1)
70-87
: Verify change detection behavior
Ohayo, sensei! Let's verify that the change detection logic works as expected for the PR objectives.
✅ Verification successful
Change detection behavior is properly implemented
Ohayo, sensei! After analyzing the test implementation, I can confirm that the change detection behavior is working correctly. The tests in metadata_test.rs verify three key scenarios:
- Initial upload with zero hash
- No upload when hash matches
- Re-upload when hash differs
The test coverage shows proper implementation of the hash-based change detection logic, with both positive and negative test cases.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the implementation of metadata change detection
# Check for test coverage of change detection
echo "Checking test coverage for metadata change detection..."
rg -l "upload_if_changed.*current_hash" --type rust
# Look for potential test cases
echo "Looking for test cases..."
ast-grep --pattern 'fn $test_name() {
$$$
upload_if_changed($$$)
$$$
}'
# Check for error handling in tests
echo "Checking error handling in tests..."
rg "assert.*Err.*upload_if_changed" --type rust
Length of output: 529
Script:
#!/bin/bash
# Let's examine the test implementation in metadata_test.rs
echo "Checking metadata test implementation..."
cat crates/dojo/world/src/metadata/metadata_test.rs
# Let's also check the usage in migrate/mod.rs
echo "Checking migrate implementation..."
cat crates/sozo/ops/src/migrate/mod.rs
Length of output: 38662
crates/dojo/world/src/metadata/metadata_test.rs (2)
139-194
: Excellent test coverage, sensei! 🎯
The test thoroughly verifies:
- Initial upload behavior
- Hash-based change detection
- Content integrity after upload
- All metadata fields
1-194
: Implementation aligns perfectly with PR objectives! 🎯
The tests effectively validate that:
- Metadata is uploaded only when changed (fixes World metadatas no uploaded after fail #2096)
- Re-uploads work correctly after failures
- Content integrity is maintained
This implementation should successfully resolve the issues with world metadata uploads.
bin/sozo/src/commands/auth.rs (4)
10-10
: LGTM! Clean import of the WORLD constant.
The import of WORLD from dojo_world::constants improves code clarity by using a semantic constant instead of magic values.
421-428
: Enhanced permission retrieval for world resources.
The use of the WORLD constant in permission retrieval makes the code more explicit about its intent when dealing with world-level permissions.
792-792
: LGTM! Comprehensive test coverage for world constant.
The test case properly verifies the world resource handling using the WORLD constant.
309-314
: Improved world resource handling in clone_permissions.
The replacement of string formatting logic with a conditional check against the WORLD constant enhances readability and maintainability. The consistent handling of world resources across the codebase reduces the chance of bugs.
Let's verify the consistent usage of the WORLD constant:
✅ Verification successful
Let me search for the actual usage of the WORLD constant to understand its role better.

Ohayo sensei! After analyzing the codebase, I can now provide a clear verification of the review comment.

Consistent WORLD constant usage confirmed across the codebase

The codebase shows consistent usage of the WORLD constant (defined as Felt::ZERO) in the world contract implementation. The conditional check against WORLD in the formatting logic is correctly aligned with the codebase's conventions:

- The WORLD constant is properly defined in dojo_world::constants::WORLD as Felt::ZERO
- The formatting pattern if r != &WORLD { format!("{:#066x}", r) } else { "World".to_string() } is consistent with how the world resource is handled throughout the codebase
- This pattern is used in permission management, resource handling, and world contract interactions
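The quoted formatting pattern can be exercised in isolation. In this sketch, Felt is stood in by a plain u128 and WORLD by zero, mirroring (but not reproducing) the starknet types:

```rust
// Stand-in for starknet's Felt; in the real crate WORLD is
// dojo_world::constants::WORLD, defined as Felt::ZERO.
type Felt = u128;
const WORLD: Felt = 0;

// The exact pattern used in auth.rs: hex-format non-world selectors,
// print the world selector as the literal string "World".
fn resource_label(r: &Felt) -> String {
    if r != &WORLD { format!("{:#066x}", r) } else { "World".to_string() }
}

fn main() {
    assert_eq!(resource_label(&WORLD), "World");

    let label = resource_label(&0x456);
    assert!(label.starts_with("0x"));
    // "0x" plus 64 zero-padded hex digits, matching the {:#066x} width.
    assert_eq!(label.len(), 66);
}
```

The 66-character width exists so every non-world selector prints at a fixed length, which keeps permission listings aligned.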
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining Felt::ZERO usage that might need to be replaced with WORLD
rg "Felt::ZERO" -A 2 -B 2
Length of output: 69340
Script:
#!/bin/bash
# Search for WORLD constant usage
ast-grep --pattern 'const WORLD: Felt = Felt::ZERO'
# Search for WORLD constant references
rg "WORLD" -B 2 -A 2
Length of output: 41127
crates/sozo/ops/src/migrate/mod.rs (1)
23-23
: LGTM! Well-organized imports
The new imports are properly organized and necessary for the metadata handling functionality.
Also applies to: 27-29, 33-33, 35-35
Great job on that @remybar!
Just pushed 2 fixes:
- The world was always uploaded since the world was searched into resources, but it's not. It's directly set into the diff.world_info.
- The IPFS config on the dojo_dev.toml file wasn't correct, hence no upload.
is out and some breaking changes there, I've set the version strictly to 2.8.4
to avoid cargo trying to use 2.9.0
(invalid semver on the upstream cairo repo).
Thank you again for the effort and work on this PR + documentation.
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (5)
examples/spawn-and-move/dojo_dev.toml (3)
6-17: Ohayo! Consider enhancing model descriptions, sensei!

While the model descriptions are clear, they could be more detailed to help other developers understand:
- What fields/attributes each model contains
- How they relate to each other
- Any constraints or validation rules
Example of more detailed descriptions:
-description = "Message sent by a player"
+description = "Message sent by a player, containing the message content and timestamp"
-description = "position of a player in the world"
+description = "2D position (x,y coordinates) of a player in the world"
-description = "move of a player in the world"
+description = "Records player movement history with direction and timestamp"
`18-20`: Enhance event description with emitted data details, sensei!

The event description could be more specific about what data is emitted when a player moves.

-description = "when a player has moved"
+description = "Emitted when a player moves, including player_id, from_position, to_position"
`22-24`: Document available actions in contract description, sensei!

The contract description should list the specific actions available to players.

-description = "set of actions for a player"
+description = "Player actions including: move(direction), send_message(content)"

crates/sozo/ops/src/migrate/mod.rs (1)
`116-171`: Ohayo sensei! Consider pre-allocating the invoker calls vector.

The implementation is solid, but we can optimize memory allocation by pre-calculating the total number of potential calls.

-    let mut invoker = Invoker::new(&self.world.account, self.txn_config);
+    let total_calls = 1 + // world
+        self.diff.profile_config.contracts.as_ref().map_or(0, |c| c.len())
+        + self.diff.profile_config.models.as_ref().map_or(0, |m| m.len())
+        + self.diff.profile_config.events.as_ref().map_or(0, |e| e.len());
+    let mut invoker = Invoker::with_capacity(&self.world.account, self.txn_config, total_calls);

examples/simple/manifest_dev.json (1)
Ohayo! The manifest files need synchronization, sensei!
The manifest_sepolia.json file is using outdated contract references that don't match the updated ones in manifest_dev.json. Please update the contract configurations in manifest_sepolia.json to match the new addresses and class hashes.
`examples/simple/manifest_sepolia.json`: Update world class hash from `0x2f92b70bd2b5a40ddef12c55257f245176870b25c7eb0bd7a60cf1f1f2fbf0e` to match manifest_dev.json.

🔗 Analysis chain
Line range hint `1255-1865`: Verify contract reference updates across the codebase, sensei!

The contract configurations have been updated with new addresses and class hashes. Let's verify these changes are consistently referenced throughout the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify contract references are updated consistently
# Search for old contract addresses in the codebase
rg -l "0x7d32611d1b96df37b041f0a285bf340227cebc144d46c2f34cedd81e2fd5239"
# Search for contract class hash references
rg -l "0x2f92b70bd2b5a40ddef12c55257f245176870b25c7eb0bd7a60cf1f1f2fbf0e"
Length of output: 185
Script:
#!/bin/bash
# Let's check the content of manifest_sepolia.json to see if it needs updating
cat examples/simple/manifest_sepolia.json | grep -A 5 -B 5 "0x2f92b70bd2b5a40ddef12c55257f245176870b25c7eb0bd7a60cf1f1f2fbf0e"
# Also check for any other manifest files that might need updating
fd manifest
Length of output: 598
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (6)
- `Cargo.lock` is excluded by `!**/*.lock`
- `crates/torii/types-test/Scarb.lock` is excluded by `!**/*.lock`
- `examples/simple/Scarb.lock` is excluded by `!**/*.lock`
- `examples/spawn-and-move/Scarb.lock` is excluded by `!**/*.lock`
- `spawn-and-move-db.tar.gz` is excluded by `!**/*.gz`
- `types-test-db.tar.gz` is excluded by `!**/*.gz`
📒 Files selected for processing (7)
- `Cargo.toml` (1 hunks)
- `bin/sozo/src/commands/migrate.rs` (6 hunks)
- `bin/sozo/tests/test_data/policies.json` (1 hunks)
- `crates/dojo/world/src/remote/events_to_remote.rs` (5 hunks)
- `crates/sozo/ops/src/migrate/mod.rs` (2 hunks)
- `examples/simple/manifest_dev.json` (7 hunks)
- `examples/spawn-and-move/dojo_dev.toml` (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- bin/sozo/src/commands/migrate.rs
- crates/dojo/world/src/remote/events_to_remote.rs
🔇 Additional comments (9)
examples/spawn-and-move/dojo_dev.toml (1)
`35-37`: The IPFS credentials should never be committed to version control. This is a serious security risk.
Recommended actions:
- Immediately rotate these IPFS credentials as they are now compromised
- Move credentials to environment variables or a secure secrets management system
- Update the example to use placeholder values
Apply this diff to use environment variables instead:
ipfs_config.url = "https://ipfs.infura.io:5001"
-ipfs_config.username = "2EBrzr7ZASQZKH32sl2xWauXPSA"
-ipfs_config.password = "12290b883db9138a8ae3363b6739d220"
+ipfs_config.username = "${IPFS_USERNAME}"
+ipfs_config.password = "${IPFS_PASSWORD}"
Also, please add a `.env.example` file to show the required environment variables without real values.
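To illustrate the environment-variable approach, a hypothetical sketch: the lookup is injected as a closure so the credential logic is testable, and the variable names `IPFS_USERNAME` / `IPFS_PASSWORD` are illustrative, not an existing sozo convention.

```rust
// Resolve IPFS credentials from an injected lookup (e.g. the process
// environment) rather than from values committed to dojo_dev.toml.
// Returns None unless both credentials are present.
fn ipfs_credentials(get: impl Fn(&str) -> Option<String>) -> Option<(String, String)> {
    Some((get("IPFS_USERNAME")?, get("IPFS_PASSWORD")?))
}

fn main() {
    // In the real flow the lookup would read the process environment.
    match ipfs_credentials(|k| std::env::var(k).ok()) {
        Some((user, _)) => println!("using IPFS user {user}"),
        None => eprintln!("IPFS_USERNAME / IPFS_PASSWORD not set"),
    }
}
```

Injecting the lookup also lets tests supply fake credentials without mutating the real environment.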
bin/sozo/tests/test_data/policies.json (1)
`3-4`: Verify the upgrade permission distribution across multiple targets.
Ohayo! The presence of `upgrade` permissions across multiple targets (`0x72a9f5...`, `0x7e8a52...`, `0x50b149...`, and `0x52ee4d...`) requires careful review.
Multiple upgrade entry points increase the attack surface. Each target with upgrade permissions represents a potential vector for unauthorized contract modifications.
Consider:
- Implementing a time-lock mechanism for upgrades
- Requiring multi-signature approval for upgrade operations
- Documenting the specific scope and purpose of each upgrade target
Also applies to: 35-44, 127-128
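To make the time-lock suggestion concrete, here is a minimal, hypothetical sketch; this is not dojo code, and the names, types, and delay value are assumptions:

```rust
// One day, in seconds: an upgrade queued at time T is executable at T + DELAY.
const DELAY: u64 = 86_400;

struct PendingUpgrade {
    new_class_hash: u64,
    queued_at: u64,
}

impl PendingUpgrade {
    fn executable_at(&self) -> u64 {
        self.queued_at + DELAY
    }

    /// Returns Some(new class hash) only once the delay has elapsed.
    fn try_execute(&self, now: u64) -> Option<u64> {
        (now >= self.executable_at()).then_some(self.new_class_hash)
    }
}

fn main() {
    let p = PendingUpgrade { new_class_hash: 0xabc, queued_at: 1_000 };
    // Too early: the upgrade is queued but cannot run yet.
    assert!(p.try_execute(1_000).is_none());
    println!("upgrade executable at t = {}", p.executable_at());
}
```

The delay gives owners a window to detect and cancel a malicious queued upgrade before it takes effect.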
✅ Verification successful
The initial scan shows upgrade functionality in the codebase, but we need to verify the specific policy distribution and permission model. This script will help us understand the upgrade permission structure and its implementation across different targets.
Ohayo! The initial scan shows multiple upgrade targets and basic upgrade functionality, but we need to verify if there are any security measures implemented around these upgrades. This script will help us understand the security model and event emission patterns around upgrades, sensei!
Ohayo sensei! My previous search failed due to incorrect file type specification. Let's try again with proper file extensions to find security measures, tests, events, and implementations related to the upgrade functionality!
Multiple upgrade targets are properly secured with role-based access control and event emission
Ohayo sensei! After analyzing the codebase, I can verify that the upgrade system has several security measures in place:
- Role-based access control through owner/writer permissions
- Event emission for all upgrades (WorldUpgraded, ContractUpgraded, ModelUpgraded, EventUpgraded)
- Validation checks for class hashes and permissions
- Comprehensive test coverage for upgrade scenarios including unauthorized attempts
The multiple upgrade targets are not a security concern as they are protected by:
- Ownership checks (`not_owner_upgrade` error handling)
- Zero-address validation
- Schema and layout validation during upgrades
- Event logging for tracking changes
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any upgrade-related code patterns or configurations
# that might indicate the scope and impact of these upgrade permissions
echo "Checking upgrade-related patterns..."
rg -A 5 "upgrade.*contract" --glob '!*.json'
echo "Checking for upgrade-related configurations..."
fd -e toml -e yaml -x grep -l "upgrade" {} \;
Length of output: 16919
Script:
#!/bin/bash
# Let's check for upgrade-related policies and permissions in JSON files
echo "Checking upgrade policies in JSON files..."
rg -A 5 '"method":\s*"upgrade"' --type json
echo "Checking for upgrade targets..."
rg '"target"' --type json
# Check for any security or permission related files
echo "Checking for security configurations..."
fd -e json -e cairo -x grep -l "permission" {} \;
Length of output: 138653
Script:
#!/bin/bash
# Let's check for upgrade-related security implementations
echo "Checking for upgrade security patterns..."
rg -A 5 "upgrade.*security|security.*upgrade|timelock|multi.*sig" --type cairo
echo "Checking for upgrade-related tests..."
rg -A 5 "test.*upgrade" --type cairo
echo "Checking for upgrade-related events..."
rg -A 5 "event.*upgrade|upgrade.*event" --type cairo
Length of output: 529
Script:
#!/bin/bash
# Let's check for upgrade security patterns with correct file extensions
echo "Checking for upgrade security patterns..."
rg -A 5 "upgrade.*security|security.*upgrade|timelock|multi.*sig" --glob '*.rs' --glob '*.cairo'
echo "Checking for upgrade-related tests..."
rg -A 5 "test.*upgrade" --glob '*.rs' --glob '*.cairo'
echo "Checking for upgrade-related events..."
rg -A 5 "event.*upgrade|upgrade.*event" --glob '*.rs' --glob '*.cairo'
# Check for upgrade-related structs and implementations
echo "Checking for upgrade-related implementations..."
rg -A 5 "struct.*Upgrade|impl.*upgrade|fn.*upgrade" --glob '*.rs' --glob '*.cairo'
Length of output: 84311
Cargo.toml (1)
Line range hint `249-250`: Ohayo! Let's address that TODO comment, sensei! 📝

The `ipfs-api-backend-hyper` dependency is using a git source with a specific revision, and there's a TODO comment about checking whether this is still necessary. Consider investigating if a published version from crates.io could be used instead.
Let's check if there's a suitable published version:
crates/sozo/ops/src/migrate/mod.rs (3)
`23-23`: LGTM! Well-organized imports.
The new imports are properly organized and necessary for the metadata handling functionality.
Also applies to: 27-29, 33-33, 35-35
`173-207`: Handle selector computation error and optimize vector allocation.
Ohayo sensei! Two improvements needed:
- The selector computation error is not handled properly
- Vector allocation could be optimized
Apply these improvements:
async fn upload_metadata_from_resource_config(
&self,
service: &mut impl UploadService,
config: &[ResourceConfig],
) -> anyhow::Result<Vec<Call>> {
- let mut calls = vec![];
+ let mut calls = Vec::with_capacity(config.len());
for item in config {
- let selector = dojo_types::naming::compute_selector_from_tag_or_name(&item.tag);
+ let selector = dojo_types::naming::compute_selector_from_tag_or_name(&item.tag)
+ .map_err(|e| anyhow!("Failed to compute selector for tag {}: {}", item.tag, e))?;
Likely invalid or redundant comment.
`129-139`: Verify IPFS upload success and metadata integrity.
Ohayo sensei! Consider adding verification for successful IPFS uploads and metadata integrity.
Run this script to verify IPFS uploads are working correctly:
✅ Verification successful
Let me analyze the IPFS upload verification based on the script results. I can see that:
- The code has proper error handling with `await?`
- There are comprehensive tests in `metadata_test.rs` covering:
  - Upload success cases
  - Hash comparison scenarios
  - No-change scenarios (avoiding unnecessary uploads)
  - Error cases with proper assertions
Let me verify one more aspect about the error handling implementation:
IPFS uploads and metadata integrity are properly handled
Ohayo sensei! After thorough verification, I can confirm that the IPFS upload implementation is robust and secure:
- Proper error handling with `context` annotations for detailed error messages
- Hash-based change detection to prevent unnecessary uploads
- Comprehensive test coverage including success, no-change, and error scenarios
- Secure URI handling with validation for both file and IPFS URIs
- Proper error propagation using the `Result` type throughout the upload chain
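The hash-based change detection described above can be sketched roughly as follows. This is a standalone illustration, not the actual `MetadataStorage` implementation: `DefaultHasher` stands in for whatever hash the real code uses, and the function names are assumptions.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash the serialized metadata so two identical payloads always compare equal.
fn metadata_hash(serialized: &str) -> u64 {
    let mut h = DefaultHasher::new();
    serialized.hash(&mut h);
    h.finish()
}

/// Returns Some(new_hash) when an upload is needed, None when unchanged.
fn upload_if_changed(serialized: &str, current_hash: Option<u64>) -> Option<u64> {
    let new_hash = metadata_hash(serialized);
    if current_hash == Some(new_hash) {
        None // metadata unchanged: skip the IPFS upload and the on-chain call
    } else {
        Some(new_hash) // caller uploads to IPFS and registers the new hash
    }
}

fn main() {
    // First migration: no hash is stored yet, so the metadata is uploaded.
    let h = upload_if_changed("{\"name\":\"world\"}", None).expect("first upload");
    // Re-running with unchanged metadata is a no-op.
    assert!(upload_if_changed("{\"name\":\"world\"}", Some(h)).is_none());
    println!("unchanged metadata skipped");
}
```

Storing the hash on-chain (the new `metadata_hash` field) is what lets a later migration decide locally whether anything changed.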
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify IPFS upload functionality and metadata integrity
# Test: Check if IPFS-related code paths are properly tested
rg -A 5 "upload_if_changed|set_metadata_getcall"
# Test: Look for any error handling patterns in IPFS uploads
ast-grep --pattern 'upload_if_changed($_).await?'
Length of output: 5995
Script:
#!/bin/bash
# Check the implementation of upload_if_changed
rg -A 15 "impl.*MetadataStorage"
# Look for any custom error types related to uploads
rg -A 5 "Error|Result" crates/dojo/world/src/metadata/
Length of output: 5707
examples/simple/manifest_dev.json (3)
`3-4`: Ohayo! World configuration changes look good, sensei!
The updated class hash and address values are properly formatted and align with the world metadata improvements.
`95-98`: Nice addition of the `metadata_hash` field, sensei!
The new field enhances the ResourceMetadata struct by adding hash verification capabilities, which directly supports the PR's goal of improving metadata handling.
`1009-1013`: Excellent event structure update, sensei!
The addition of the hash field to MetadataUpdate event maintains consistency with the ResourceMetadata struct changes, enabling proper tracking of metadata updates.
* feat: update metadata page

  Update the Metadata page according to dojoengine/dojo#2691.

* feat: add dojo profile config for IPFS

---------

Co-authored-by: glihm <[email protected]>
…ojoengine#2691)

* feat(sozo): upload world/resources metadata if they changed
* inject ipfs service dependency
* get ipfs credentials from env/command-line
* add --fix option to rust_fmt.sh
* after review
* fix: ensure no upload if no change on the world
* fix: fix warning message

---------

Co-authored-by: glihm <[email protected]>
Description
Closes #2096, #1907.
Tests
Added to documentation?
Checklist

- (`scripts/prettier.sh`, `scripts/rust_fmt.sh`, `scripts/cairo_fmt.sh`)
- (`scripts/clippy.sh`, `scripts/docs.sh`)

Summary by CodeRabbit
Release Notes

New Features

- `IpfsOptions` and `IpfsConfig` structures.
- `IpfsService` for uploading metadata to IPFS.
- `ProfileConfig` to manage multiple resource configurations for models, contracts, and events.
- `metadata_hash` added to various structures for improved metadata handling.

Improvements

- `MetadataUpdate` events updated to reflect new metadata structures.
- `WorldMetadata` and `ResourceMetadata` structures enhanced to support additional metadata fields.

Tests

Bug Fixes